Using the Web to Obtain Frequencies for Unseen Bigrams

This article shows that the web can be employed to obtain frequencies for bigrams that are unseen in a given corpus. We describe a method for retrieving counts for adjective-noun, noun-noun, and verb-object bigrams from the web by querying a search engine. We evaluate this method by demonstrating a high correlation between web frequencies and corpus frequencies, a reliable correlation between web frequencies and plausibility judgments, a reliable correlation between web frequencies and frequencies recreated using class-based smoothing, and a good performance of web frequencies in a pseudo-disambiguation task.

In two recent papers, Banko and Brill criticize the fact that current NLP algorithms are typically optimized, tested, and compared on fairly small data sets, even though data sets several orders of magnitude larger are available, at least for some NLP tasks. Banko and Brill experiment with context-sensitive spelling correction, a task for which large amounts of data can be obtained straightforwardly, as no manual annotation is required. They demonstrate that the learning algorithms typically used for spelling correction benefit significantly from larger training sets and that their performance shows no sign of reaching an asymptote as the size of the training set increases.

Arguably the largest data set that is available for NLP is the web, which currently consists of at least 3,033 million pages. Data retrieved from the web therefore provide enormous potential for training NLP algorithms, if Banko and Brill's findings for spelling correction generalize; potential applications include tasks that involve word n-grams and simple surface syntax.

There is a small body of existing research that tries to harness the potential of the web for NLP. Grefenstette and Nioche and Jones and Ghani use the web to generate corpora for languages for which electronic resources are scarce, and Resnik describes a method for mining the web in order to obtain bilingual texts. Mihalcea and Moldovan and Agirre and Martinez use the web for word sense disambiguation, Volk proposes a method for resolving PP attachment ambiguities based on web data, Markert, Nissim, and Modjeska use the web for the resolution of nominal anaphora, and Zhu and Rosenfeld use web-based n-gram counts to improve language modeling.

A particularly interesting application is proposed by Grefenstette, who uses the web for example-based machine translation. His task is to translate compounds from French into English, with corpus evidence serving as a filter for candidate translations. An example is the French compound groupe de travail. There are five translations of groupe and three translations of travail, resulting in 15 possible candidate translations. Only one of them, namely work group, has a high corpus frequency, which makes it likely that this is the correct translation into English. Grefenstette observes that this approach suffers from an acute data sparseness problem if the counts are obtained from a conventional corpus.
However, as Grefenstette demonstrates, this problem can be overcome by obtaining counts through web searches instead of relying on a corpus. Grefenstette therefore effectively uses the web as a way of obtaining counts for compounds that are sparse in a given corpus. Although this is an important initial result, it raises the question of the generality of the proposed approach to overcoming data sparseness. It remains to be shown that web counts are generally useful for approximating data that are sparse or unseen in a given corpus. It seems possible, for instance, that Grefenstette's results are limited to his particular task or to his particular linguistic phenomenon. Another potential problem is the fact that web counts are far more noisy than counts obtained from a well-edited, carefully balanced corpus; the effect of this noise on the usefulness of the web counts is largely unexplored.

Zhu and Rosenfeld use web-based n-gram counts for language modeling. They obtain a standard language model from a 103-million-word corpus and employ web-based counts to interpolate unreliable trigram estimates. They compare their interpolated model against a baseline trigram language model and show that the interpolated model yields an absolute reduction in word error rate of .93% over the baseline. Zhu and Rosenfeld's results demonstrate that the web can be a source of data for language modeling. It is not clear, however, whether their result carries over to tasks that employ linguistically meaningful word sequences, rather than simply adjacent words. Furthermore, Zhu and Rosenfeld do not undertake any studies that evaluate web frequencies directly; this could be done, for instance, by comparing web frequencies to corpus frequencies or to frequencies recreated by smoothing techniques.

The aim of the present article is to generalize Grefenstette's and Zhu and Rosenfeld's findings by testing the hypothesis that the web can be employed to obtain frequencies for bigrams that are unseen in a given corpus. Instead of having a particular task in mind, we rely on sets of bigrams that are randomly selected from a corpus. We use a web-based approach for bigrams that encode meaningful syntactic relations and obtain web frequencies not only for noun-noun bigrams but also for adjective-noun and verb-object bigrams. We thus explore whether this approach generalizes to different predicate-argument combinations. We evaluate our web counts in four ways: comparison with actual corpus frequencies from two different corpora, comparison with human plausibility judgments, comparison with frequencies recreated using class-based smoothing, and performance in a pseudo-disambiguation task on data sets from the literature.

The data sets used in the present experiment were obtained from the British National Corpus (BNC). The BNC is a large synchronic corpus consisting of 90 million words of text and 10 million words of speech. The BNC is a balanced corpus: the written part includes samples from newspapers, magazines, books, letters, and school and university essays, among other kinds of text. The spoken part consists of spontaneous conversations recorded from volunteers balanced by age, region, and social class; other samples of spoken language are also included, ranging from business or government meetings to radio shows and phone-ins. The corpus represents many different styles and varieties and is not limited to any particular subject field, genre, or register. For the present study, the BNC was used to extract data for three types of predicate-argument relations.
The first type is adjective-noun bigrams, in which we assume that the noun is the predicate that takes the adjective as its argument. The second predicate-argument type we investigated is noun-noun compounds; for these we assume that the rightmost noun is the predicate that selects the leftmost noun as its argument. Third, we included verb-object bigrams, in which the verb is the predicate that selects the object as its argument. We considered only direct NP objects; the bigram consists of the verb and the head noun of the object. For each of the three predicate-argument relations, we gathered two data sets, one containing seen bigrams and one with unseen bigrams.

For the seen adjective-noun bigrams, we used the data of Lapata, McDonald, and Keller, who compiled a set of 90 bigrams as follows. First, 30 adjectives were randomly chosen from a part-of-speech-tagged and lemmatized version of the BNC, so that each adjective had exactly two senses according to WordNet and was unambiguously tagged as an adjective 98.6% of the time. Lapata, McDonald, and Keller used the part-of-speech-tagged version that is made available with the BNC and was tagged using CLAWS4, a probabilistic part-of-speech tagger with an error rate ranging from 3% to 4%. The lemmatized version of the corpus was obtained using Karp et al.'s morphological analyzer. The 30 adjectives ranged in BNC frequency from 19 to 491 per million words; that is, they covered the whole range from fairly infrequent to highly frequent items. Gsearch, a chart parser that detects syntactic patterns in a tagged corpus by exploiting a user-specified context-free grammar and a syntactic query, was used to extract all nouns occurring in a head-modifier relationship with one of the 30 adjectives. Examples of the syntactic patterns the parser identified are given in Table 1. In the case of adjectives modifying compound nouns, only sequences of two nouns were included, and the rightmost-occurring noun was considered the head. Bigrams involving proper nouns or low-frequency nouns were discarded. This was necessary because the bigrams were used in experiments involving native speakers, and we wanted to reduce the risk of including words unfamiliar to the experimental subjects.

For each adjective, the set of bigrams was divided into three frequency bands based on an equal division of the range of log-transformed co-occurrence frequencies. Then one bigram was chosen at random from each band. This procedure ensures that the whole range of frequencies is represented in our sample. Lapata, Keller, and McDonald compiled a set of 90 unseen adjective-noun bigrams using the same 30 adjectives. For each adjective, Gsearch was used to compile a list of all nouns that did not co-occur in a head-modifier relationship with the adjective. Again, proper nouns and low-frequency nouns were discarded from this list. Then each adjective was paired with three randomly chosen nouns from its list of non-co-occurring nouns. Examples of seen and unseen adjective-noun bigrams are shown in Table 2.

For the present study, we applied the procedure used by Lapata, McDonald, and Keller and Lapata, Keller, and McDonald to noun-noun bigrams and to verb-object bigrams, creating a set of 90 seen and 90 unseen bigrams for each type of predicate-argument relationship. More specifically, 30 nouns and 30 verbs were chosen according to the same criteria proposed for the adjective study. All nouns modifying one of the 30 nouns were extracted from the BNC using a heuristic from Lauer that looks for consecutive pairs of nouns that are neither preceded nor succeeded by another noun.
Lauer's heuristic effectively avoids identifying as two-word compounds noun sequences that are part of a larger compound. The heuristic identifies the set of compounds

C = \{(w_2, w_3) \mid \langle w_1\, w_2\, w_3\, w_4 \rangle \wedge w_2, w_3 \in N \wedge w_1, w_4 \notin N\}

Here, ⟨w1 w2 w3 w4⟩ denotes the occurrence of a sequence of four words, and N is the set of words tagged as nouns in the corpus; C is the set of compounds identified by Lauer's heuristic.

Verb-object bigrams for the 30 preselected verbs were obtained from the BNC using Cass, a robust chunk parser designed for the shallow analysis of noisy text. The parser recognizes chunks and simplex clauses using a regular expression grammar and a part-of-speech-tagged corpus, without attempting to resolve attachment ambiguities. It comes with a large-scale grammar for English and a built-in tool that extracts predicate-argument tuples out of the parse trees that Cass produces. The parser's output was post-processed to remove bracketing errors and errors in identifying chunk categories that could potentially result in bigrams whose members do not stand in a verb-argument relationship. Tuples containing verbs or nouns attested in a verb-argument relationship only once were eliminated. Particle verbs were retained only if the particle was adjacent to the verb. Verbs followed by the preposition by and a head noun were considered instances of verb-subject relations. It was assumed that PPs adjacent to the verb headed by any of the prepositions in, to, for, with, on, at, from, of, into, through, and upon were prepositional objects; only nominal heads were retained from the objects returned by the parser.

As in the adjective study, noun-noun bigrams and verb-object bigrams with proper nouns or low-frequency nouns were discarded. The sets of noun-noun and verb-object bigrams were divided into three frequency bands, and one bigram was chosen at random from each band. The procedure described by Lapata, Keller, and McDonald was followed for creating sets of unseen noun-noun and verb-object bigrams: for each noun or verb, we compiled a list of all nouns with which it did not co-occur within a noun-noun or verb-object bigram in the BNC. Again, Lauer's heuristic and Abney's partial parser were used to identify bigrams, and proper nouns and low-frequency nouns were excluded. For each noun and verb, three bigrams were formed by pairing it with a noun randomly selected from the set of the non-co-occurring nouns for that noun or verb. Table 2 lists examples of the seen and unseen noun-noun and verb-object bigrams generated by this procedure.

The extracted bigrams are in several respects an imperfect source of information about adjective-noun or noun-noun modification and verb-object relations. First, notice that both Gsearch and Cass detect syntactic patterns on part-of-speech-tagged corpora; this means that parsing errors are likely to result because of tagging mistakes. Second, even if one assumes perfect tagging, the heuristic nature of our extraction procedures may introduce additional noise or miss bigrams for which detailed structural information would be needed. For instance, our method for extracting adjective-noun pairs ignores cases in which the adjective modifies noun sequences of length greater than two, and the heuristic above considers only two-word noun sequences. Abney's chunker recognizes basic syntactic units without resolving attachment ambiguities or recovering missing information; although parsing is robust and fast, the identified verb-argument relations are undoubtedly somewhat noisy, given the errors inherent in the part-of-speech tagging and chunk recognition procedure. When evaluated against manually annotated data, Abney's parser identified chunks with 87.9% precision and 87.1% recall; the parser further achieved a per-word accuracy of 92.1%.
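To make the compound extraction step concrete, the following is a minimal sketch of the two-word-compound heuristic defined earlier in this section, assuming the corpus is available as a list of (word, POS-tag) pairs. The function name and the noun tag set are illustrative assumptions, not details taken from Lauer's or our implementation.

```python
def two_word_compounds(tagged_tokens, noun_tags=frozenset({"NN", "NNS"})):
    """Return the set C of (w2, w3) pairs such that w2 and w3 are nouns
    and the surrounding words w1 and w4 are not (Lauer's heuristic).

    tagged_tokens: list of (word, pos_tag) pairs; the tag set is an
    assumption, not the tag set used in the original study."""
    is_noun = [tag in noun_tags for _, tag in tagged_tokens]
    compounds = set()
    for i in range(len(tagged_tokens) - 1):
        left_is_noun = i > 0 and is_noun[i - 1]                        # w1
        right_is_noun = i + 2 < len(tagged_tokens) and is_noun[i + 2]  # w4
        if is_noun[i] and is_noun[i + 1] and not left_is_noun and not right_is_noun:
            compounds.add((tagged_tokens[i][0].lower(),
                           tagged_tokens[i + 1][0].lower()))
    return compounds
```

Only noun pairs flanked by non-nouns survive, so a noun sequence of length three or more contributes no pairs at all, which is exactly the behavior described above.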
Despite their imperfect output, heuristic methods for the extraction of syntactic relations are relatively common in statistical NLP. Several statistical models employ frequencies obtained from the output of partial parsers and other heuristic methods; these include models for disambiguating the attachment site of prepositional phrases, models for interpreting compound nouns and polysemous adjectives, models for the induction of selectional preferences, methods for automatically clustering words according to their distribution in particular syntactic contexts, automatic thesaurus extraction, and similarity-based models of word co-occurrence probabilities. In this article we investigate alternative ways of obtaining bigram frequencies that are potentially useful for such models, despite the fact that some of these bigrams are identified in a heuristic manner and may be noisy.

We also obtained corpus counts from a second corpus, the North American News Text Corpus (NANTC). This corpus differs in several important respects from the BNC. It is substantially larger, as it contains 350 million words of text. Also, it is not a balanced corpus, as it contains material from only one genre, namely news text; however, the text originates from a variety of sources. Whereas the BNC covers British English, the NANTC covers American English. All these differences mean that the NANTC provides a second, independent standard against which to compare web counts. At the same time, the correlation found between the counts obtained from the two corpora can serve as an upper limit for the correlation that we can expect between corpus counts and web counts.

The NANTC corpus was parsed using MINIPAR, a broad-coverage parser for English. MINIPAR employs a manually constructed grammar and a lexicon derived from WordNet, with the addition of proper names; lexicon entries contain part-of-speech and subcategorization information. The grammar is represented as a network of 35 nodes and 59 edges. MINIPAR employs a distributed chart-parsing algorithm: instead of a single chart, each node in the grammar network maintains a chart containing partially built structures belonging to the grammatical category represented by the node. Grammar rules are implemented as constraints associated with the nodes and edges. The output of MINIPAR is a dependency tree that represents the dependency relations between the words in a sentence. Table 3 shows a subset of the dependencies MINIPAR outputs for the sentence the fat cat ate the door mat. In contrast to Gsearch and Cass, MINIPAR produces all possible parses for a given sentence; the parses are ranked according to the product of the probabilities of their edges, and the most likely parse is returned. Lin evaluated the parser on the SUSANNE corpus, a domain-independent corpus of British English, and achieved a recall of 79% and a precision of 89% on the dependency relations; examples of such dependencies are the determiner of a noun (the as the det of mat) and the prenominal modifier of a noun (door as the nn modifier of mat).

For our experiments, we concentrated solely on adjective-noun, noun-noun, and verb-object relations. From the syntactic analysis provided by the parser, we extracted all occurrences of bigrams that were attested both in the BNC and in the NANTC corpus. In this way we obtained NANTC frequency counts for the bigrams that we had randomly selected from the BNC. Table 4 shows the NANTC counts for the set of seen bigrams from Table 2. Because of the differences in the extraction methodology and the text genre, we expected that some BNC bigrams would not be attested in the NANTC corpus. More precisely, zero frequencies were returned for 23 adjective-noun, 16 verb-noun, and 37 noun-noun bigrams.
The fact that more zero frequencies were observed for noun-noun bigrams than for the other two types is perhaps not surprising, considering the ease with which novel compounds are created. We adjusted the zero counts by setting them to 0.5; this was necessary because all further analyses were carried out on log-transformed frequencies.

Web counts for bigrams were obtained using a simple heuristic based on queries to the search engines AltaVista and Google. All search terms took into account the inflectional morphology of nouns and verbs. The search terms for verb-object bigrams matched not only cases in which the object was directly adjacent to the verb but also cases in which there was an intervening determiner. Search terms of this form were used for adjective-noun, noun-noun, and verb-object bigrams, respectively. Note that all searches were for exact matches, which means that the words in the search terms had to be directly adjacent to score a match; this is encoded by enclosing the search term in quotation marks. All our search terms were in lower case. We searched the whole web; that is, the queries were not restricted to pages in English.

Based on the web searches, we obtained bigram frequencies by adding up the number of pages that matched the morphologically expanded forms of the search terms. This process can be automated straightforwardly using a script that generates all the search terms for a given bigram, issues an AltaVista or Google query for each of the search terms, and then adds up the resulting number of matches for each bigram. We applied this process to all the bigrams in our data set, covering seen and unseen adjective-noun, noun-noun, and verb-object bigrams. The queries were carried out in January 2003.

For some bigrams that were unseen in the BNC, our web-based procedure returned zero counts; that is, there were no matches for those bigrams in the web searches. It is interesting to compare the web and the NANTC with respect to zero counts: both data sources are larger than the BNC and hence should be able to mitigate the data sparseness problem to a certain extent. Table 5 provides the number of zero counts for both web search engines and compares them to the number of bigrams that yielded no matches in the NANTC. We observe that the web counts are substantially less sparse than the NANTC counts: in the worst case, there are nine bigrams for which our web queries returned no matches, whereas up to 82 bigrams were unseen in the NANTC. Recall that the NANTC is 3.5 times larger than the BNC, which does not seem to be enough to substantially mitigate data sparseness. All further analyses were carried out on log-transformed frequencies; hence we adjusted zero counts by setting them to 0.5.

Table 6 shows descriptive statistics for the bigram counts we obtained using AltaVista and Google. For comparison, this table also provides descriptive statistics for the BNC and NANTC counts and for the counts recreated using class-based smoothing. From these data we computed the average factor by which the web counts are larger than the BNC counts. The results are given in Table 7 and indicate that the AltaVista counts are between 550 and 691 times larger than the BNC counts and that the Google counts are between 1,064 and 1,306 times larger than the BNC counts. As we know the size of the BNC, we can use these figures to estimate the number of words available on the web: between 55.0 and 69.1 billion words for AltaVista and between 106.4 and 130.6 billion words for Google. These estimates are of the same order of magnitude as Grefenstette and Nioche's estimate that 48.1 billion words of English are available on the web; they also agree with Zhu and Rosenfeld's estimate that the effective size of the web is between 79 and 108 billion words.
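The counting procedure described above can be sketched as follows. The sketch assumes a caller-supplied page_count function that submits a quoted, exact-match query to whatever search interface is available and returns the reported number of matching pages (the AltaVista and Google interfaces used in 2003 are not assumed here); the inflection and determiner lists are likewise illustrative.

```python
def verb_object_queries(verb_forms, noun_forms, determiners=("", "the ", "a ")):
    """Generate exact-match query strings for a verb-object bigram,
    expanding inflectional variants and allowing an optional determiner
    between verb and object, as described above."""
    for v in verb_forms:
        for det in determiners:
            for n in noun_forms:
                yield f'"{v} {det}{n}"'.lower()

def web_frequency(queries, page_count):
    """Sum the page counts returned for each morphological variant of the
    bigram; page_count maps a query string to a number of matching pages."""
    return sum(page_count(q) for q in queries)

# Hypothetical usage, with illustrative inflection lists:
# freq = web_frequency(
#     verb_object_queries(["fulfil", "fulfils", "fulfilled", "fulfilling"],
#                         ["obligation", "obligations"]),
#     page_count=my_search_api)
```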
The method we used to retrieve web counts is based on very simple heuristics; it is thus inevitable that the counts generated will contain a certain amount of noise. In this section we discuss a number of potential sources of such noise.

An obvious limitation of our method is that it relies on the page counts returned by the search engines; we do not download the pages themselves for further processing. Note that many of the bigrams in our sample are very frequent, hence the effort involved in downloading all pages would be immense. Our approach estimates web frequencies based not on bigram counts directly but on page counts; in other words, it ignores the fact that a bigram can occur more than once on a given web page. This approximation is justified: as Zhu and Rosenfeld demonstrated for unigrams, bigrams, and trigrams, page counts and n-gram counts are highly correlated on a log-log scale. This result is based on Zhu and Rosenfeld's queries to AltaVista, a search engine that at the time of their research returned both the number of pages and the overall number of matches for a given query.

Another important limitation of our approach arises from the fact that both Google and AltaVista disregard punctuation and capitalization, even if the search term is placed within quotation marks. This can lead to false positives, for instance if the match crosses a phrase boundary, as when a query such as hungry prey matches across a sentence boundary. Other false positives can be generated by page titles and links, which can produce spurious matches for bigrams such as edition broadcast.

The fact that our method does not download web pages means that no tagging, chunking, or parsing can be carried out to ensure that the matches are correct. Instead, we rely on the simple adjacency of the search terms, which is enforced by using queries enclosed within quotation marks. This means that we miss any non-adjacent matches, even though a chunker or parser would find them; an example is an adjective-noun bigram in which an adverbial intervenes between the adjective and the noun. Furthermore, the absence of tagging, chunking, and parsing can also generate false positives, in particular for queries containing words with part-of-speech ambiguity. As an example, consider process directory, which in our data set is a noun-noun bigram; one of the matches returned by Google is a page in which process is used as a verb. Another example is fund membrane, which is a noun-noun bigram in our data set but matches pages in Google in which the two words do not form a compound.

Another source of noise is the fact that Google will sometimes return pages that do not include the search term at all; this can happen if the search term is contained in a link to the page. As we did not limit our web searches to English, there is also a risk that false positives are generated by cross-linguistic homonyms, that is, by words of other languages that are spelled in the same way as the English words in our data sets. However, this problem is mitigated by the fact that English is by far the most common language on the web, as shown by Grefenstette and Nioche; also, the chance of two such homonyms forming a valid bigram in another language is probably fairly small.

To summarize, web counts are certainly less sparse than the counts in a corpus of a fixed size; however, web counts are also likely to be significantly more noisy than counts obtained from a carefully tagged and chunked or parsed corpus, as the examples in this section show. It is therefore essential to carry out a comprehensive evaluation of the web counts generated by our method; this is the topic of the next section.
Since web counts can be relatively noisy, as discussed in the previous section, it is crucial to determine whether there is a reliable relationship between web counts and corpus counts. Once this is assured, we can explore the usefulness of web counts for overcoming data sparseness. We carried out a correlation analysis to determine whether there is a linear relationship between BNC and NANTC counts and AltaVista and Google counts. All correlation coefficients reported in this article refer to Pearson's r, and all results were obtained on log-transformed counts.

Table 8 shows the results of correlating web counts with corpus counts from the BNC, the corpus from which our bigrams were sampled. A high correlation coefficient was obtained across the board, ranging from .720 to .847 for AltaVista counts and from .720 to .850 for Google counts. This indicates that web counts approximate BNC counts for the three types of bigrams under investigation. Note that there is almost no difference between the correlations achieved using Google and AltaVista counts.

It is important to check that these results are also valid for counts obtained from other corpora. We therefore correlated our web counts with the counts obtained from the NANTC, a corpus that is larger than the BNC but is drawn from a single genre, namely news text. The results are shown in Table 9. We find that Google and AltaVista counts also correlate significantly with NANTC counts; the correlation coefficients range from .667 to .788 for AltaVista and from .662 to .787 for Google. Again, there is virtually no difference between the correlations for the two search engines. We also observe that the correlation between web counts and BNC counts is generally slightly higher than the correlation between web counts and NANTC counts. We carried out one-tailed t-tests to determine whether the differences in the correlation coefficients were significant. We found that both AltaVista counts (t = 3.11, p < .01) and Google counts (t = 3.21, p < .01) were significantly better correlated with BNC counts than with NANTC counts for adjective-noun bigrams; the difference in correlation coefficients was not significant for noun-noun and verb-object bigrams for either search engine.

Table 9 also shows the correlations between BNC counts and NANTC counts. This intercorpus correlation can be regarded as an upper limit for the correlations we can expect between counts from two corpora that differ in size and genre and that have been obtained using different extraction methods. The correlation between AltaVista and Google counts and NANTC counts reached the upper limit for all three bigram types; the correlation between BNC counts and web counts reached the upper limit for noun-noun and verb-object bigrams and significantly exceeded it for adjective-noun bigrams, both for AltaVista (t = 3.16, p < .01) and for Google (t = 3.26, p < .01).

We conclude that simple heuristics are sufficient to obtain useful frequencies from the web; it seems that the large amount of data available for web counts outweighs the associated problems. We found that web counts were highly correlated with frequencies from two different corpora; furthermore, web counts and corpus counts are as highly correlated as counts from two different corpora. Note that Tables 8 and 9 also provide the correlation coefficients obtained when corpus frequencies are compared with frequencies that were recreated through class-based smoothing, using the BNC as a training corpus; this will be discussed in more detail in Section 3.3.
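The evaluation reported in this section amounts to log-transforming two parallel lists of frequencies (with zero counts set to 0.5, as described in Section 2) and computing Pearson's r. A minimal sketch, assuming SciPy is available and that the counts are stored in parallel lists:

```python
import math
from scipy.stats import pearsonr

def log_counts(counts, zero_adjust=0.5):
    """Log-transform raw frequencies, replacing zeros with a small
    constant (0.5 in the study) so that the logarithm is defined."""
    return [math.log(c if c > 0 else zero_adjust) for c in counts]

def correlate_counts(counts_a, counts_b):
    """Pearson correlation between two parallel lists of counts,
    computed on the log-transformed values."""
    r, p_value = pearsonr(log_counts(counts_a), log_counts(counts_b))
    return r, p_value

# e.g. r, p = correlate_counts(altavista_counts, bnc_counts)
```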
Previous work has demonstrated that corpus counts correlate with human plausibility judgments for adjective-noun bigrams. This result holds both for seen bigrams and for unseen bigrams whose counts have been recreated using smoothing techniques. Based on these findings, we decided to evaluate our web counts on the task of predicting plausibility ratings. If the web counts for bigrams correlate with plausibility judgments, then this indicates that the counts are valid, in the sense of being useful for predicting the intuitive plausibility of predicate-argument pairs. The degree of correlation between web counts and plausibility judgments is an indicator of the quality of the web counts.

3.2.1 Method. For seen and unseen adjective-noun bigrams, we used the two sets of plausibility judgments collected by Lapata, McDonald, and Keller and Lapata, Keller, and McDonald, respectively. We conducted four additional experiments to collect judgments for noun-noun and verb-object bigrams, both seen and unseen. The experimental method was the same for all six experiments.

Materials. The experimental stimuli were based on the six sets of seen or unseen bigrams extracted from the BNC as described in Section 2.1. In the adjective-noun and noun-noun cases, the stimuli consisted simply of the bigrams. In the verb-object case, the bigrams were embedded in a short sentence to make them more natural: a proper-noun subject was added.

Procedure. The experimental paradigm was magnitude estimation (ME), a technique standardly used in psychophysics to measure judgments of sensory stimuli, which Bard, Robertson, and Sorace and Cowart have applied to the elicitation of linguistic judgments. The ME procedure requires subjects to estimate the magnitude of physical stimuli by assigning numerical values proportional to the stimulus magnitude they perceive. In contrast to the five- or seven-point scale conventionally used to measure human intuitions, ME employs an interval scale and therefore produces data for which parametric inferential statistics are valid. ME requires subjects to assign numbers to a series of linguistic stimuli in a proportional fashion. Subjects are first exposed to a modulus item, to which they assign an arbitrary number; all other stimuli are rated proportional to the modulus. In this way, each subject can establish his or her own rating scale, thus yielding maximally fine-graded data and avoiding the known problems with the conventional ordinal scales for linguistic data.

The experiments reported in this article were carried out using the WebExp software package. A series of previous studies has shown that data obtained using WebExp closely replicate results obtained in a controlled laboratory setting; this has been demonstrated for acceptability judgments, coreference judgments, and sentence completions. In the present experiments, subjects were presented with bigram pairs and were asked to rate the degree of plausibility proportional to a modulus item. They first saw a set of instructions that explained the ME technique and the judgment task. The concept of plausibility was not defined, but examples of plausible and implausible bigrams were given. Then subjects were asked to fill in a questionnaire with basic demographic information. The experiment proper consisted of three phases: a calibration phase, designed to familiarize subjects with the task, in which they had to estimate the length of five horizontal lines; a practice phase, in which subjects judged the plausibility of eight bigrams; and the main experiment, in which each subject judged one of the six stimulus sets. The stimuli were presented in random order, with a new randomization being generated for each subject.
Subjects. A separate experiment was conducted for each set of stimuli. The number of subjects per experiment is shown in Table 10. All subjects were self-reported native speakers of English; they were recruited by postings to newsgroups and mailing lists, and participation was voluntary and unpaid. WebExp collects by-item response time data; subjects whose response times were very short or very long were excluded from the sample, as they are unlikely to have completed the experiment adequately. We also excluded the data of subjects who had participated more than once in the same experiment, based on their demographic data and on their internet connection data, which is logged by WebExp.

3.2.2 Results and Discussion. The experimental data were normalized by dividing each numerical judgment by the modulus value that the subject had assigned to the reference sentence. This operation creates a common scale for all subjects. Then the data were transformed by taking the decadic logarithm. This transformation ensures that the judgments are normally distributed and is standard practice for magnitude estimation data. All further analyses were conducted on the normalized, log-transformed judgments. Table 10 shows the descriptive statistics for all six judgment experiments: the original experiments by Lapata, McDonald, and Keller and Lapata, Keller, and McDonald for adjective-noun bigrams and our new ones for noun-noun and verb-object bigrams.

We used correlation analysis to compare corpus counts and web counts with plausibility judgments. Table 11 lists the correlation coefficients that were obtained when correlating log-transformed web counts and corpus counts with mean plausibility judgments for seen adjective-noun, noun-noun, and verb-object bigrams. The results show that both AltaVista and Google counts correlate well with plausibility judgments for seen bigrams: the correlation coefficient for AltaVista ranges from .641 to .700, and for Google it ranges from .624 to .692. The correlations for the two search engines are very similar, which is also what we found in Section 3.1 for the correlations between web counts and corpus counts. Note that the web counts consistently achieve a higher correlation with the judgments than the BNC counts, which range from .488 to .569. We carried out a series of one-tailed t-tests to determine whether the differences between the correlation coefficients for the web counts and the correlation coefficients for the BNC counts were significant. For the adjective-noun bigrams, the AltaVista coefficient was significantly higher than the BNC coefficient (t = 1.76, p < .05), whereas the difference between the Google coefficient and the BNC coefficient failed to reach significance. For the noun-noun bigrams, both the AltaVista and the Google coefficients were significantly higher than the BNC coefficient (t = 3.11, p < .01 and t = 2.95, p < .01). Also for the verb-object bigrams, both the AltaVista coefficient and the Google coefficient were significantly higher than the BNC coefficient (t = 2.64, p < .01 and t = 2.32, p < .05).

A similar picture was observed for the NANTC counts: again, the web counts outperformed the corpus counts in predicting plausibility. For the adjective-noun bigrams, both the AltaVista and the Google coefficient were significantly higher than the NANTC coefficient (t = 1.97, p < .05; t = 1.81, p < .05). For the noun-noun bigrams, the AltaVista coefficient was higher than the NANTC coefficient (t = 1.64, p < .05), but the Google coefficient was not significantly different from the NANTC coefficient.
For verb-object bigrams, the difference was significant for both search engines (t = 2.74, p < .01; t = 2.38, p < .01). In sum, for all three types of bigrams, the correlation coefficients achieved with AltaVista were significantly higher than the ones achieved by either the BNC or the NANTC; Google counts outperformed corpus counts for all bigrams, with the exception of adjective-noun counts from the BNC and noun-noun counts from the NANTC.

The bottom panel of Table 11 shows the correlation coefficients obtained by comparing log-transformed judgments with log-transformed web counts for unseen adjective-noun, noun-noun, and verb-object bigrams. We observe that the web counts consistently show a significant correlation with the judgments, with the coefficient ranging from .480 to .578 for AltaVista counts and from .473 to .595 for the Google counts. Table 11 also provides the correlations between plausibility judgments and counts recreated using class-based smoothing, which we will discuss in Section 3.3.

An important question is how well humans agree when judging the plausibility of adjective-noun, noun-noun, and verb-noun bigrams. Intersubject agreement gives an upper bound for the task and allows us to interpret how well our web-based method performs in relation to humans. To calculate intersubject agreement, we used leave-one-out resampling. This technique is a special case of n-fold cross-validation and has previously been used for measuring how well humans agree in judging semantic similarity. For each subject group, we divided the set of the subjects' responses, of size n, into a set of size n − 1 and a set of size 1; we then correlated the mean ratings of the former set with the ratings of the latter. This was repeated n times. The mean of the correlation coefficients for the seen and unseen bigrams is shown in Table 11, in the rows labeled agreement. For both seen and unseen bigrams, we found no significant difference between the upper bound and the correlation coefficients obtained using either AltaVista or Google counts; this finding holds for all three types of bigrams. The same picture emerged for the BNC and NANTC counts: these correlation coefficients were not significantly different from the upper limit for all three types of bigrams, both for seen and for unseen bigrams.

To conclude, our evaluation demonstrated that web counts reliably predict human plausibility judgments, both for seen and for unseen predicate-argument bigrams. AltaVista counts for seen bigrams are a better predictor of human judgments than BNC and NANTC counts. These results show that our heuristic method yields valid frequencies; the simplifications we made in obtaining the web counts, as well as the fact that web data are noisy, seem to be outweighed by the fact that the web is up to a thousand times larger than the BNC.
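A minimal sketch of the leave-one-out agreement measure described above, assuming the judgments of one subject group are stored as a list of per-subject rating lists over the same items (this data layout and the use of SciPy's Pearson correlation are assumptions of the sketch):

```python
from statistics import mean
from scipy.stats import pearsonr

def leave_one_out_agreement(ratings_by_subject):
    """For each subject, correlate that subject's ratings with the mean
    ratings of the remaining n - 1 subjects; return the mean of the
    resulting n correlation coefficients."""
    n = len(ratings_by_subject)
    coefficients = []
    for held_out in range(n):
        rest = [r for i, r in enumerate(ratings_by_subject) if i != held_out]
        mean_of_rest = [mean(item) for item in zip(*rest)]
        r, _ = pearsonr(ratings_by_subject[held_out], mean_of_rest)
        coefficients.append(r)
    return mean(coefficients)
```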
The evaluation in the last two sections established that web counts are useful for approximating corpus counts and for predicting plausibility judgments. As a further step in our evaluation, we correlated web counts with counts recreated by applying a class-based smoothing method to the BNC. We recreated co-occurrence frequencies for predicate-argument bigrams using a simplified version of Resnik's selectional association measure proposed by Lapata, Keller, and McDonald. In a nutshell, this measure replaces Resnik's information-theoretic approach with a simpler measure that makes no assumptions with respect to the contribution of a semantic class to the total quantity of information provided by the predicate about the semantic classes of its argument. It simply substitutes the argument occurring in the predicate-argument bigram with the concept by which it is represented in the WordNet taxonomy; predicate-argument co-occurrence frequency is then estimated by counting the number of times the concept corresponding to the argument is observed to co-occur with the predicate in the corpus. Because a given word is not always represented by a single class in the taxonomy, Lapata, Keller, and McDonald constructed the frequency counts for a predicate-argument bigram for each conceptual class by dividing the contribution from the argument by the number of classes to which it belongs. They demonstrate that the counts recreated using this smoothing technique correlate significantly with plausibility judgments for adjective-noun bigrams. They also show that this class-based approach outperforms distance-weighted averaging, a smoothing method that recreates unseen word co-occurrences on the basis of distributional similarity, in predicting plausibility.

In the current study, we used the smoothing technique of Lapata, Keller, and McDonald to recreate not only adjective-noun bigrams but also noun-noun and verb-object bigrams. As already mentioned in Section 2.1, it was assumed that the noun is the predicate in adjective-noun bigrams; for noun-noun bigrams we treated the right noun as the predicate, and for verb-object bigrams we treated the verb as the predicate. We applied Lapata, Keller, and McDonald's technique to the unseen bigrams for all three bigram types. We also used it on the seen bigrams, which we were able to treat as unseen by removing all instances of the bigrams from the training corpus.

To test the claim that web frequencies can be used to overcome data sparseness, we correlated the frequencies recreated using class-based smoothing on the BNC with the frequencies obtained from the web. The correlation coefficients for both seen and unseen bigrams are shown in Table 12. In all cases, a significant correlation between web counts and recreated counts is obtained. For seen bigrams, the correlation coefficient ranged from .344 to .362 for AltaVista counts and from .330 to .349 for Google counts. For unseen bigrams, the correlations were somewhat higher, ranging from .386 to .439 for AltaVista counts and from .397 to .444 for Google counts. For both seen and unseen bigrams, there was only a very small difference between the correlation coefficients obtained with the two search engines.

It is also interesting to compare the performance of class-based smoothing and web counts on the task of predicting plausibility judgments. The correlation coefficients are listed in Table 11. The recreated frequencies correlate significantly with the judgments for all three types of bigrams, both for seen and for unseen bigrams. For the seen bigrams, we found that the correlation coefficients obtained using smoothed counts were significantly lower than the upper bound for all three types of bigrams (t = 3.01, p < .01; t = 3.23, p < .01; t = 3.43, p < .01). This result also held for the unseen bigrams: the correlations obtained using smoothing were significantly lower than the upper bound for all three types of bigrams (t = 1.86, p < .05; t = 1.97, p < .05; t = 3.36, p < .01). Recall that the correlation coefficients obtained using the web counts were not found to be significantly different from the upper bound, which indicates that web counts are better predictors of plausibility than smoothed counts. This fact was confirmed by further significance testing: for seen bigrams, we found that the AltaVista correlation coefficients were significantly higher than the correlation coefficients obtained using smoothing for all three types of bigrams (t = 3.31, p < .01; t = 4.11, p < .01; t = 4.32, p < .01).
This also held for Google counts (t = 3.16, p < .01; t = 4.02, p < .01; t = 4.03, p < .01). For unseen bigrams, the AltaVista coefficients and the coefficients obtained using smoothing were not significantly different for adjective-noun bigrams, but the difference reached significance for noun-noun and verb-object bigrams (t = 2.08, p < .05; t = 2.53, p < .01). For Google counts, the difference was again not significant for adjective-noun bigrams, but it reached significance for noun-noun and verb-object bigrams (t = 2.34, p < .05; t = 2.15, p < .05).

Finally, we conducted a small study to investigate the validity of the counts that were recreated using class-based smoothing. We correlated the recreated counts for the seen bigrams with their actual BNC and NANTC frequencies. The correlation coefficients are reported in Tables 8 and 9. We found that the correlation between recreated counts and corpus counts was significant for all three types of bigrams, for both corpora. This demonstrates that the smoothing technique we employed generates realistic corpus counts, in the sense that the recreated counts are correlated with the actual counts. However, the correlation coefficients obtained using web counts were always substantially higher than those obtained using smoothed counts. These differences were significant for the BNC counts, both for AltaVista (t = 8.38, p < .01; t = 5.00, p < .01; t = 5.03, p < .01) and for Google (t = 8.35, p < .01; t = 5.00, p < .01; t = 5.03, p < .01). They were also significant for the NANTC counts, for AltaVista (t = 4.12, p < .01; t = 3.72, p < .01; t = 6.58, p < .01) and Google (t = 4.08, p < .01; t = 3.06, p < .01; t = 6.47, p < .01).

To summarize, the results presented in this section indicate that web counts are indeed a valid way of obtaining counts for bigrams that are unseen in a given corpus: they correlate reliably with counts recreated using class-based smoothing. For seen bigrams, we found that web counts correlate with counts that were recreated using smoothing techniques. For the task of predicting plausibility judgments, we were able to show that web counts outperform recreated counts, both for seen and for unseen bigrams. Finally, we found that web counts for seen bigrams correlate better than recreated counts with the real corpus counts.

It is beyond the scope of the present study to undertake a full comparison between web counts and frequencies recreated using all available smoothing techniques. The smoothing method discussed above is simply one type of class-based smoothing. Other, more sophisticated class-based methods do away with the simplifying assumption that the argument co-occurring with a given predicate is distributed evenly across its conceptual classes and attempt to find the right level of generalization in a concept hierarchy, by discounting, for example, the contribution of very general classes. Other smoothing approaches, such as discounting and distance-weighted averaging, recreate counts of unseen word combinations by exploiting only corpus-internal evidence, without relying on taxonomic information. Our goal was to demonstrate that frequencies retrieved from the web are a viable alternative to conventional smoothing methods when data are sparse; we do not claim that our web-based method is necessarily superior to smoothing or that it should be generally preferred over smoothing methods. However, the next section will present a small-scale study that compares the performance of several smoothing techniques with the performance of web counts on a standard task from the literature.
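As a rough illustration of the simplified class-based smoothing described at the beginning of this section, the sketch below distributes each observed predicate-argument count over the WordNet classes of the argument, dividing by the number of classes, and then recreates the count of an unseen bigram from the class counts. This is only one plausible reading of the method; the estimator actually used by Lapata, Keller, and McDonald differs in its details, and classes_of is a hypothetical function mapping a noun to its WordNet classes.

```python
from collections import defaultdict

def build_class_counts(seen_bigrams, classes_of):
    """seen_bigrams: iterable of (predicate, noun, frequency) triples.
    Each observed count is split evenly over the noun's classes, reflecting
    the simplifying assumption that the argument is distributed evenly
    across its conceptual classes. Assumes every noun has at least one class."""
    counts = defaultdict(float)            # (predicate, class) -> count
    for pred, noun, freq in seen_bigrams:
        noun_classes = classes_of(noun)
        for c in noun_classes:
            counts[(pred, c)] += freq / len(noun_classes)
    return counts

def recreate_count(pred, noun, class_counts, classes_of):
    """Recreate the frequency of a (pred, noun) bigram from the counts of
    the classes the noun belongs to. How the contributions of several
    classes are combined (summed, averaged, ...) is a detail this sketch
    glosses over; a plain sum is used here."""
    return sum(class_counts[(pred, c)] for c in classes_of(noun))
```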
In the smoothing literature, recreated frequencies are typically evaluated using pseudo-disambiguation. The aim of the pseudo-disambiguation task is to decide whether a given algorithm recreates frequencies that make it possible to distinguish between seen and unseen bigrams in a given corpus. A set of pseudo-bigrams is constructed according to a set of criteria that ensure that they are unattested in the training corpus. Then the seen bigrams are removed from the training data, and the smoothing method is used to recreate the frequencies of both the seen bigrams and the pseudo-bigrams. The smoothing method is then evaluated by comparing the frequencies it recreates for both types of bigrams.

We evaluated our web counts by applying the pseudo-disambiguation procedure that Rooth et al., Prescher, Riezler, and Rooth, and Clark and Weir employed for evaluating recreated verb-object bigram counts. In this procedure, the noun n from a verb-object bigram (v, n) that is seen in a given corpus is paired with a randomly chosen verb v′ that does not take n as its object within the corpus; this results in an unseen verb-object bigram (v′, n). The seen bigram is now treated as unseen, and the frequencies of both the seen and the unseen bigram are recreated. The task is then to decide which of the two verbs, v or v′, takes the noun n as its object. For this, the recreated bigram frequency is used: the bigram with the higher recreated frequency is taken to be the seen bigram. If this bigram is really the seen one, then the disambiguation is correct. The overall percentage of correct disambiguations is a measure of the quality of the recreated frequencies. In the following, we will first describe in some detail the experiments that Rooth et al., Prescher, Riezler, and Rooth, and Clark and Weir conducted; we will then discuss how we replicated their experiments using the web as an alternative smoothing method.

Rooth et al. used pseudo-disambiguation to evaluate a class-based model that is derived from unlabeled data using the expectation maximization (EM) algorithm. From a data set of 1,280,712 (v, n) pairs, they randomly selected 3,000 pairs, with each pair containing a fairly frequent verb and noun. For each pair, a fairly frequent verb v′ was randomly chosen such that the pair (v′, n) did not occur in the data set. Given the set of (v, n, v′) triples, the task was to decide whether (v, n) or (v′, n) was the correct pair, by comparing the probabilities p(n|v) and p(n|v′). The probabilities were recreated using Rooth et al.'s EM-based clustering model on a training set from which all seen pairs had been removed. An accuracy of 80% on the pseudo-disambiguation task was achieved.

Prescher, Riezler, and Rooth evaluated Rooth et al.'s EM-based clustering model, again using pseudo-disambiguation, but on a separate data set, using a slightly different method for constructing the pseudo-bigrams. They used a set of 298 BNC triples (v, n, n′), in which (v, n) was chosen as in Rooth et al. but paired with a randomly chosen noun n′. Given the set of triples, the task was to decide whether (v, n) or (v, n′) was the correct pair in each triple. Prescher, Riezler, and Rooth reported pseudo-disambiguation results with two clustering models: Rooth et al.'s clustering approach, which models the semantic fit between a verb and its argument (the VA model), and a refined version of this approach that models only the fit between a verb and its object, disregarding other arguments of the verb (the VO model). The results of the two models on the pseudo-disambiguation task are shown in Table 14.

At this point it is important to note that neither Rooth et al. nor Prescher, Riezler, and Rooth used pseudo-disambiguation for the final evaluation of their models; rather, the performance on the pseudo-disambiguation task was used to optimize the model parameters. The results in Tables 13 and 14 show the pseudo-disambiguation performance achieved for the best parameter settings.
In other words, these results were obtained on the development set, not on a completely unseen test set. This procedure is well-justified in the context of Rooth et al.'s and Prescher, Riezler, and Rooth's work, which aimed at building models of lexical semantics, not of pseudo-disambiguation; they therefore carried out their final evaluations on unseen test sets for the tasks of lexicon induction and target language disambiguation, once the model parameters had been fixed using the pseudo-disambiguation development set.

Clark and Weir use a setting similar to that of Rooth et al. and Prescher, Riezler, and Rooth; here pseudo-disambiguation is employed to evaluate the performance of a class-based probability estimation method. In order to address the problem of estimating conditional probabilities in the face of sparse data, Clark and Weir define probabilities in terms of classes in a semantic hierarchy and propose hypothesis testing as a means of determining a suitable level of generalization in the hierarchy. Clark and Weir report pseudo-disambiguation results on two data sets, with an experimental setup similar to that of Rooth et al. For the first data set, 3,000 (v, n) pairs were randomly chosen from 1.3 million tuples extracted from the BNC; the selected pairs contained relatively frequent verbs, and the data sets were constructed as proposed by Rooth et al. The procedure for creating the second data set was identical, but this time only verbs that occurred between 100 and 1,000 times were considered. Clark and Weir further compared their approach with Resnik's selectional association model and Li and Abe's tree cut model on the same data sets. These methods are directly comparable, as they can be used for class-based probability estimation and address the question of how to find a suitable level of generalization in a hierarchy. The results of the three methods on the two data sets are shown in Table 15.

We employed the same pseudo-disambiguation method to test whether web-based frequencies can be used for distinguishing between seen and artificially constructed unseen bigrams. We obtained the data sets of Rooth et al., Prescher, Riezler, and Rooth, and Clark and Weir described above. Given a set of (v, n, v′) triples, the task was to decide whether (v, n) or (v′, n) was the correct pair. We obtained AltaVista counts for f(v, n), f(v′, n), f(v), and f(v′), as described in Section 2.3. Then we used two models for pseudo-disambiguation: the joint probability model compared the joint frequency estimates f(v, n) and f(v′, n) and predicted that the bigram with the higher estimate is the seen one; the conditional probability model compared the conditional probability estimates f(v, n)/f(v) and f(v′, n)/f(v′) and again selected as the seen bigram the one with the higher estimate. The same two models were used to perform pseudo-disambiguation for the (v, n, n′) triples, where we have to choose between (v, n) and (v, n′); here the frequency estimates f(v, n) and f(v, n′) were used for the joint probability model, and f(v, n)/f(n) and f(v, n′)/f(n′) for the conditional probability model.

The results for Rooth et al.'s data set are given in Table 13. The conditional probability model achieves a performance of 71.2% correct for subjects and 85.2% correct for objects. The performance on the whole data set is 77.7%, which is below the performance of 80.0% reported by Rooth et al.; however, the difference is not found to be significant using a chi-square test comparing the number of correct and incorrect classifications (χ² = 2.02, p = .16). The joint probability model performs consistently worse than the conditional probability model: it achieves an overall accuracy of 72.7%, which is significantly lower than the accuracy of the Rooth et al. model (χ² = 19.50, p < .01).
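The two decision rules just described can be stated compactly. The sketch below assumes the relevant web frequencies have already been retrieved and are passed in as a dictionary keyed by tuples (a hypothetical data layout), with zero counts already adjusted so that no division by zero occurs.

```python
def pseudo_disambiguate(f, v, v_prime, n):
    """Decide which of (v, n) and (v_prime, n) is the seen bigram.
    Joint rule:       compare f[(v, n)]           with f[(v_prime, n)]
    Conditional rule: compare f[(v, n)] / f[(v,)] with f[(v_prime, n)] / f[(v_prime,)]
    f is a dictionary of (adjusted) web frequencies."""
    joint_pick = v if f[(v, n)] > f[(v_prime, n)] else v_prime
    conditional_pick = (v if f[(v, n)] / f[(v,)] > f[(v_prime, n)] / f[(v_prime,)]
                        else v_prime)
    return {"joint": joint_pick, "conditional": conditional_pick}

def accuracy(triples, f, model="conditional"):
    """Proportion of correct decisions over (v, n, v_prime) triples, where
    v is always the verb of the originally seen bigram."""
    correct = sum(pseudo_disambiguate(f, v, vp, n)[model] == v
                  for v, n, vp in triples)
    return correct / len(triples)
```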
A similar picture emerges with regard to Prescher, Riezler, and Rooth's data set: the conditional probability model achieves an accuracy of 66.7% for subjects and 70.5% for objects. The combined performance of 68.5% is significantly lower than the performance of both the VA model (χ² = 7.78, p < .01) and the VO model (χ² = 33.28, p < .01) reported by Prescher, Riezler, and Rooth. Again, the joint probability model performs worse than the conditional probability model, achieving an overall accuracy of 62.4%.

We also applied our web-based method to the pseudo-disambiguation data set of Clark and Weir. Here the conditional probability model reached a performance of 83.9% correct on the low-frequency data set. This is significantly higher than the highest performance of 72.4% reported by Clark and Weir on the same data set (χ² = 115.50, p < .01). The joint probability model performs worse than the conditional model, at 81.1%; however, this is still significantly better than the best result of Clark and Weir (χ² = 63.14, p < .01). The same pattern is observed for the high-frequency data set, on which the conditional probability model achieves 87.7% correct and thus significantly outperforms Clark and Weir, who obtained 73.9% (χ² = 283.73, p < .01). The joint probability model achieved 85.3% on this data set, also significantly outperforming Clark and Weir (χ² = 119.35, p < .01).

To summarize, we demonstrated that the simple web-based approach proposed in this article yields results for pseudo-disambiguation that outperform class-based smoothing techniques such as the ones proposed by Resnik, Li and Abe, and Clark and Weir. We were also able to show that a web-based approach is able to achieve the same performance as the EM-based smoothing model proposed by Rooth et al.; however, the web-based approach was not able to outperform the more sophisticated EM-based model of Prescher, Riezler, and Rooth. Another result we obtained is that web-based models that use conditional probabilities generally outperform a more simple-minded approach that relies directly on bigram frequencies for pseudo-disambiguation.

There are a number of reasons why our results regarding pseudo-disambiguation have to be treated with some caution. First of all, the two smoothing methods were not evaluated on the same data set, and therefore the two results are not directly comparable. For instance, Clark and Weir's data set is substantially less noisy than Rooth et al.'s and Prescher, Riezler, and Rooth's, as it contains only verbs and nouns that occur in WordNet. Furthermore, Stephen Clark points out that WordNet-based approaches are at a disadvantage when it comes to pseudo-disambiguation: pseudo-disambiguation assumes that the correct pair is unseen in the training data, which makes the task deliberately hard, because some of the pairs might be frequent enough that reliable corpus counts can be obtained without having to use WordNet. Another problem with WordNet-based approaches is that they offer no systematic treatment of word sense ambiguity, which puts them at a disadvantage with respect to approaches that do not rely on a predefined inventory of word senses. Finally, recall that the results for the EM-based approaches in Tables 13 and 14 were obtained on the development set; it is possible that this fact inflates the performance values for the EM-based approaches.

This article explored a novel approach to overcoming data sparseness. If a bigram is unseen in a given corpus, conventional approaches recreate its frequency using techniques such as back-off, linear interpolation, class-based smoothing, or distance-weighted averaging. The approach proposed here does not recreate the missing counts but instead retrieves them from a corpus that is much larger than any existing corpus: it launches queries to a search engine in order to determine how often the bigram occurs on the web.
but instead retrieves them from a corpus that is much larger than any existing corpus it launches queries to a search engine in order to determine how often the bigram occurs on the webwe systematically investigated the validity of this approach by using it to obtain frequencies for predicateargument bigrams we first applied the approach to seen bigrams randomly sampled from the bncwe found that the counts obtained from the web are highly correlated with the counts obtained from the bncwe then obtained bigram counts from nantc a corpus that is substantially larger than the bncagain we found that web counts are highly correlated with corpus countsthis indicates that web queries can generate frequencies that are comparable to the ones obtained from a balanced carefully edited corpus such as the bnc but also from a large news text corpus such as nantcsecondly we performed an evaluation that used the web frequencies to predict human plausibility judgments for predicateargument bigramsthe results show that web counts correlate reliably with judgments for all three types of predicateargument bigrams tested both seen and unseenfor the seen bigrams we showed that the web frequencies correlate better with judged plausibility than corpus frequenciesto substantiate the claim that the web counts can be used to overcome data sparseness we compared bigram counts obtained from the web with bigram counts recreated using a classbased smoothing technique we found that web frequencies and recreated frequencies are reliably correlated and that web frequencies are better at predicting plausibility judgments than smoothed frequenciesthis holds both for unseen bigrams and for seen bigrams that are treated as unseen by omitting them from the training corpusfinally we tested the performance of our frequencies in a standard pseudodisambiguation taskwe applied our method to three data sets from the literaturethe results show that web counts outperform counts recreated using a number of classbased smoothing techniqueshowever counts recreated using an thembased smoothing approach yielded better pseudodisambiguation performance than web countsto summarize we have proposed a simple heuristic method for obtaining bigram counts from the webusing four different types of evaluation we demonstrated that this simple heuristic method is sufficient to obtain valid frequency estimatesit seems that the large amount of data available outweighs the problems associated with using the web as a corpus a number of questions arise for future research are web frequencies suitable for probabilistic modeling in particular since web counts are not perfectly normalized as zhu and rosenfeld have shown how can existing smoothing methods benefit from web counts how do the results reported in this article carry over to languages other than english what is the effect of the noise introduced by our heuristic approachthe last question could be assessed by reproducing our results using a snapshot of the web from which argument relations can be extracted more accurately using pos tagging and chunking techniquesfinally it will be crucial to test the usefulness of webbased frequencies for realistic nlp taskspreliminary results are reported by lapata and keller who use web counts successfully for a range of nlp tasks including candidate selection for machine translation contextsensitive spelling correction bracketing and interpretation of compounds adjective ordering and pp attachmentthis work was conducted while both authors were at the department of 
Computational Linguistics, Saarland University, Saarbrücken. The work was inspired by a talk that Gregory Grefenstette gave in Saarbrücken in 2001 about his research on using the web as a corpus. The present article is an extended and revised version of Keller, Lapata, and Ourioupina. Stephen Clark and Stefan Riezler provided valuable comments on this research. We are also grateful to four anonymous reviewers for Computational Linguistics; their feedback helped to substantially improve the present article. Special thanks are due to Stephen Clark and Detlef Prescher for making their pseudodisambiguation data sets available.
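As an illustration of the pseudodisambiguation procedure described above, the decision between the two candidate pairs of a test triple reduces to comparing either raw web counts (the joint-probability model) or counts normalized by the marginal count of the element that differs between the two candidates (the conditional-probability model). The following minimal Python sketch assumes the web counts have already been retrieved; the function names and the example numbers are illustrative, not taken from the article.

```python
def joint_choice(count_pair1, count_pair2):
    """Joint-probability model: the candidate pair with the larger web
    count is predicted to be the attested (seen) bigram."""
    return 1 if count_pair1 >= count_pair2 else 2

def conditional_choice(count_pair1, count_pair2, marginal1, marginal2):
    """Conditional-probability model: normalize each pair count by the
    web count of the element that differs between the two candidates,
    e.g. f(v, n)/f(n) versus f(v, n')/f(n')."""
    p1 = count_pair1 / marginal1 if marginal1 else 0.0
    p2 = count_pair2 / marginal2 if marginal2 else 0.0
    return 1 if p1 >= p2 else 2

# Illustrative numbers only (not from the article):
# 1200/90000 > 45/15000, so candidate 1 is chosen.
print(conditional_choice(1200, 45, 90000, 15000))
```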
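The accuracy comparisons above rest on chi-square tests over the numbers of correct and incorrect classifications made by two models. A sketch of such a test, assuming SciPy is available; the counts shown are illustrative and are not the article's actual test-set sizes.

```python
from scipy.stats import chi2_contingency

def compare_accuracies(correct_a, total_a, correct_b, total_b):
    """Chi-square test on a 2x2 table of correct/incorrect decisions
    made by two pseudodisambiguation models."""
    table = [[correct_a, total_a - correct_a],
             [correct_b, total_b - correct_b]]
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p

# Illustrative counts:
chi2, p = compare_accuracies(777, 1000, 800, 1000)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```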
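The basic ingredients of the web-based method, querying a search engine for the literal bigram and correlating log-transformed web counts with corpus counts, might be sketched as follows. The quoted-query heuristic and the add-one log transform are assumptions of this sketch (the article's exact query scheme is described in its Section 2.3, which is not reproduced here), and hit_count is a placeholder: the AltaVista interface used in the article no longer exists, so a current hit-count API would have to be substituted.

```python
import math
from scipy.stats import pearsonr, spearmanr

def hit_count(query: str) -> int:
    """Placeholder for a search-engine query returning the number of
    matching pages; any available hit-count API would be plugged in here."""
    raise NotImplementedError

def web_frequency(word1: str, word2: str) -> int:
    # Quoted query so the two words must occur as an adjacent bigram.
    return hit_count(f'"{word1} {word2}"')

def count_correlations(web_counts, corpus_counts):
    """Correlate log-transformed web and corpus frequencies (adding 1 to
    avoid log(0)), plus a rank correlation on the raw counts."""
    lw = [math.log(c + 1) for c in web_counts]
    lc = [math.log(c + 1) for c in corpus_counts]
    return pearsonr(lw, lc)[0], spearmanr(web_counts, corpus_counts)[0]
```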
J03-3005
using the web to obtain frequencies for unseen bigramsthis article shows that the web can be employed to obtain frequencies for bigrams that are unseen in a given corpuswe describe a method for retrieving counts for adjectivenoun nounnoun and verbobject bigrams from the web by querying a search enginewe evaluate this method by demonstrating a high correlation between web frequencies and corpus frequencies a reliable correlation between web frequencies and plausibility judgments a reliable correlation between web frequencies and frequencies recreated using classbased smoothing a good performance of web frequencies in a pseudodisambiguation taskour study reveals that the large amount of data available for the web counts could outweigh the noisy problems
headdriven statistical models for natural language parsing this article describes three statistical models for natural language parsing the models extend methods from probabilistic contextfree grammars to lexicalized grammars leading to approaches in which a parse tree is represented as the sequence of decisions corresponding to a headcentered topdown derivation of the tree independence assumptions then lead to parameters that encode the xbar schema subcategorization ordering of complements placement of adjuncts bigram dependencies and preferences for close attachment all of these preferences are expressed by probabilities conditioned on lexical heads the models are evaluated on the penn wall street journal treebank showing that their accuracy is competitive with other models in the literature to gain a better understanding of the models we also give results on different constituent types as well as a breakdown of precisionrecall results in recovering various types of dependencies we analyze various characteristics of the models through experiments on parsing accuracy by collectingfrequencies ofvarious structures in the treebank and through linguistically motivated examples finally we compare the models to others that have been applied to parsing the treebank aiming to give some explanation of the difference in performance of the various models this article describes three statistical models for natural language parsingthe models extend methods from probabilistic contextfree grammars to lexicalized grammars leading to approaches in which a parse tree is represented as the sequence of decisions corresponding to a headcentered topdown derivation of the treeindependence assumptions then lead to parameters that encode the xbar schema subcategorization ordering of complements placement of adjuncts bigram lexical dependencies whmovement and preferences for close attachmentall of these preferences are expressed by probabilities conditioned on lexical headsthe models are evaluated on the penn wall street journal treebank showing that their accuracy is competitive with other models in the literatureto gain a better understanding of the models we also give results on different constituent types as well as a breakdown of precisionrecall results in recovering various types of dependencieswe analyze various characteristics of the models through experiments on parsing accuracy by collectingfrequencies ofvarious structures in the treebank and through linguistically motivated examplesfinally we compare the models to others that have been applied to parsing the treebank aiming to give some explanation of the difference in performance of the various modelsambiguity is a central problem in natural language parsingcombinatorial effects mean that even relatively short sentences can receive a considerable number of parses under a widecoverage grammarstatistical parsing approaches tackle the ambiguity problem by assigning a probability to each parse tree thereby ranking competing trees in order of plausibilityin many statistical models the probability for each candidate tree is calculated as a product of terms each term corresponding to some substructure within the treethe choice of parameterization is essentially the choice of how to represent parse treesthere are two critical questions regarding the parameterization of a parsing approach in this article we explore these issues within the framework of generative models more precisely the historybased models originally introduced to parsing by black et al in a 
historybased model a parse tree is represented as a sequence of decisions the decisions being made in some derivation of the treeeach decision has an associated probability and the product of these probabilities defines a probability distribution over possible derivationswe first describe three parsing models based on this approachthe models were originally introduced in collins the current article1 gives considerably more detail about the models and discusses them in greater depthin model 1 we show one approach that extends methods from probabilistic contextfree grammars to lexicalized grammarsmost importantly the model has parameters corresponding to dependencies between pairs of headwordswe also show how to incorporate a distance measure into these models by generalizing the model to a historybased approachthe distance measure allows the model to learn a preference for close attachment or rightbranching structuresin model 2 we extend the parser to make the complementadjunct distinction which will be important for most applications using the output from the parsermodel 2 is also extended to have parameters corresponding directly to probability distributions over subcategorization frames for headwordsthe new parameters lead to an improvement in accuracyin model 3 we give a probabilistic treatment of whmovement that is loosely based on the analysis of whmovement in generalized phrase structure grammar the output of the parser is now enhanced to show trace coindexations in whmovement casesthe parameters in this model are interesting in that they correspond directly to the probability of propagating gpsgstyle slash features through parse trees potentially allowing the model to learn island constraintsin the three models a parse tree is represented as the sequence of decisions corresponding to a headcentered topdown derivation of the treeindependence assumptions then follow naturally leading to parameters that encode the xbar schema subcategorization ordering of complements placement of adjuncts lexical dependencies whmovement and preferences for close attachmentall of these preferences are expressed by probabilities conditioned on lexical headsfor this reason we refer to the models as headdriven statistical modelswe describe evaluation of the three models on the penn wall street journal treebank model 1 achieves 877 constituent precision and 875 consituent recall on sentences of up to 100 words in length in section 23 of the treebank and models 2 and 3 give further improvements to 883 constituent precision and 880 constituent recallthese results are competitive with those of other models that have been applied to parsing the penn treebankmodels 2 and 3 produce trees with information about whmovement or subcategorizationmany nlp applications will need this information to extract predicateargument structure from parse treesthe rest of the article is structured as followssection 2 gives background material on probabilistic contextfree grammars and describes how rules can be lexicalized through the addition of headwords to parse treessection 3 introduces the three probabilistic modelssection 4 describes various refinments to these modelssection 5 discusses issues of parameter estimation the treatment of unknown words and also the parsing algorithmsection 6 gives results evaluating the performance of the models on the penn wall street journal treebank section 7 investigates various aspects of the models in more detailwe give a detailed analysis of the parsers performance on treebank data 
including results on different constituent typeswe also give a breakdown of precision and recall results in recovering various types of dependenciesthe intention is to give a better idea of the strengths and weaknesses of the parsing modelssection 7 goes on to discuss the distance features in the models the implicit assumptions that the models make about the treebank annotation style and the way that contextfree rules in the original treebank are broken down allowing the models to generalize by producing new rules on test data exampleswe analyze these phenomena through experiments on parsing accuracy by collecting frequencies of various structures in the treebank and through linguistically motivated examplesfinally section 8 gives more discussion by comparing the models to others that have been applied to parsing the treebankwe aim to give some explanation of the differences in performance among the various modelsprobabilistic contextfree grammars are the starting point for the models in this articlefor this reason we briefly recap the theory behind nonlexicalized pcfgs before moving to the lexicalized casefollowing hopcroft and ullman we define a contextfree grammar g as a 4tuple where n is a set of nonterminal symbols e is an alphabet a is a distinguished start symbol in n and r is a finite set of rules in which each rule is of the form x β for some x e n β e the grammar defines a set of possible strings in the language and also defines a set of possible leftmost derivations under the grammareach derivation corresponds to a treesentence pair that is well formed under the grammara probabilistic contextfree grammar is a simple modification of a contextfree grammar in which each rule in the grammar has an associated probability pthis can be interpreted as the conditional probability of xs being expanded using the rule x β as opposed to one of the other possibilities for expanding x listed in the grammarthe probability of a derivation is then a product of terms each term corresponding to a rule application in the derivationthe probability of a given treesentence pair derived by n applications of contextfree rules lhsi rhsi 1 two of its children x and y are separated by a comma then the last word in y must be directly followed by a comma or must be the last word in the sentencein training data 96 of commas follow this rulethe rule has the benefit of improving efficiency by reducing the number of constituents in the chartit would be preferable to develop a probabilistic analog of this rule but we leave this to future research the treebank annotates sentences with empty subjects with an empty none element under subject position in training this null element is removed in models 2 and 3 sentences without subjects are changed to have a nonterminal sgtable 1 shows the various levels of backoff for each type of parameter in the modelnote that we decompose pl c p p h w t lc into the product where e1 e2 and e3 are maximumlikelihood estimates with the context at levels 1 2 and 3 in the table and a1 a2 and a3 are smoothing parameters where 0 ai 1we use the smoothing method described in bikel et al which is derived from a method described in witten and bell first say that the most specific estimate e1 n1 f1 that is f1 is the value of the denominator count in the relative frequency estimatesecond define u1 to be the number of distinct outcomes seen in the f1 events in training datathe variable u1 can take any value from one to f1 inclusivethen we set analogous definitions for f2 and u2 lead to a2 f2 
f25u2 the coefficient five was chosen to maximize accuracy on the development set section 0 of the treebank all words occurring less than six times14 in training data and words in test data that have never been seen in training are replaced with the unknown tokenthis allows the model to handle robustly the statistics for rare or new wordswords in test data that have not been seen in training are deterministically assigned the pos tag that is assigned by the tagger described in ratnaparkhi as a preprocessing step the tagger is used to decode each test data sentenceall other words are tagged during parsing the output from ratnaparkhis tagger being ignoredthe pos tags allowed for each word are limited to those that have been seen in training data for that word the model is fully integrated in that partofspeech tags are statistically generated along with words in the models so that the parser will make a statistical decision as to the most likely tag for each known word in the sentencethe parsing algorithm for the models is a dynamic programming algorithm which is very similar to standard chart parsing algorithms for probabilistic or weighted grammarsthe algorithm has complexity o where n is the number of words in the stringin practice pruning strategies can improve efficiency a great dealthe appendices of collins give a precise description of the parsing algorithms an analysis of their computational complexity and also a description of the pruning methods that are employedsee eisner and satta for an o algorithm for lexicalized grammars that could be applied to the models in this papereisner and satta also describe an o algorithm for a restricted class of lexicalized grammars it is an open question whether this restricted class includes the models in this articlethe parser was trained on sections 221 of the wall street journal portion of the penn treebank and tested on section 23 we use the parseval measures to compare performance number of correct constituents in proposed parse number of constituents in proposed parse number of correct constituents in proposed parse number of constituents in treebank parse crossing brackets number of constituents that violate constituent boundaries with a constituent in the treebank parse for a constituent to be correct it must span the same set of words and have the same label15 as a constituent in the treebank parsetable 2 shows the results for models 1 2 and 3 and a variety of other models in the literaturetwo models outperform models 2 and 3 on section 23 of the treebankcollins uses a technique based on boosting algorithms for machine learning that reranks nbest output from model 2 in this articlecharniak describes a series of enhancements to the earlier model of charniak the precision and recall of the traces found by model 3 were 938 and 901 respectively where three criteria must be met for a trace to be correct it must be an argument to the correct headword it must be in the correct position in relation to that headword 15 magerman collapses advp and prt into the same label for comparison we also removed this distinction when calculating scoresresults on section 23 of the wsj treebanklrlp labeled recallprecisioncbs is the average number of crossing brackets per sentence0 cbs 2 cbs are the percentage of sentences with 0 or 2 crossing brackets respectivelyall the results in this table are for models trained and tested on the same data using the same evaluation metric the main model changes were the improved treatment of punctuation together with the 
addition of the pp and pcc parameters and it must be dominated by the correct nonterminal labelfor example in figure 7 the trace is an argument to bought which it follows and it is dominated by a vpof the 437 cases 341 were stringvacuous extraction from subject position recovered with 963 precision and 988 recall and 96 were longer distance cases recovered with 814 precision and 594 recall16this section discusses some aspects of the models in more detailsection 71 gives a much more detailed analysis of the parsers performancein section 72 we examine 16 we exclude infinitival relative clauses from these figures the algorithm scored 41 precision and 18 recall on the 60 cases in section 23but infinitival relatives are extremely difficult even for human annotators to distinguish from purpose clauses the distance features in the modelin section 73 we examine how the model interacts with the penn treebank style of annotationfinally in section 74 we discuss the need to break down contextfree rules in the treebank in such a way that the model will generalize to give nonzero probability to rules not seen in trainingin each case we use three methods of analysisfirst we consider how various aspects of the model affect parsing performance through accuracy measurements on the treebanksecond we look at the frequency of different constructions in the treebankthird we consider linguistically motivated examples as a way of justifying various modeling choicesin this section we look more closely at the parser by evaluating its performance on specific constituents or constructionsthe intention is to get a better idea of the parsers strengths and weaknessesfirst table 3 has a breakdown of precision and recall by constituent typealthough somewhat useful in understanding parser performance a breakdown of accuracy by constituent type fails to capture the idea of attachment accuracyfor this reason we also evaluate the parsers precision and recall in recovering dependencies between wordsthis gives a better indication of the accuracy on different kinds of attachmentsa dependency is defined as a triple with the following elements recall and precision for different constituent types for section 0 of the treebank with model 2label is the nonterminal label proportion is the percentage of constituents in the treebank section 0 that have this label count is the number of constituents that have this labela tree and its associated dependenciesnote that in normalizing dependencies all pos tags are replaced with tag and the npc parent in the fifth relation is replaced with npin addition the relation is normalized to some extentfirst all pos tags are replaced with the token tag so that postagging errors do not lead to errors in dependencies17 second any complement markings on the parent or head nonterminal are removedfor example is replaced by this prevents parsing errors where a complement has been mistaken to be an adjunct leading to more than one dependency erroras an example in figure 12 if the np the man with the telescope was mistakenly identified as an adjunct then without normalization this would lead to two dependency errors both the pp dependency and the verbobject relation would be incorrectwith normalization only the verbobject relation is incorrectunder this definition goldstandard and parseroutput trees can be converted to sets of dependencies and precision and recall can be calculated on these dependenciesdependency accuracies are given for section 0 of the treebank in table 4table 5 gives a breakdown of the 
accuracies by dependency typetable 6 shows the dependency accuracies for eight subtypes of dependency that together account for 94 of all dependencies complement or where is any complement except vpc the most frequent verb complements subjectverb and objectverb are recovered with over 95 precision and 92 recalla conclusion to draw from these accuracies is that the parser is doing very well at recovering the core structure of sentences complements sentential heads and basenp relationships are all recovered with over 90 accuracythe main sources of errors are adjunctscoordination is especially difficult for the parser most likely because it often involves a dependency between two content words leading to very sparse statisticsthe distance measure whose implementation was described in section 311 deserves more discussion and motivationin this section we consider it from three perspectives its influence on parsing accuracy an analysis of distributions in training data that are sensitive to the distance variables and some examples of sentences in which the distance measure is useful in discriminating among competing analyses721 impact of the distance measure on accuracytable 7 shows the results for models 1 and 2 with and without the adjacency and verb distance measuresit is clear that the distance measure improves the models accuracywhat is most striking is just how badly model 1 performs without the distance measurelooking at the parsers output the reason for this poor performance is that the adjacency condition in the distance measure is approximating subcategorization informationin particular in phrases such as pps and sbars that almost always take exactly one complement to the right of their head the adjacency feature encodes this monovalency through parameters p 0 and p 1figure 13 shows some particularly bad structures returned by model 1 with no distance variablesanother surprise is that subcategorization can be very useful but that the distance measure has masked this utilityone interpretation in moving from the least parameterized model to the fully parameterized model is that the adjacency condition adds around 11 in accuracy the verb condition adds another 15 and subcategorization finally adds a mere 08under this interpretation subcategorization information is not all that useful but under another interpretation subcategorization is very useful in moving from model 1 to model 2 we see a 10 improvement as a result of subcategorization parameters adjacency then adds a 15 improvement and the verb condition adds a final 1 improvementfrom an engineering point of view given a choice of whether to add just distance or subcategorization to the model distance is preferablebut linguistically it is clear that adjacency can only approximate subcategorization and that subcategorization is distribution of nonterminals generated as postmodifiers to an np at various distances from the heada true means the modifier is adjacent to the head v true means there is a verb between the head and the modifierdistributions were calculated from the first 10000 events for each of the three cases in sections 221 of the treebank more correct in some sensein freewordorder languages distance may not approximate subcategorization at all well a complement may appear to either the right or left of the head confusing the adjacency condition722 frequencies in training datatables 8 and 9 show the effect of distance on the distribution of modifiers in two of the most frequent syntactic environments np and verb 
modificationthe distribution varies a great deal with distancemost striking is the way that the probability of stop increases with increasing distance from 71 to 89 to 98 in the np case from 8 to 60 to 96 in the verb caseeach modifier probability generally decreases with distancefor example the probability of seeing a pp modifier to an np decreases from 177 to 557 to 093distribution of nonterminals generated as postmodifiers to a verb within a vp at various distances from the heada true means the modifier is adjacent to the head v true means there is a verb between the head and the modifierthe distributions were calculated from the first 10000 events for each of the distributions in sections 221auxiliary verbs were excluded from these statistics components of the distance measure allow the model to learn a preference for rightbranching structuresfirst consider the adjacency conditionfigure 14 shows some examples in which rightbranching structures are more frequentusing the statistics from tables 8 and 9 the probability of the alternative structures can be calculatedthe results are given belowthe rightbranching structures get higher probability if the distance variables were not conditioned on the product of terms for the two alternatives would be identical and the model would have no preference for one structure over anotherprobabilities for the two alternative pp structures in figure 14 are as follows some alternative structures for the same surface sequence of chunks in which the adjacency condition distinguishes between the two structuresthe percentages are taken from sections 221 of the treebankin both cases rightbranching structures are more frequent 0177 x 00557 x 08853 x 07078 0006178 probabilities for the sbar case in figure 14 assuming the sbar contains a verb are as follows some alternative structures for the same surface sequence of chunks in which the verb condition in the distance measure distinguishes between the two structuresin both cases the lowattachment analyses will get higher probability under the model because of the low probability of generating a pp modifier involving a dependency that crosses a verb ples in which the verb condition is important in differentiating the probability of two structuresin both cases an adjunct can attach either high or low but high attachment results in a dependencys crossing a verb and has lower probabilityan alternative to the surface string feature would be a predicate such as were any of the previous modifiers in x where x is a set of nonterminals that are likely to contain a verb such as vp sbar s or sgthis would allow the model to handle cases like the first example in figure 15 correctlythe second example shows why it is preferable to condition on the surface stringin this case the verb is invisible to the top level as it is generated recursively below the np object725 structural versus semantic preferencesone hypothesis would be that lexical statistics are really what is important in parsing that arriving at a correct interpretation for a sentence is simply a matter of finding the most semantically plausible analysis and that the statistics related to lexical dependencies approximate this notion of plausibilityimplicitly we would be just as well off if statistics were calculated between items at the predicateargument level with no reference to structurethe distance preferences under this interpretation are just a way of mitigating sparsedata problems when the lexical statistics are too sparse then falling back on some structural 
preference is not ideal but is at least better than chancethis hypothesis is suggested by previous work on specific cases of attachment ambiguity such as pp attachment which has showed that models will perform better given lexical statistics and that a straight structural preference is merely a fallbackbut some examples suggest this is not the case that in fact many sentences have several equally semantically plausible analyses but that structural preferences distinguish strongly among themtake the following example surprisingly this sentence has two analyses bill can be the deep subject of either believed or shotyet people have a very strong preference for bill to be doing the shooting so much so that they may even miss the second analysisas evidence that structural preferences can even override semantic plausibility take the following example this sentence is a garden path the structural preference for yesterday to modify the most recent verb is so strong that it is easy to miss the semantically plausible interpretation paraphrased as flip said yesterday that squeaky will do the workthe model makes the correct predictions in these casesin example the statistics in table 9 show that a pp is nine times as likely to attach low as to attach high when two verbs are candidate attachment points in example the probability of seeing an np modifier to do in a nonadjacent but nonverbcrossing environment is 211 in sections 221 of the treebank in contrast the chance of seeing an np adjunct modifying said across a verb is 0026 the two probabilities differ by a factor of almost 80figures 16 and 17 show some alternative styles of syntactic annotationthe penn treebank annotation style tends to leave trees quite flat typically with one level of structure for each xbar level at the other extreme are completely binarybranching representationsthe two annotation styles are in some sense equivalent in that it is easy to define a onetoone mapping between thembut crucially two different annotation styles may lead to quite different parsing accuracies for a given model even if the two representations are equivalent under some onetoone mappinga parsing model does not need to be tied to the annotation style of the treebank on which it is trainedthe following procedure can be used to transform trees in both training and test data into a new representation alternative annotation styles for a sentence s with a verb head v left modifiers x1 x2 and right modifiers y1 y2 the penn treebank style of analysis an alternative but equivalent binary branching representationalternative annotation styles for a noun phrase with a noun head n left modifiers x1 x2 and right modifiers y1 y2 the penn treebank style of analysis an alternative but equivalent binary branching representation our modification of the penn treebank style to differentiate recursive and nonrecursive nps as long as there is a onetoone mapping between the treebank and the new representation nothing is lost in making such a transformationgoodman and johnson both suggest this strategygoodman converts the treebank into binarybranching treesjohnson considers conversion to a number of different representations and discusses how this influences accuracy for nonlexicalized pcfgsthe models developed in this article have tacitly assumed the penn treebank style of annotation and will perform badly given other representations this section makes this point more explicit describing exactly what annotation style is suitable for the models and showing how other annotation 
styles will cause problemsthis dependence on penn treebankstyle annotations does not imply that the models are inappropriate for a treebank annotated in a different style in this case we simply recommend transforming the trees into flat onelevelperxbarlevel trees before training the model as in the threestep procedure outlined aboveother models in the literature are also very likely to be sensitive to annotation stylecharniaks models will most likely perform quite differently with binarybranching trees the models of magerman and ratnaparkhi use contextual predicates that would most likely need to be modified given a different annotation stylegoodmans models are the exception as he already specifies that the treebank should be transformed into his chosen representation binarybranching trees resentations in figures 16 and 17 have the same lexical dependencies the difference between the representations involves structural preferences such as the rightbranching preferences encoded by the distance measureapplying the models in this article to treebank analyses that use this type of headcentered bb binarybranching structures flat penn treebank style annotationsin each case the binarybranching annotation style prevents the model from learning that these structures should receive low probability because of the long distance dependency associated with the final pp binarybranching tree will result in a distance measure that incorrectly encodes a preference for rightbranching structuresto see this consider the examples in figure 18in each binarybranching example the generation of the final modifying pp is blind to the distance between it and the head that it modifiesat the top level of the tree it is apparently adjacent to the head crucially the closer modifier the other pp in is hidden lower in the tree structureso the model will be unable to differentiate generation of the pp in adjacent versus nonadjacent or nonverbcrossing versus verbcrossing environments and the structures in figure 18 will be assigned unreasonably high probabilitiesthis does not mean that distance preferences cannot be encoded in a binarybranching pcfggoodman achieves this by adding distance features to the nonterminalsthe spirit of this implementation is that the toplevel rules vp vp pp and np np pp would be modified to vp vp pp and np np pp respectively where means a phrase in which the head has a verb in its right modifiers and means a phrase that has at least one right modifier to the headthe model will learn from training data that p ppvp p ppvp that is that a prepositionalphrase modification is much more likely when it does not cross a verb shows the modification to the penn treebank annotation to relabel basenps as npbit also illustrates a problem that arises if a distinction between the two is not made structures such as that in figure 19 are assigned high probabilities even if they examples of other phrases in the penn treebank in which nonrecursive and recursive phrases are not differentiated are never seen in training datathe model is fooled by the binarybranching style into modeling both pps as being adjacent to the head of the noun phrase so 19 will be assigned a very high probabilitythis problem does not apply only to nps other types of phrases such as adjectival phrases or adverbial phrases also have nonrecursive and recursive levels which are not differentiated in the penn treebankideally these cases should be differentiated too we did not implement this change because it is unlikely to make much difference in 
accuracy given the relative infrequency of these cases the parsing approaches we have described concentrate on breaking down contextfree rules in the treebank into smaller componentslexicalized rules were initially broken down to barebones markov processes then increased dependency on previously generated modifiers was built back up through the distance measure and subcategorizationeven with this additional context the models are still able to recover rules in test data that have never been seen in training dataan alternative proposed in charniak is to limit parsing to those contextfree rules seen in training dataa lexicalized rule is predicted in two stepsfirst the whole contextfree rule is generatedsecond the lexical items are filled inthe probability of a rule is estimated as19 the estimation technique used in charniak for the cf rule probabilities interpolates several estimates the lowest being p pany rules not seen in training data will be assigned zero probability with this modelparse trees in test data will be limited to include rules seen in traininga problem with this approach is coverageas shown in this section many test data sentences will require rules that have not been seen in trainingthis gives motivation for breaking down rules into smaller componentsthis section motivates the need to break down rules from four perspectivesfirst we discuss how the penn treebank annotation style leads to a very large number of grammar rulessecond we assess the extent of the coverage problem by looking at rule frequencies in training datathird we conduct experiments to assess the impact of the coverage problem on accuracyfourth we discuss how breaking rules down may improve estimation as well as coverage the penn treebank annotation style has already been discussed in section 73the flatness of the trees leads to a very large number of rules primarily because the number of adjuncts to a head is potentially unlimited for example there can be any number of pp adjuncts to a head verba binarybranching grammar can generate an unlimited number of adjuncts with very few rulesfor example the following grammar generates any sequence vp v in contrast the penn treebank style would create a new rule for each number of pps seen in training datathe grammar would be and so on other adverbial adjuncts such as adverbial phrases or adverbial sbars can also modify a verb several times and all of these different types of adjuncts can be seen together in the same rulethe result is a combinatorial explosion in the number of rulesto give a flavor of this here is a random sample of rules of the format vp vb modifier that occurred only once in sections 221 of the penn treebank it is not only verb phrases that because this kind of combinatorial explosion other phrases in particular nonrecursive noun phrases also contribute a huge number of rulesthe next section considers the distributional properties of the rules in more detailnote that there is good motivation for the penn treebanks decision to represent rules in this way rather than with rules expressing chomsky adjunction and first it allows the argumentadjunct distinction for pp modifiers to verbs to be left undefined this distinction was found to be very difficult for annotatorssecond in the surface ordering adjuncts are often found closer to the head than complements thereby yielding structures that fall outside the chomsky adjunction schemafor example a rule such as is found very frequently in the penn treebank sbar complements nearly always extrapose over 
adjuncts742 quantifying the coverage problemto quantify the coverage problem rules were collected from sections 221 of the penn treebankpunctuation was raised as high as possible in the tree and the rules did not have complement markings or the distinction between basenps and recursive npsunder these conditions 939382 rule tokens were collected there were 12409 distinct rule typeswe also collected the count for each ruletable 10 shows some statistics for these rulesa majority of rules in the grammar occur only oncethese rules account for 072 of rules by tokenthat is if one of the 939382 rule tokens in sections 221 of the treebank were drawn at random there would be a 072 chance of its being the only instance of that rule in the 939382 tokenson the other hand if a rule were drawn at random from the 12409 rules in the grammar induced from those sections there would be a 545 chance of that rules having occurred only oncethe percentage by token of the onecount rules is an indication of the coverage problemfrom this estimate 072 of all rules required in test data would never have been seen in trainingit was also found that 150 of all sentences have at least one rule that occurred just oncethis gives an estimate that roughly 1 in 667 sentences in test data will not be covered by a grammar induced from 40000 sentences in the treebankif the complement markings are added to the nonterminals and the basenpnonrecursive np distinction is made then the coverage problem is made worsetable 11 gives the statistics in this caseby our counts 171 of all sentences contain at least 1 onecount rule the impact of the coverage problem on parsing accuracysection 0 of the treebank was parsed with models 1 and 2 as before but the parse trees were restricted to include rules already seen in training datatable 12 shows the resultsrestricting the rules leads to a 05 decrease in recall and a 16 decrease in precision for model 1 and a 09 decrease in recall and a 20 decrease in precision for model 2 only motivation for breaking down rulesthe method may also improve estimationto see this consider the rules headed by told whose counts are shown in table 13estimating the probability p using charniaks method would interpolate two maximumlikelihood estimates λpml pml estimation interpolates between the specific lexically sensitive distribution in table 13 and the nonlexical estimate based on just the parent nonterminal vpthere are many different rules in the more specific distribution and there are several onecount rules from these statistics λ would have to be relatively lowthere is a high chance that a new rule for told will be required in test data therefore a reasonable amount of probability mass must be left to the backedoff estimate pmlthis estimation method is missing a crucial generalization in spite of there being many different rules the distribution over subcategorization frames is much sharpertold is seen with only five subcategorization frames in training data the large number of rules is almost entirely due to adjuncts or punctuation appearing after or between complementsthe estimation method in model 2 effectively estimates the probability of a rule as the left and right subcategorization frames lc and rc are chosen firstthe entire rule is then generated by markov processesonce armed with the pl and pr parameters the model has the ability to learn the generalization that told appears with a quite limited sharp distribution over subcategorization framessay that these parameters are again estimated through 
interpolation for example in this case λ can be quite highonly five subcategorization frames have been seen in the 147 casesthe lexically specific distribution pml can therefore be quite highly trustedrelatively little probability mass is left to the backedoff estimatein summary from the distributions in table 13 the model should be quite uncertain about what rules told can appear withit should be relatively certain however about the subcategorization frameintroducing subcategorization parameters allows the model to generalize in an important way about ruleswe have carefully isolated the core of rulesthe subcategorization framethat the model should be certain aboutwe should note that charniaks method will certainly have some advantages in estimation it will capture some statistical properties of rules that our independence assumptions will lose unfortunately because of space limitations it is not possible to give a complete review of previous work in this articlein the next two sections we give a detailed comparison of the models in this article to the lexicalized pcfg model of charniak and the historybased models of jelinek et al magerman and ratnaparkhi for discussion of additional related work chapter 4 of collins attempts to give a comprehensive review of work on statistical parsing up to around 1998of particular relevance is other work on parsing the penn wsj treebank eisner describes several dependencybased models that are also closely related to the models in this articlecollins also describes a dependencybased model applied to treebank parsinggoodman describes probabilistic feature grammars and their application to parsing the treebankchelba and jelinek describe an incremental historybased parsing approach that is applied to language modeling for speech recognitionhistorybased approaches were introduced to parsing in black et al roark describes a generative probabilistic model of an incremental parser with good results in terms of both parse accuracy on the treebank and also perplexity scores for language modelingearlier work that is of particular relevance considered the importance of relations between lexical heads for disambiguation in parsingsee hindle and rooth for one of the earliest pieces of research on this topic in the context of prepositionalphrase attachment ambiguityfor work that uses lexical relations for parse disambiguation all with very promising resultssee sekine et al jones and eisner and alshawi and carter statistical models of lexicalized grammatical formalisms also lead to models with parameters corresponding to lexical dependenciessee resnik schabes and schabes and waters for work on stochastic treeadjoining grammarsjoshi and srinivas describe an alternative supertagging model for treeadjoining grammarssee alshawi for work on stochastic headautomata and lafferty sleator and temperley for a stochastic version of link grammarde marcken considers stochastic lexicalized pcfgs with specific reference to them methods for unsupervised trainingseneff describes the use of markov models for rule generation which is closely related to the markovstyle rules in the models in the current articlefinally note that not all machinelearning methods for parsing are probabilisticsee brill and hermjakob and mooney for rulebased learning systemsin recent work chiang has shown that the models in the current article can be implemented almost unchanged in a stochastic treeadjoining grammarbikel has developed generative statistical models that integrate word sense information into the 
parsing processeisner develops a sophisticated generative model for lexicalized contextfree rules making use of a probabilistic model of lexicalized transformations between rulesblaheta and charniak describe methods for the recovery of the semantic tags in the penn treebank annotations a significant step forward from the complementadjunct distinction recovered in model 2 of the current articlecharniak gives measurements of perplexity for a lexicalized pcfggildea reports on experiments investigating the utility of different features in bigram lexicaldependency models for parsingmiller et al develop generative lexicalized models for information extraction of relationsthe approach enhances nonterminals in the parse trees to carry semantic labels and develops a probabilistic model that takes these labels into accountcollins et al describe how the models in the current article were applied to parsing czechcharniak describes a parsing model that also uses markov processes to generate rulesthe model takes into account much additional context through a maximumentropyinspired modelthe use of additional features gives clear improvements in performancecollins shows similar improvements through a quite different model based on boosting approaches to reranking an initial modelin fact model 2 described in the current articleis used to generate nbest outputthe reranking approach attempts to rerank the nbest lists using additional features that are not used in the initial modelthe intention of this approach is to allow greater flexibility in the features that can be included in the modelfinally bod describes a very different approach that gives excellent results on treebank parsing comparable to the results of charniak and collins we now give a more detailed comparison of the models in this article to the parser of charniak the model described in charniak has two types of parameters for example the dependency parameter for an np headed by profits which is the subject of the verb rose would be pthis nonterminal could expand with any of the rules s 0 in the grammarthe rule probability is defined as pso the rule probability depends on the nonterminal being expanded its headword and also its parentthe next few sections give further explanation of the differences between charniaks models and the models in this article features of charniaks modelfirst the rule probabilities are conditioned on the parent of the nonterminal being expandedour models do not include this information although distinguishing recursive from nonrecursive nps can be considered a reduced form of this informationsecond charniak uses wordclass information to smooth probabilities and reports a 035 improvement from this featurefinally charniak uses 30 million words of text for unsupervised traininga parser is trained from the treebank and used to parse this text statistics are then collected from this machineparsed text and merged with the treebank statistics to train a second modelthis gives a 05 improvement in performancecharniaks dependency parameters are conditioned on less informationas noted previously whereas our parameters are pl2 charniaks parameters in our notation would be pl2the additional information included in our models is as follows h the head nonterminal label at first glance this might seem redundant for example an s will usually take a vp as its headin some cases however the head label can vary for example an s can take another s as its head in coordination cases lti t the pos tags for the head and modifier wordsinclusion 
of these tags allows our models to use pos tags as word class informationcharniaks model may be missing an important generalization in this respectcharniak shows that using the pos tags as word class information in the model is important for parsing accuracy c the coordination flagthis distinguishes for example coordination cases from appositives charniaks model will have the same parameterpin both of these cases p lcrc the punctuation distance and subcategorization variablesit is difficult to tell without empirical tests whether these features are important model are effectively decomposed into our l1 parameters the head parameters andin models 2 and 3the subcategorization and gap parametersthis decomposition allows our model to assign probability to rules not seen in training data see section 74 for an extensive discussion tures to encode preferences for rightbranching structurescharniaks model does not represent this information explicitly but instead learns it implicitly through rule probabilitiesfor example for an np pp pp sequence the preference for a rightbranching structure is encoded through a much higher probability for the rule np np pp than for the rule np np pp ppthis strategy does not encode all of the information in the distance measurethe distance measure effectively penalizes rules np npb np pp where the middle np contains a verb in this case the pp modification results in a dependency that crosses a verbcharniaks model is unable to distinguish cases in which the middle np contains a verb from those in which it does notwe now make a detailed comparison of our models to the historybased models of ratnaparkhi jelinek et al and magerman a strength of these models is undoubtedly the powerful estimation techniques that they use maximumentropy modeling or decision trees a weakness we will argue in this section is the method of associating parameters with transitions taken by bottomup shiftreducestyle parserswe give examples in which this method leads to the parameters unnecessarily fragmenting the training data in some cases or ignoring important context in other casessimilar observations have been made in the context of tagging problems using maximumentropy models we first analyze the model of magerman through three common examples of ambiguity pp attachment coordination and appositivesin each case a word sequence s has two competing structures t1 and t2 with associated decision sequences and respectivelythus the probability of the two structures can be written as it will be useful to isolate the decision between the two structures to a single probability termlet the value j be the minimum value of i such that di eithen we can rewrite the two probabilities as follows the first thing to note is that 11i1j1 p 11i1j1 p so that these probability terms are irrelevant to the decision between the two structureswe make one additional assumption that this is justified for the examples in this section because once the jth decision is made the following decisions are practically deterministicequivalently we are assuming that p p 1 that is that very little probability mass is lost to trees other than t1 or t2given these two equalities we have isolated the decision between the two structures to the parameters p and pfigure 21 shows a case of pp attachmentthe first thing to note is that the pp attachment decision is made before the pp is even builtthe decision is linked to the np preceding the preposition whether the arc above the np should go left or rightthe next thing to note is that at 
least one important feature the verb falls outside of the conditioning contextthis could be repaired by considering additional context but there is no fixed bound on how far the verb can be from the decision pointnote also that in other cases the method fragments the data in unnecessary wayscases in which the verb directly precedes the np or is one place farther to the left are treated separatelyfigure 22 shows a similar example np coordination ambiguityagain the pivotal decision is made in a somewhat counterintuitive location at the np preceding the coordinatorat this point the np following the coordinator has not been built and its head noun is not in the contextual windowfigure 23 shows an appositive example in which the head noun of the appositive np is not in the contextual window when the decision is madethese last two examples can be extended to illustrate another problemthe np after the conjunct or comma could be the subject of a following clausefor example and are two candidate structures for the same sequence of words shows the first decision in which the two structures differthe arc above the np can go either left of the appositive phrase or right of the appositive phrase in john likes mary and bill loves jill the decision not to coordinate mary and bill is made just after the np mary is builtat this point the verb loves is outside the contextual window and the model has no way of telling that bill is the subject of the following clausethe model is assigning probability mass to globally implausible structures as a result of points of local ambiguity in the parsing processsome of these problems can be repaired by changing the derivation order or the conditioning contextratnaparkhi has an additional chunking stage which means that the head noun does fall within the contextual window for the coordination and appositive casesthe models in this article incorporate parameters that track a number of linguistic phenomena bigram lexical dependencies subcategorization frames the propagation of slash categories and so onthe models are generative models in which parse trees are decomposed into a number of steps in a topdown derivation of the tree and the decisions in the derivation are modeled as conditional probabilitieswith a careful choice of derivation and independence assumptions the resulting model has parameters corresponding to the desired linguistic phenomenain addition to introducing the three parsing models and evaluating their performance on the penn wall street journal treebank we have aimed in our discussion to give more insight into the models their strengths and weaknesses the effect of various features on parsing accuracy and the relationship of the models to other work on statistical parsingin conclusion we would like to highlight the following points subcategorization parameters performs very poorly suggesting that the adjacency feature is capturing some subcategorization information in the model 1 parserthe results in table 7 show that the subcategorization adjacency and verbcrossing features all contribute significantly to model 2s performance section 73 described how the three models are wellsuited to the penn treebank style of annotation and how certain phenomena may fail to be modeled correctly given treebanks with different annotation stylesthis may be an important point to bear in mind when applying the models to other treebanks or other languagesin particular it may be important to perform transformations on some structures in treebanks with different annotation 
styles section 74 gave evidence showing the importance of the models ability to break down the contextfree rules in the treebank thereby generalizing to produce new rules on test examplestable 12 shows that precision on section 0 of the treebank decreases from 890 to 870 and recall decreases from 888 to 879 when the model is restricted to produce only those contextfree rules seen in training dataalthough certainly similar to the models of charniak jelinek et al and magerman the three models in this article have some significant differences which are identified in section 81section 82 showed that the parsing models of ratnaparkhi jelinek et al and magerman can suffer from very similar problems to the label bias or observation bias problem observed in tagging models as described in lafferty mccallum and pereira and klein and manning my phd thesis is the basis of the work in this article i would like to thank mitch marcus for being an excellent phd thesis adviser and for contributing in many ways to this researchi would like to thank the members of my thesis committeearavind joshi mark liberman fernando pereira and mark steedmanfor the remarkable breadth and depth of their feedbackthe work benefited greatly from discussions with jason eisner dan melamed adwait ratnaparkhi and paola merlothanks to dimitrios samaras for giving feedback on many portions of the worki had discussions with many other people at ircs university of pennsylvania which contributed quite directly to this research supervision was the beginning of this researchfinally thanks to the anonymous reviewers for their comments
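To make the argument about history-based models in the comparison above more concrete, the following sketch shows how, for two competing parses of the same word sequence, every decision probability before the first point of divergence cancels, so the choice reduces to the ratio of the two pivotal decision probabilities. The shift-reduce decision names and the probability table are invented toy values, not parameters of Magerman's or Ratnaparkhi's actual models.

```python
# Two candidate parses T1, T2 are each a product of conditional decision
# probabilities; every term before the first differing decision j is shared
# and cancels when the parses are compared. Toy decisions and numbers only.

def parse_probability(decisions, model):
    """P(T) = product over i of P(d_i | d_1 ... d_{i-1})."""
    p = 1.0
    history = ()
    for d in decisions:
        p *= model[(history, d)]
        history = history + (d,)
    return p

# Invented conditional probabilities; "arc_left"/"arc_right" play the role
# of the pivotal PP-attachment choice discussed above.
model = {
    ((), "shift_NP"): 1.0,
    (("shift_NP",), "arc_left"): 0.3,    # attach to the verb
    (("shift_NP",), "arc_right"): 0.7,   # attach to the noun
    (("shift_NP", "arc_left"), "reduce"): 1.0,
    (("shift_NP", "arc_right"), "reduce"): 1.0,
}

T1 = ["shift_NP", "arc_left", "reduce"]
T2 = ["shift_NP", "arc_right", "reduce"]

# first index at which the two decision sequences differ
j = next(i for i, (a, b) in enumerate(zip(T1, T2)) if a != b)
shared_prefix = tuple(T1[:j])

p1 = parse_probability(T1, model)
p2 = parse_probability(T2, model)
ratio_full = p1 / p2
ratio_pivot = model[(shared_prefix, T1[j])] / model[(shared_prefix, T2[j])]

print(round(ratio_full, 4), round(ratio_pivot, 4))   # both 0.4286
```

Because the decisions after the divergence point are near-deterministic in this toy model, the full ratio and the pivotal ratio coincide, which is why the conditioning context available at that single decision matters so much.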
J03-4003
headdriven statistical models for natural language parsingthis article describes three statistical models for natural language parsingthe models extend methods from probabilistic contextfree grammars to lexicalized grammars leading to approaches in which a parse tree is represented as the sequence of decisions corresponding to a headcentered topdown derivation of the treeindependence assumptions then lead to parameters that encode the xbar schema subcategorization ordering of complements placement of adjuncts bigram lexical dependencies whmovement and preferences for close attachmentall of these preferences are expressed by probabilities conditioned on lexical headsthe models are evaluated on the penn wall street journal treebank showing that their accuracy is competitive with other models in the literatureto gain a better understanding of the models we also give results on different constituent types as well as a breakdown of precisionrecall results in recovering various types of dependencieswe analyze various characteristics of the models through experiments on parsing accuracy by collecting frequencies of various structures in the treebank and through linguistically motivated examplesfinally we compare the models to others that have been applied to parsing the treebank aiming to give some explanation of the difference in performance of the various modelswe propose to generate the head of a phrase first and then generate its sisters using markovian processes thereby exploiting headsisterdependencies
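As a rough illustration of the head-centered, Markovian derivation described in the summary above, the sketch below generates a head child first and then its left and right sisters outward from the head, each conditioned on the parent, the head, and the direction, with a STOP symbol ending each side. The categories and probability tables are invented, and the distance, subcategorization, and gap conditioning of the full models is omitted.

```python
# Head-outward, Markovian rule generation: because a rule's probability is a
# product of per-sister terms, rules never seen in training can still
# receive probability mass. Toy probability tables only.

STOP = "<STOP>"

p_head = {("NP", "NN"): 0.6, ("NP", "NNS"): 0.4}   # P(head child | parent)

# P(sister | parent, head, direction)
p_mod = {
    ("NP", "NN", "left", "JJ"): 0.2,
    ("NP", "NN", "left", "DT"): 0.3,
    ("NP", "NN", "left", STOP): 0.5,
    ("NP", "NN", "right", "PP"): 0.1,
    ("NP", "NN", "right", STOP): 0.9,
}

def rule_probability(parent, head, left_sisters, right_sisters):
    """P(parent -> left_sisters head right_sisters), generated head-outward.
    left_sisters/right_sisters are given in surface order."""
    p = p_head[(parent, head)]
    # left sisters are generated from the head outward, ending in STOP
    for sister in list(reversed(left_sisters)) + [STOP]:
        p *= p_mod[(parent, head, "left", sister)]
    for sister in list(right_sisters) + [STOP]:
        p *= p_mod[(parent, head, "right", sister)]
    return p

print(rule_probability("NP", "NN", ["DT", "JJ"], []))        # NP -> DT JJ NN
# A rule not listed anywhere still gets a (smaller) nonzero score:
print(rule_probability("NP", "NN", ["DT", "JJ", "JJ"], []))  # NP -> DT JJ JJ NN
```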
disambiguating nouns verbs and adjectives using automatically acquired selectional preferences selectional preferences have been used by word sense disambiguation systems as one source of disambiguating information we evaluate wsd using selectional preferences acquired for english adjectivenoun subject and direct object grammatical relationships with respect to a standard test corpus the selectional preferences are specific to verb or adjective classes rather than individual word forms so they can be used to disambiguate the cooccurring adjectives and verbs rather than just the nominal argument heads we also investigate use of the onesenseperdiscourse heuristic to propagate a sense tag for a word to other occurrences of the same word within the current document in order to increase coverage although the preferences perform well in comparison with other unsupervised wsd systems on the same corpus the results show that for many applications further knowledge sources would be required to achieve an adequate level of accuracy and coverage in addition to quantifying performance we analyze the results to investigate the situations in which the selectional preferences achieve the best precision and in which the onesenseperdiscourse heuristic increases performance selectional preferences have been used by word sense disambiguation systems as one source of disambiguating informationwe evaluate wsd using selectional preferences acquired for english adjectivenoun subject and direct object grammatical relationships with respect to a standard test corpusthe selectional preferences are specific to verb or adjective classes rather than individual word forms so they can be used to disambiguate the cooccurring adjectives and verbs rather than just the nominal argument headswe also investigate use of the onesenseperdiscourse heuristic to propagate a sense tag for a word to other occurrences of the same word within the current document in order to increase coveragealthough the preferences perform well in comparison with other unsupervised wsd systems on the same corpus the results show that for many applications further knowledge sources would be required to achieve an adequate level of accuracy and coveragein addition to quantifying performance we analyze the results to investigate the situations in which the selectional preferences achieve the best precision and in which the onesenseperdiscourse heuristic increases performancealthough selectional preferences are a possible knowledge source in an automatic word sense disambiguation system they are not a panaceaone problem is coverage most previous work has focused on acquiring selectional preferences for verbs and applying them to disambiguate nouns occurring at subject and direct object slots in normal running text however a large proportion of word tokens do not fall at these slotsthere has been some work looking at other slots and on using nominal arguments as disambiguators for verbs but the problem of coverage remainsselectional preferences can be used for wsd in combination with other knowledge sources but there is a need to ascertain when they work well so that they can be utilized to their full advantagethis article is aimed at quantifying the disambiguation performance of automatically acquired selectional preferences in regard to nouns verbs and adjectives with respect to a standard test corpus and evaluation setup and to identify strengths and weaknessesalthough there is clearly a limit to coverage using preferences alone because preferences are 
acquired only with respect to specific grammatical roles we show that when dealing with running text rather than isolated examples coverage can be increased at little cost in accuracy by using the onesenseperdiscourse heuristicwe acquire selectional preferences as probability distributions over the wordnet noun hyponym hierarchythe probability distributions are conditioned on a verb or adjective class and a grammatical relationshipa noun is disambiguated by using the preferences to give probability estimates for each of its senses in wordnet that is for wordnet synsetsverbs and adjectives are disambiguated by using the probability distributions and bayes rule to obtain an estimate of the probability of the adjective or verb class given the noun and the grammatical relationshippreviously we evaluated noun and verb disambiguation on the english allwords task in the senseval2 exercise we now present results also using preferences for adjectives again evaluated on the senseval2 test corpus the results are encouraging given that this method does not rely for training on any handtagged data or frequency distributions derived from such dataalthough a modest amount of english sensetagged data is available we nevertheless believe it is important to investigate methods that do not require such data because there will be languages or texts for which sensetagged data for a given word is not available or relevantthe goal of this article is to assess the wsd performance of selectional preference models for adjectives verbs and nouns on the senseval2 test corpusthere are two applications for wsd that we have in mind and are directing our researchthe first application is text simplification as outlined by carroll minnen pearce et al one subtask in this application involves substituting words with thier more frequent synonyms for example substituting letter for missiveour motivation for using wsd is to filter out inappropriate senses of a word token so that the substituting synonym is appropriate given the contextfor example in the following sentence we would like to use strategy rather than dodge as a substitute for scheme a recent government study singled out the scheme as an example to otherswe are also investigating the disambiguation of verb senses in running text before subcategorization information for the verbs is acquired in order to produce a subcategorization lexicon specific to sense for example if subcategorization were acquired specific to sense rather than verb form then distinct senses of fire could have different subcategorization entries selectional preferences could also then be acquired automatically from sensetagged data in an iterative approach we acquire selectional preferences from automatically preprocessed and parsed text during a training phasethe parser is applied to the test data as well in the runtime phase to identify grammatical relations among nouns verbs and adjectivesthe acquired selectional preferences are then applied to the nounverb and nounadjective pairs in these grammatical constructions for disambiguationthe overall structure of the system is illustrated in figure 1we describe the individual components in sections 3133 and 4the preprocessor consists of three modules applied in sequence a tokenizer a partofspeech tagger and a lemmatizerthe tokenizer comprises a small set of manually developed finitestate rules for identifying word and sentence boundariesthe tagger uses a bigram hidden markov model augmented with a statistical unknown word guesserwhen applied to the 
training data for selectional preference acquisition it produces the single highestranked pos tag for each wordin the runtime phase it returns multiple tag hypotheses each with an associated forwardbackward probability to reduce the impact of tagging errorsthe lemmatizer reduces inflected verbs and nouns to their base formsit uses a set of finitestate rules expressing morphological regularities and subregularities together with a list of exceptions for specific word formsthe parser uses a widecoverage unificationbased shallow grammar of english pos tags and punctuation and performs disambiguation using a contextsensitive probabilistic model recovering from extragrammaticality by returning partial parsesthe output of the parser is a set of grammatical relations specifying the syntactic dependency between each head and its dependent taken from the phrase structure tree that is returned from the disambiguation phasefor selectional preference acquisition we applied the analysis system to the 90 million words of the written portion of the british national corpus the parser produced complete analyses for around 60 of the sentences and partial analyses for over 95 of the remainderboth in the acquisition phase and at run time we extract from the analyser output subjectverb verbdirect object and nounadjective modifier dependencies1 we did not use the senseval2 penn treebankstyle bracketings supplied for the test datathe preferences are acquired for grammatical relations involving nouns and grammatically related adjectives or verbswe use wordnet synsets to define our sense inventoryour method exploits the hyponym links given for nouns the troponym links for verbs 2 and the similarto relationship given for adjectives the preference models are modifications of the tree cut models originally proposed by li and abe the main differences between that work and ours are that we acquire adjective as well as verb models and also that our models are with respect to verb and adjective classes rather than formswe acquire models for classes because we are using the models for wsd whereas li and abe used them for structural disambiguationwe define a tcm as followslet nc be the set of noun synsets in wordnet nc nc e wordnet and ns be the set of noun senses 3 in wordnet ns ns e wordneta tcm is a set of noun classes that partition ns disjointlywe use p to refer to such a set of classes in a tcma tcm is defined by p and a probability distribution the probability distribution is conditioned by the grammatical contextin this work the probability distribution associated with a tcm is conditioned on a verb class and either the subject or directobject relation or an adjective class and the adjectivenoun relationlet vc be the set of verb synsets in wordnet vc vc e wordnetlet ac be the set of adjective classes thus the tcms define a probability distribution over ns that is conditioned on a verb class or adjective class and a particular grammatical relation acquisition of a tcm for a given vc and gr proceeds as followsthe data for acquiring the preference are obtained from a subset of the tuples involving verbs in the synset or troponym synsetsnot all verbs that are troponyms or direct members of the synset are used in trainingwe take the noun argument heads occurring with verbs that have no more than 10 senses in wordnet and a frequency of 20 or more occurrences in the bnc data in the specified grammatical relationshipthe threshold of 10 senses removes some highly polysemous verbs having many sense distinctions that are 
rather subtleverbs that have more than 10 senses include very frequent verbs such as be and do that do not select strongly for their argumentsthe frequency threshold of 20 is intended to remove noisy datawe set the threshold by examining a plot of bnc frequency and the percentage of verbs at particular frequencies that are not listed in wordnet using 20 as a threshold for the subject slot results in only 5 verbs that are not found in wordnet whereas 73 of verbs with fewer than 20 bnc occurrences are not present in wordnet4 the selectionalpreference models for adjectivenoun relations are conditioned on an aceach ac comprises a group of adjective wordnet synsets linked by the similarto relationthese groups are formed such that they partition all adjective synsetsthus ac ac e wordnet adjective synsets linked by similartofor example figure 3 shows the adjective classes that include the adjective fundamental and that are formed in this way5 for selectionalpreference models conditioned on adjective classes we use only those adjectives that have 10 synsets or less in wordnet and have 20 or more occurrences in the bncthe set of ncs in p are selected from all the possibilities in the hyponym hierarchy according to the minimum description length principle as used by li and abe mdl finds the best tcm by considering the cost of describing both the model and the argument head data encoded in the modelthe cost for a tcm is calculated according to equation the number of parameters of the model is given by k which is the number of ncs in p minus onen is the sample of the argument head datathe cost of describing each noun argument head is calculated by the log of the probability estimate for that noun description length model description length data description length adjective classes that include fundamentalthe probability estimate for each n is obtained using the estimates for all the nss that n haslet cn be the set of ncs that include n as a direct member cn nc ncn nclet nc be a hypernym of nc on p and let nsnc ns nc then the estimate p is obtained using the estimates for the hypernym classes on p for all the cn that n belongs to the probability at any particular nc is divided by nsnc to give the estimate for each p under that ncthe probability estimates for the nc p or p are obtained from the tuples from the data of nouns cooccurring with verbs belonging to the conditioning vc in the specified grammatical relationship the frequency credit for a tuple is divided by cn for any n and by the number of synsets of v cv a hypernym nc includes the frequency credit attributed to all its hyponyms this ensures that the total frequency credit at any p across the hyponym hierarchy equals the credit for the conditioning vcthis will be the sum of the frequency credit for all verbs that are direct members or troponyms of the vc divided by the number of other senses of each of these verbs tcms for the directobject slot of two verb classes that include the verb seizeto ensure that the tcm covers all ns in wordnet we modify li and abes original scheme by creating hyponym leaf classes below all wordnets internal classes in the hyponym hierarchyeach leaf holds the ns previously held at the internal classfigure 4 shows portions of two tcmsthe tcms are similar as they both contain the verb seize but the tcm for the class that includes clutch has a higher probability for the entity noun class compared to the class that also includes assume and usurpthis example includes only toplevel wordnet classes although the tcm may use 
more specific noun classesnouns adjectives and verbs are disambiguated by finding the sense with the maximum probability estimate in the given contextthe method disambiguates nouns and verbs to the wordnet synset level and adjectives to a coarsegrained level of wordnet synsets linked by the similarto relation as described previouslynouns are disambiguated when they occur as subjects or direct objects and when modified by adjectiveswe obtain a probability estimate for each nc to which the target noun belongs using the distribution of the tcm associated with the cooccurring verb or adjective and the grammatical relationshipli and abe used tcms for the task of structural disambiguationto obtain probability estimates for noun senses occurring at classes beneath hypernyms on the cut li and abe used the probability estimate at the nc on the cut divided by the number of ns descendants as we do when finding r during training so the probability estimate is shared equally among all nouns in the nc as in equation one problem with doing this is that in cases in which the tcm is quite high in the hierarchy for example at the entity class the probability of any nss occurring under this nc on the tcm will be the same and does not allow us to discriminate among senses beneath this levelfor the wsd task we compare the probability estimates at each nc e cn so if a noun belongs to several synsets we compare the probability estimates given the context of these synsetswe obtain estimates for each nc by using the probability of the hypernym nc on r rather than assume that all synsets under a given nc on r have the same likelihood of occurrence we multiply the probability estimate for the hypernym nc by the ratio of the prior frequency of the nc that is p for which we seek the estimate divided by the prior frequency of the hypernym nc these prior estimates are taken from populating the noun hyponym hierarchy with the prior frequency data for the gr irrespective of the cooccurring verbsthe probability at the hypernym nc will necessarily total the probability at all hyponyms since the frequency credit of hyponyms is propagated to hypernymsthus to disambiguate a noun occurring in a given relationship with a given verb the nc e cn that gives the largest estimate for p is taken where the verb class is that which maximizes this estimate from cvthe tcm acquired for each vc of the verb in the given gr provides an estimate for p and the estimate for nc is obtained as in equation for example one target noun was letter which occurred as the direct object of sign in our parses of the senseval2 datathe tcm that maximized the probability estimate for p is shown in figure 5the noun letter is disambiguated by comparing the probability estimates on the tcm above the five senses of letter multiplied by the proportion of that probability mass attributed to that synsetalthough entity has a higher probability on the tcm compared to matter which is above the correct sense of letter6 the ratio of prior probabilities for the synset containing letter7 under entity is 0001 whereas that for the synset under matter is 0226this gives a probability of 0009 x 0226 0002 for the noun class probability given the verb class and grammatical contextthis is the highest probability for any of the synsets of letter and so in this case the correct sense is selectedverbs and adjectives are disambiguated using tcms to give estimates for p and p respectivelythese are combined with prior estimates for p and p using bayes rule to give and for adjectivenoun 
relations the prior distributions for p p and p are obtained during the training phasefor the prior distribution over nc the frequency credit of each noun in the specified gr in the training data is divided by cnthe frequency credit attached to a hyponym is propagated to the superordinate hypernyms and the frequency of a hypernym totals the frequency at its hyponyms the distribution over vc is obtained similarly using the troponym relationfor the distribution over ac the frequency credit for each adjective is divided by the number of synsets to which the adjective belongs and the credit for an ac is the sum over all the synsets that are members by virtue of the similarto wordnet linkto disambiguate a verb occurring with a given noun the vc from cv that gives the largest estimate for p is takenthe nc for the cooccurring noun is the nc from cn that maximizes this estimatethe estimate for p is taken as in equation but selecting the vc to maximize the estimate for p rather than pan adjective is likewise disambiguated to the ac from all those to which the adjective belongs using the estimate for p and selecting the nc that maximizes the p estimatethere is a significant limitation to the word tokens that can be disambiguated using selectional preferences in that they are restricted to those that occur in the specified grammatical relations and in argument head positionmoreover we have tcms only for adjective and verb classes in which there was at least one adjective or verb member that met our criteria for training we chose not to apply tcms for disambiguation where we did not have tcms for one or more classes for the verb or adjectiveto increase coverage we experimented with applying the onesenseperdiscourse heuristic with this heuristic a sense tag for a given word is propagated to other occurrences of the same word within the current document in order to increase coveragewhen applying the ospd heuristic we simply applied a tag for a noun verb or adjective to all the other instances of the same word type with the same part of speech in the discourse provided that only one possible tag for that word was supplied by the selectional preferences for that discoursesenseval2 english allwords task resultswe evaluated our system using the senseval2 test corpus on the english allwords task we entered a previous version of this system for the senseval2 exercise in three variants under the names sussexsel sussexselospd and sussexselospdana 8 for senseval2 we used only the direct object and subject slots since we had not yet dealt with adjectivesin figure 6 we show how our system fared at the time of senseval2 compared to other unsupervised systems9 we have also plotted the results of the supervised systems and the precision and recall achieved by using the most frequent sense 10 in the work reported here we attempted disambiguation for head nouns and verbs in subject and direct object relationships and for adjectives and nouns in adjectivenoun relationshipsfor each test instance we applied subject preferences before direct object preferences and direct object preferences before adjectivenoun preferenceswe also propagated sense tags to test instances not in these relationships by applying the onesenseperdiscourse heuristicwe did not use the senseval2 coarsegrained classification as this was not available at the time when we were acquiring the selectional preferenceswe therefore do not include in the following the coarsegrained results they are just slightly better than the finegrained results which seems 
to be typical of other systemsour latest overall results are shown in table 1in this table we show the results both with and without the ospd heuristicthe results for the english senseval2 tasks were generally much lower than those for the original senseval competitionat the time of the senseval2 workshop this was assumed to be due largely to the use of wordnet as the inventory as opposed to hector but palmer trang dang and fellbaum have subsequently shown that at least for the lexical sample tasks this was due to a harder selection of words with a higher average level of polysemyfor three of the most polysemous verbs that overlapped between the english lexical sample for senseval and senseval2 the performance was comparabletable 2 shows our precision results including use of the ospd heuristic broken down by part of speechalthough the precision for nouns is greater than that for verbs the difference is much less when we remove the trivial monosemous casesnouns verbs and adjectives all outperform their random baseline for precision and the difference is more marked when monosemous instances are droppedtable 3 shows the precision results for polysemous words given the slot and the disambiguation sourceoverall once at least one word token has been disambiguated by the preferences the ospd heuristic seems to perform better than the selectional preferenceswe can see however that although this is certainly true for the nouns the difference for the adjectives is less marked and the preferences outperform ospd for the verbsit seems that verbs obey the ospd principle much less than nounsalso verbs are best disambiguated by their direct objects whereas nouns appear to be better disambiguated as subjects and when modified by adjectivesthe precision of our system compares well with that of other unsupervised systems on the senseval2 english allwords task despite the fact that these other systems use a number of different sources of information for disambiguation rather than selectional preferences in isolationlight and greiff summarize some earlier wsd results for automatically acquired selectional preferencesthese results were obtained for three systems on a training and test data set constructed by resnik containing nouns occurring as direct objects of 100 verbs that select strongly for their objectsboth the test and training sets were extracted from the section of the brown corpus within the penn treebank and used the treebank parsesthe test set comprised the portion of this data within semcor containing these 100 verbs and the training set comprised 800000 words from the penn treebank parses of the brown corpus not within semcorall three systems obtained higher precision than the results we report here with ciaramita and johnsons bayesian belief networks achieving the best accuracy at 514these results are not comparable with ours however for three reasonsfirst our results for the directobject slot are for all verbs in the english allwords task as opposed to just those selecting strongly for their direct objectswe would expect that wsd results using selectional preferences would be better for the latter class of verbssecond we do not use manually produced parses but the output from our fully automatic shallow parserthird and finally the baselines reported for resniks test set were higher than those for the allwords taskfor resniks test data the random baseline was 285 whereas for the polysemous nouns in the directobject relation on the allwords task it was 239the distribution of senses was also 
perhaps more skewed for resniks test set since the first sense heuristic was 828 whereas it was 536 for the polysemous direct objects in the allwords taskalthough our results do show that the precision for the tcms compares favorably with that of other unsupervised systems on the english allwords task it would be worthwhile to compare other selectional preference models on the same dataalthough the accuracy of our system is encouraging given that it does not use handtagged data the results are below the level of stateoftheart supervised systemsindeed a system just assigning to each word its most frequent sense as listed in wordnet would do better than our preference models the firstsense heuristic however assumes the existence of sensetagged data that are able to give a definitive first sensewe do not use any firstsense informationalthough a modest amount of sensetagged data is available for english for other languages with minimal sensetagged resources the heuristic is not applicablemoreover for some words the predominant sense varies depending on the domain and text typeto quantify this we carried out an analysis of the polysemous nouns verbs and adjectives in semcor occurring in more than one semcor file and found that a large proportion of words have a different first sense in different files and also in different genres for adjectives there seems to be a lot less ambiguity and cancer that did better than average but whether or not they did better than the firstsense heuristic depends of course on the sense in which they are usedfor example all 10 occurrences of cancer are in the first sense so the first sense heuristic is impossible to beat in this casefor the test items that are not in their first sense we beat the firstsense heuristic but on the other hand we failed to beat the random baselineour performance on these items is low probably because they are lowerfrequency senses for which there is less evidence in the untagged training corpus we believe that selectional preferences would perform best if they were acquired from similar training data to that for which disambiguation is requiredin the future we plan to investigate our models for wsd in specific domains such as sport and financethe senses and frequency distribution of senses for a given domain will in general be quite different from those in a balanced corpusthere are individual words that are not used in the first sense on which our tcm preferences do well for example sound but there are not enough data to isolate predicates or arguments that are good disambiguators from those that are notwe intend to investigate this issue further with the senseval2 lexical sample data which contains more instances of a smaller number of wordsperformance of selectional preferences depends not just on the actual word being disambiguated but the cohesiveness of the tuple we have therefore investigated applying a threshold on the probability of the class before disambiguationfigure 7 presents a graph of precision against threshold applied to the probability estimate for the highestscoring classwe show alongside this the random baseline and the firstsense heuristic for these itemsselectional preferences appear to do better on items for which the probability predicted by our model is higher but the firstsense heuristic does even better on thesethe first sense heuristic with respect to semcor outperforms the selectional preferences when it is averaged over a given textthat seems to be the case overall but there will be some words and texts 
for which the first sense from semcor is not relevant and use of a threshold on probability and perhaps a differential between probability of the topranked senses suggested by the model should increase precisionthresholding the probability estimate for the highestscoring classin these experiments we applied the ospd heuristic to increase coverageone problem in doing this when using a finegrained classification like wordnet is that although the ospd heuristic works well for homonyms it is less accurate for related senses and this distinction is not made in wordnetwe did however find that in semcor for the majority of polysemous11 lemma and file combinations there was only one sense exhibited we refrained from using the ospd in situations in which there was conflicting evidence regarding the appropriate sense for a word type occurring more than once in an individual filein our experiments the ospd heuristic increased coverage by 7 and recall by 3 at a cost of only a 1 decrease in precisionwe quantified coverage and accuracy of sense disambiguation of verbs adjectives and nouns in the senseval2 english allwords test corpus using automatically acquired selectional preferenceswe improved coverage and recall by applying the onesenseperdiscourse heuristicthe results show that disambiguation models using only selectional preferences can perform with accuracy well above the random baseline although accuracy would not be high enough for applications in the absence of other knowledge sources the results compare well with those for other systems that do not use sensetagged training dataselectional preferences work well for some word combinations and grammatical relationships but not well for otherswe hope in future work to identify the situations in which selectional preferences have high precision and to focus on these at the expense of coverage on the assumption that other knowledge sources can be used where there is not strong evidence from the preferencesthe firstsense heuristic based on sensetagged data such as that available in semcor seems to beat unsupervised models such as oursfor many words however the predominant sense varies across domains and so we contend that it is worth concentrating on detecting when the first sense is not relevant and where the selectionalpreference models provide a high probability for a secondary sensein these cases evidence for a sense can be taken from multiple occurrences of the word in the document using the onesenseperdiscourse heuristicthis work was supported by uk epsrc project grn36493 robust accurate statistical parsing and eu fw5 project ist200134460 meaning we are grateful to rob koeling and three anonymous reviewers for their helpful comments on earlier draftswe would also like to thank david weir and mark mclauchlan for useful discussions
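The tree cut model selection described above rests on the MDL criterion: a candidate cut is scored by the cost of describing the model plus the cost of encoding the observed argument heads under the cut's probability estimates. The sketch below computes that total for two invented cuts over a toy set of direct objects; the real system searches cuts of the WordNet noun hyponym hierarchy, with the class probability shared equally among the nouns under each class on the cut.

```python
# MDL cost of a candidate tree cut: (k/2) * log2(N) for the model, where k is
# the number of classes on the cut minus one and N the number of observed
# argument heads, plus -sum log2 p(n) for encoding the data. Toy cuts only.

import math

def description_length(cut_probs, noun_probs, data):
    """cut_probs: {class: P(class | context)} for classes on the cut.
    noun_probs: {noun: P(noun | context)} derived by sharing class mass
    equally among the nouns under each class.  data: observed heads."""
    k = len(cut_probs) - 1                       # free parameters of the cut
    n = len(data)
    model_dl = (k / 2.0) * math.log2(n)
    data_dl = -sum(math.log2(noun_probs[w]) for w in data)
    return model_dl + data_dl

# Observed direct objects of a hypothetical verb class.
data = ["beer", "wine", "water", "water", "idea"]

# Candidate cut A: one coarse class covering all four noun types equally.
cut_a = {"entity": 1.0}
nouns_a = {w: 1.0 / 4 for w in ["beer", "wine", "water", "idea"]}

# Candidate cut B: a finer cut separating liquids from abstractions.
cut_b = {"liquid": 0.8, "abstraction": 0.2}
nouns_b = {"beer": 0.8 / 3, "wine": 0.8 / 3, "water": 0.8 / 3, "idea": 0.2}

for name, cut, nouns in [("coarse", cut_a, nouns_a), ("fine", cut_b, nouns_b)]:
    print(name, round(description_length(cut, nouns, data), 3))
```

MDL then prefers whichever cut yields the smaller total, trading model complexity against fit to the argument-head sample.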
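The disambiguation step itself can be sketched as follows: each candidate synset of the target noun is scored by the probability of its hypernym class on the cut, scaled by the ratio of the synset's prior frequency to its hypernym's prior frequency, and verb or adjective classes are then scored with Bayes' rule. The numbers below loosely mirror the letter / sign example in the text, but the class names and values are otherwise invented.

```python
# Noun sense scoring with a TCM plus prior-frequency ratios, and the
# Bayes-rule scoring used for verb (or adjective) classes. Toy numbers only.

tcm = {"entity": 0.012, "matter": 0.009}   # P(class on cut | verb class, dobj)

# ratio of the synset's prior frequency to its hypernym's prior frequency
prior_ratio = {
    "letter_as_written_message": 0.226,        # hyponym of "matter"
    "letter_as_alphabetic_character": 0.001,   # hyponym of "entity"
}

# candidate synset -> its hypernym class on the cut
candidates = {
    "letter_as_written_message": "matter",
    "letter_as_alphabetic_character": "entity",
}

def score_noun_sense(synset):
    return tcm[candidates[synset]] * prior_ratio[synset]

best = max(candidates, key=score_noun_sense)
# Despite "entity" carrying more mass on the cut, the prior ratio favours
# the written-message sense (0.009 * 0.226 = 0.002).
print(best, round(score_noun_sense(best), 4))

# Verb-class disambiguation: P(vc | nc, gr) is proportional to
# P(nc | vc, gr) * P(vc) / P(nc), with priors estimated during training.
def score_verb_class(p_nc_given_vc, p_vc, p_nc):
    return p_nc_given_vc * p_vc / p_nc

print(score_verb_class(0.002, 0.1, 0.04))   # toy numbers
```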
J03-4004
disambiguating nouns verbs and adjectives using automatically acquired selectional preferencesselectional preferences have been used by word sense disambiguation systems as one source of disambiguating informationwe evaluate wsd using selectional preferences acquired for english adjectivenoun subject and direct object grammatical relationships with respect to a standard test corpusthe selectional preferences are specific to verb or adjective classes rather than individual word forms so they can be used to disambiguate the cooccurring adjectives and verbs rather than just the nominal argument headswe also investigate use of the onesenseperdiscourse heuristic to propagate a sense tag for a word to other occurrences of the same word within the current document in order to increase coveragealthough the preferences perform well in comparison with other unsupervised wsd systems on the same corpus the results show that for many applications further knowledge sources would be required to achieve an adequate level of accuracy and coveragein addition to quantifying performance we analyze the results to investigate the situations in which the selectional preferences achieve the best precision and in which the onesenseperdiscourse heuristic increases performancewe report that the wordclass model performs well in unsupervised wsd
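The one-sense-per-discourse propagation mentioned in the summary above admits a very small implementation sketch: after the preferences have tagged the tokens they can reach, a tag is copied to the remaining tokens of the same lemma and part of speech in the document, but only when the preference-derived tags for that word in that document all agree. The token dictionaries below are a simplifying assumption about the data structures involved, not the system's actual representation.

```python
# One-sense-per-discourse (OSPD) propagation over a single document.

from collections import defaultdict

def apply_ospd(tokens):
    """tokens: list of dicts with keys 'lemma', 'pos', and optional 'sense'
    (the sense assigned by the selectional preferences, if any)."""
    tagged = defaultdict(set)
    for t in tokens:
        if "sense" in t:
            tagged[(t["lemma"], t["pos"])].add(t["sense"])

    for t in tokens:
        key = (t["lemma"], t["pos"])
        # propagate only if exactly one sense was proposed in this discourse
        if "sense" not in t and len(tagged[key]) == 1:
            t["sense"] = next(iter(tagged[key]))
            t["source"] = "ospd"
    return tokens

doc = [
    {"lemma": "letter", "pos": "n", "sense": "letter%written_message"},
    {"lemma": "letter", "pos": "n"},              # not in a covered relation
    {"lemma": "sign", "pos": "v", "sense": "sign%write_signature"},
    {"lemma": "bank", "pos": "n", "sense": "bank%institution"},
    {"lemma": "bank", "pos": "n", "sense": "bank%river_side"},
    {"lemma": "bank", "pos": "n"},                # conflicting evidence: stays untagged
]
for t in apply_ospd(doc):
    print(t["lemma"], t.get("sense", "-"), t.get("source", ""))
```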
cormet a computational corpusbased conventional metaphor extraction system cormet is a corpusbased system for discovering metaphorical mappings between concepts it does this by finding systematic variations in domainspecific selectional preferences which are inferred from large dynamically mined internet corpora metaphors transfer structure from a source domain to a target domain making some concepts in the target domain metaphorically equivalent to concepts in the source domain the verbs that select for a concept in the source domain tend to select for its metaphorical equivalent in the target domain this regularity detectable with a shallow linguistic analysis is used to find the metaphorical interconcept mappings which can then be used to infer the existence of higherlevel conventional metaphors most other computational metaphor systems use small handcoded semantic knowledge bases and work on a few examples although cormets only knowledge base is wordnet it can find the mappings constituting many conventional metaphors and in some cases recognize sentences instantiating those mappings cormet is tested on its ability to find a subset of the cormet is a corpusbased system for discovering metaphorical mappings between conceptsit does this by finding systematic variations in domainspecific selectional preferences which are inferred from large dynamically mined internet corporametaphors transfer structure from a source domain to a target domain making some concepts in the target domain metaphorically equivalent to concepts in the source domainthe verbs that select for a concept in the source domain tend to select for its metaphorical equivalent in the target domainthis regularity detectable with a shallow linguistic analysis is used to find the metaphorical interconcept mappings which can then be used to infer the existence of higherlevel conventional metaphorsmost other computational metaphor systems use small handcoded semantic knowledge bases and work on a few examplesalthough cormets only knowledge base is wordnet it can find the mappings constituting many conventional metaphors and in some cases recognize sentences instantiating those mappingscormet is tested on its ability to find a subset of the master metaphor list lakoff argues that rather than being a rare form of creative language some metaphors are ubiquitous highly structured and relevant to cognitionto date there has been no robust broadly applicable computational metaphor interpretation system a gap this article is intended to take a first step toward fillingmost computational models of metaphor depend on handcoded knowledge bases and work on a few examplescormet is designed to work on a larger class of metaphors by extracting knowledge from large corpora without drawing on any handcoded knowledge sources besides wordneta method for computationally interpreting metaphorical language would be useful for nlpalthough metaphorical word senses can be cataloged and treated as just another part of the lexicon this kind of representation ignores regularities in polysemya conventional metaphor may have a very large number of linguistic manifestations which makes it useful to model the metaphors underlying mechanismscormet is not capable of interpreting any manifestation of conventional metaphor but is a step toward such a systemcormet analyzes large corpora of domainspecific documents and learns the selectional preferences of the characteristic verbs of each domaina selectional preference is a verbs predilection for a particular type 
of argument in a particular rolefor instance the object of the verb pour is generally a liquidany noun that pour takes as an an object is likely to be intended as a liquid either metaphorically or literallycormet finds conventional metaphors by finding systematic differences in selectional preferences between domainsfor instance if cormet were to find a sentence like funds poured into his bank account in a document from the finance domain it could infer that in that domain pour has a selection preference for financial assets in its subjectby comparing this selectional preference with pours selectional preferences in the lab domain cormet can infer a metaphorical mapping from money to liquidsby finding sets of cooccuring interconcept mappings cormet can articulate the higherorder structure of conceptual metaphorsnote that cormet is designed to detect higherorder conceptual metaphors by finding some of the sentences embodying some of the interconcept mappings constituting the metaphor of interest but is not designed to be a tool for reliably detecting all instances of a particular metaphorcormets domainspecific corpora are obtained from the internetin this context a domain is a set of related concepts and a domainspecific corpus is a set of documents relevant to those conceptscormets input parameters are two domains between which to search for interconcept mappings and for each domain a set of characteristic keywordscormet is tested on its ability to find a subset of the master metaphor list a manually compiled catalog of metaphorcormet works on domains that are specific and concrete cormets discrimination is relatively coarse it measures trends in selectional preferences across many documents so common mappings are discerniblecormet considers the selectional preferences only of verbs on the theory that they are generally more selectively restrictive than nouns or adjectivesit is worth noting that wordnet cormets primary knowledge source implicitly encodes some of the metaphors cormet is intended to find peters and peters use wordnet to find many artifactcognition metaphorsalso wordnet enumerates some metaphorical senses of some verbscormet does not use any of wordnets information about verbs and ignores regularities in the distribution of noun homonyms that could be used to find some metaphorsthe article is organized as follows section 2 describes the mechanisms by which conventional metaphors are detectedsection 3 walks through cormets process in two examplessection 4 describes how the systems performance is evaluated against the master metaphor list and section 5 covers select related workideally cormet could draw on a large quantity of manually vetted highly representative domainspecific documentsthe precompiled corpora available online do not span enough subjectsother online data sources include the internets hierarchically structured indices such as yahoos ontology and googles each index entry contains a small number of highquality links to relevant web pages but this is not helpful because cormet requires many documents and those documents need not be of more than moderate qualitysearching the internet for domainspecific text seems to be the only way to obtain sufficiently large diverse corporacormet obtains documents by submitting queries to the google search enginethere are two types of queries one to fetch any domainspecific documents and another to fetch domainspecific documents that contain a particular verbthe first kind of query consists of a conjunction of from two to five 
randomly selected domain keywordsdomain keywords are words characteristic of a domain supplied by the user as an inputfor the finance domain a reasonable set of keywords is stocks bonds nasdaq dow investment financeeach query incorporates only a few keywords in order to maximize the number of distinct possible queriesqueries for domainspecific documents containing a particular verb are composed of a conjunction of domainspecific terms and a disjunction of forms of the verb that are more likely to be verbs than other parts of speechfor the verb attack for instance acceptable forms are attacked and attacking but not attack and attacks which are more likely to be nounsthe syntactic categories in which a word form appears are determined by reference to wordnetsome queries for the verb attack in the finance domain are queries return links to up to 10000 documents of which cormet fetches and analyzes no more than 3000in the 13 domains studied about 75 of these documents are relevant to the domain of interest so the noise is substantialthe documents are processed to remove embedded scripts and html tagsthe mined documents are parsed with the apple pie parser case frames are extracted from parsed sentences using templates for instance is used to extract roles for passive agentless sentences learning the selectional preferences for a verb in a domain is expensive in terms of time so it is useful to find a small set of important verbs in each domaincormet seeks information about verbs typical of a domain because these verbs are more likely to figure in metaphors in which that domain is the metaphors sourcebesiege for instance is characteristic of the military domain and appears in many instances of the military medicine mapping such as the antigens besieged the virusto find domaincharacteristic verbs cormet dynamically obtains a large sample of domainrelevant documents decomposes them into a bagofwords representation stems the words with an implementation of the porter stemmer and finds the ratio of occurrences of each word stem to the total number of stems in the domain corpusthe frequency of each stem in the corpus is compared to its frequency in general english the 400 verb stems with the highest relative frequency are considered characteristiccormet treats any word form that may be a verb as though it is a verb which biases cormet toward verbs with common nominal homonymsword stems that have high relative frequency in more than one domain like email and download are eliminated on the suspicion that they are more characteristic of documents on the internet in general than of a substantive domaintable 1 lists the 20 highestscoring stems in the lab and finance domainsthere are three constraints on cormets selectionalpreferencelearning algorithmfirst it must tolerate noise because complex sentences are often misparsed and the case frame extractor is error pronesecond it should be able to work around wordnets lacunaefinally there should be a reasonable metric for comparing the similarity between selectional preferencescormet first uses the selectionalpreferencelearning algorithm described in resnik then clustering over the resultsresniks algorithm takes a set of words observed in a case slot and finds the wordnet nodes that best characterize the selectional preferences of that slota case slot has a preference for a wordnet node to the extent that that node or one of its descendants is more likely to appear in that case slot than it is to appear at randoman overall measure of the choosiness of a case 
slot is selectionalpreference strength sr defined as the relative entropy of the posterior probability p and the prior probability p is the a priori probability of the appearance of a wordnet node c or one of its descendants and p is the probability of that node or one of its descendants appearing in a case slot p recall that the relative entropy of two distributions x and y d is the inefficiency incurred by using an encoding optimal for y to encode xthe degree to which a case slot selects for a particular node is measured by selectional associationin effect the selectional associations divide up the selectional preference strength for a case slot among that slots possible fillersselectional association is defined as to compute λr what is needed is a distribution over word classes but what is observed in the corpus is a distribution over word formsresniks algorithm works around this problem by approximating a word class distribution from the word form distributionfor each word form observed filling a case slot credit is divided evenly among all of that word forms possible senses although resniks algorithm makes no explicit attempt at sense disambiguation greater activation tends to accumulate in those nodes that best characterize a predicates selectional preferencescormet uses resniks algorithm to learn domainspecific selection preferencesit often finds different selectional preferences for predicates whose preferences should intuitively be the samein the military domain the object of assault selects strongly for fortification but not social group whereas the selectional preferences for the object of attack are the oppositetaking the cosine of the selectional preferences of these two case slots gives a surprisingly low scorein order to facilitate more accurate judgments of selectionalpreference similarity cormet finds clusters of wordnet nodes that although not as accurate allow more meaningful comparisons of selectional preferencesclusters are built using the nearestneighbor clustering algorithm a predicates selectional preferences are represented as vectors whose nth element represents the selectional association of the nth wordnet node for that predicatethe similarity function used is the dot product of the two selectionalpreference vectorsempirically the level of granularity obtained by running nearestneighbor clustering twice produces the most conceptually coherent clustersthere are typically fewer than 100 secondorder clusters per domainin the lab domain there are 54 secondorder clusters and in the finance domain there are 67the time complexity of searching for metaphorical interconcept mappings between two domains is proportional to the number of pairs of salient domain objects so it is more efficient to search over pairs of salient clusters than over the more numerous individual salient nodestable 2 shows a military clusterthese clusters are helpful for finding verbs with similar but not identical selectional preferencesalthough attack for instance does not select for fortification it does select for other elements of fortifications cluster such as building and defensive structurethe fundamental limitation of wordnet with respect to selectionalpreference learning is that it fails to exhaust all possible lexical relationshipswordnet can hardly be blamed the task of recording all possible relationships between all english words is prohibitively large if not infinitenevertheless there are many words that intuitively should have a common parent but do notfor instance liquid body 
substance and water should both be hyponyms of liquid but in wordnet their shallowest common ancestor is substanceone of the descendants of substance is solid so there is no single node that represents all liquidsli and abe describe another method of corpusdriven selectionalpreference learning that finds a tree cut of wordnet for each case slota tree cut is a set of the elements of a cluster of wordnet nodes characteristic of the military domain nodes that specifies a partition of the ontologys leaf nodes where a node stands for all the leaf nodes descended from itthe method chooses among possible tree cuts according to minimumdescriptionlength criteriathe description length of a tree cut representation is the sum of the size of the tree cut itself and the space required for representing the observed data with that tree cutfor cormets purposes the problem with this approach is that it is difficult to find clusters of nodes representing a selectional preference using its results there are similar objections to similar approaches such as that of carroll and mccarthy polarity is a measure of the directionality and magnitude of structure transfer between two concepts or two domainsnonzero polarity exists when language characteristic of a concept from one domain is used in a different domain of a different conceptthe kind of characteristic language cormet can detect is limited to verbal selectional preferencessay cormet is searching for a mapping between the concepts liquids and assets as illustrated in figure 1there are verbs in lab that strongly select for liquids such as pour flow and freezein finance these verbs select for assetsin finance there are verbs that strongly select for assets such as spend invest and taxin the lab domain these verbs select for nothing in particularthis suggests that liquid is the source concept and asset is the target concept which implies that lab and finance are the source and target domains respectivelycormet computes the overall polarity between two domains by summing over the polarity between each pair of highsalience concepts from the two domains of interestinterconcept polarity is defined as follows let α be the set of case slots in domain x with the strongest selectional preference for the node cluster alet β be the set of case slots in domain y with the strongest selectional preferences for the node cluster bthe degree of structure flow from a in x to b in y is computed as the degree to which the predicates α select for the nodes b in y or selection strengthstructure flow in the opposite direction is selection strengththe definition of selection strength is the average of the selectionalpreference strengths of the predicates in case slots for the nodes in node cluster in domainthe polarity for α and β is the difference in the two quantitiesif the polarity is near zero there is not much structure flow and no evidence for a metaphoric mappingin some cases a difference in selectional preferences between domains does not indicate the presence of a metaphorto take a fictitious but illustrative example say asymmetric structure transfer between lab and financepredicates from lab that select for liquids are transferred to finance and select for moneyon the other hand predicates from finance that select for money are transferred to lab and do not select for liquids that in the lab domain the subject of sit has a preference for chemists whereas in the finance domain it has a preference for investment bankersthe difference in selectional preferences is caused by 
the fact that chemists are the kind of person more likely to appear in lab documents and investment bankers in finance onesinstances like this are easy to filter out because their polarity is zeroa verb is treated as characteristic of a domain x if it is at least twice as frequent in the domain corpus as it is in general english and it is at least one and a half times as frequent in domain x as in the contrasting domain y pour for instance occurs three times as often in finance and twentythree times as often in lab as it does in general englishsince it is nearly eight times as frequent in lab as in finance it is considered characteristic of the formerthis heuristic resolves the confusion than can be caused by the ubiquity of certain conventional metaphorsthe high density of metaphorical uses of pour in finance could otherwise make it seem as though pour is characteristic of that domaina verb with weak selectional preferences is a bad choice for a characteristic predicate even if it occurs disproportionately often in a domainhighly selective verbs are more useful because violations of their selectional preferences are more informativefor this reason a predicates salience to a domain is defined as its selectionalpreference strength times the ratio of its frequency in the domain to its frequency in englishliteral and metaphorical selectional preferences may coexist in the same domainconsider the selectional preferences of pour in the chemical and financial domainsin the lab domain pour is mostly used literally people pour liquidsthere are occasional metaphorical uses but the literal sense is more commonin finance pour is mostly used metaphorically although there are occasionally literal uses algorithms 13 show pseudocode for finding metaphoric mappings between concepts comment find mappings from concepts in domain1 to concepts in domain2 or vice versa domain 1 clusters get best clusters domain 2 clusters get best clusters for each concept 1 e domain 1 clusters for each concept 2 e domain 2 clusters polarity from 1 to 2 inter concept polarity polarity from 2 to 1 inter concept polarity if absolute value c2 and polarity from 2 to 1 c2 according to the thematicrelation hypothesis many domains are conceived of in terms of physical objects moving along paths between locations in spacein the money domain assets are mapped to objects and asset holders are mapped to locationsin the idea domain ideas are mapped to objects minds are mapped to locations and communications are mapped to pathsaxioms of inference from the target domain usually become available for reasoning about the source domain unless there is an aspect of the source domain that specifically contradicts themfor instance in the domain of material objects a thing moved from point x to point y is no longer at x but in the idea domain it exists at both locationsthematically related metaphors may consistently cooccur in the same sentencesfor example the metaphors liquid money and containers institutions often cooccur as in the sentence capital flowed into the new companyconversely cooccurring metaphors are often components of a single metaphorical conceptualizationa metaphorical mapping is therefore more credible when it is a component of a system of mappingsin cormet systematicity measures a metaphorical mappings tendency to cooccur with other mappingsthe systematicity score for a mapping x is defined as the number of strong distinct mappings cooccurring with xthis measure goes only a little way toward capturing the extent to which a 
metaphor exhibits the structure described in the thematicrelations hypothesis but extending cormet to find the entities that correspond to objects locations and paths is beyond the scope of this articlecormet computes a confidence measure for each metaphor it discoversconfidence is a function of three thingsthe more verbs mediating a metaphor the more credible it isstrongly unidirectional structure flow from source domain to target makes a mapping more crediblefinally a mapping is more likely to be correct if it systematically cooccurs with other mappingsthe confidence measure should not be interpreted as a probability of correctness the data available for calibrating such a distribution are inadequatethe weights of each factor empirically assigned plausible values are given in table 3the confidence measure is intended to wrap all the available evidence about a metaphors credibility into one numbera principled way of doing this is desirable but unfortunately there are not enough data to make meaningful use of machinelearning techniques to find the best set of components and weightsthere is substantial arbitrariness in the confidence rating the components used and the weights they are assigned could easily be different and are best considered guesses that give reasonable resultsthis section provides a walkthrough of the derivation and analysis of the concept mapping liquid money and components of the interconcept mapping war medicinein the interests of brevity only representative samples of cormets data are shownsee mason for a more detailed accountcormets inputs are two domain sets of characteristic keywords for each domain the keywords must characterize a cluster in the space of internet documents but cormet is relatively insensitive to the particular keywordsit is difficult to find keywords characterizing a cluster centering on money alone so keywords for a more general domain finance are providedit is also difficult to characterize a cluster of documents mostly about liquidschemicalengineering articles and hydrographic encyclopedias tend to pertain to the highly technical aspects of liquids instead of their everyday behaviordocuments related to laboratory work are targeted on the theory that most references to liquids in a corpus dedicated to the manipulation and transformation of different states of matter are likely to be literal and will not necessarily be highly technicaltables 5 and 6 show the top 20 characteristic verbs for lab and finance respectivelycormet finds the selectional preferences of all of the characteristic predicates case slotsa sample of the selectional preferences of the top 20 verbs in lab and finance are shown in tables 7 and 8 respectivelythe leftmost columns of these two tables have the characteristic verb and the thematic role characterizedthe righthand sides have clusters of characteristic nodesthe numbers associated with the nodes are the bits of uncertainty about the identity of a word x resolved by the fact that x fills the given case slot or p p all of the 400 possible mappings between the top 20 concepts from the two domains are examinedeach possible mapping is evaluated in terms of polarity the number of frames instantiating the mapping and the systematic cooccurrence of that mapping with different highly salient mappingsthe best mappings for lab x finance are shown in table 9mappings are expressed in abbreviated form for clarity with only the most recognizable node of each concept displayedthe foremost mapping characterizes money in terms of liquid the 
mapping for which the two domains were selectedthe second represents a somewhat less intuitive mapping from liquids to institutionsthis metaphor is driven primarily by institutions capacity to dissolveof course this mapping is incorrect insofar as solids undergo dissolution not liquidscormet made this mistake because of faulty thematicrole identification it frequently failed to distinguish between the different thematic roles played by the subjects in sentences like the company dissolved and the acid dissolved the compoundthe third mapping characterizes communication as a liquidthis was not the mapping the author had in mind when he chose the domains but it is intuitively plausible one speaks of information flowing as readily as of money flowingthat this mapping appears in a search not targeted to it reflects this metaphors strengthit also illustrates a source of error in inferring the existence of conventional metaphors between domains from the existence of interconcept mappingsthe fourth mapping is from containers to organizationsthis mapping complements the first one as liquids flow into containers so money flows into organizationsanother good mapping not present here is money flows into equities and investmentscormet misses this mapping because at the level of concepts money and equities are conflatedthis happens because they are near relatives in the wordnet ontology and because there is very high overlap between the predicates selecting for themcompare the mappings cormet derived with the master metaphor lists characterization of the money is a liquid metaphor the master metaphor list also describes investments are containers for money as exemplified in the following cormet has found mappings that can reasonably be construed as corresponding to these metaphorscompare the mappings from the master metaphor list with frames mined by this system and identified as instantiating liquid income shown in table 10it is important to note that although cormet can list the case frames that have driven the derivation of a particular highlevel mapping it is designed to discover highlevel mappings not interpret or even recognize particular instances of metaphorical languagejust as in the master metaphor list there are frames in the cormet listing in which money and equities are characterized as liquids are moved as liquids and change state as liquids this subsection describes the search for mappings between the medicine and military domainsthe domain keywords for medicine and military are shown in table 11the characteristic verbs of the military and medicine domains are given in tables 12 and 13 respectivelytheir selectional preferences are given in tables 14 and 15 respectivelythe highestquality mappings between the military and medicine domains are shown in table 16this pair of domains produces more mappings than the the lab and finance pairmany source concepts from the military domain are mapped to body partsthe heterogeneity of the source concepts seems to be driven by the heterogeneity of possible military targetssimilarly many source concepts are mapped to drugsthe case frames supporting this mapping suggest that this is because of the heterogeneity of military aggressors these mappings can be interpreted as indicating that things that are attacked map to body parts and things that attack map to drugsthe mapping fortification illness represents the mapping of targetable strongholds to diseaseillnesses are conceived of as fortifications besieged by treatmentcompare this with the master metaphor 
lists characterization of treating illness is fighting a war cormets results can reasonably be interpreted as matching all of the mappings from the master metaphor list except winningisacure and defeatisdyingcormets failure to find this mapping is caused by the fact that win lose and their synonyms do not have high salience in the military domain which may be a reflection of the ubiquity of win and lose outside of that domaintable 17 shows sample frames from which the body part fortification vehicle military action region skilled worker mapping was derivedthis section describes the evaluation of cormet against a gold standard specifically by determining how many of the metaphors in a subset of the master metaphor list can be discovered by cormet given a characterization of the relevant source and target domainsthe final evaluation of the correspondence between the mappings cormet discovers and the master metaphor list entry is necessarily done by handthis is a highly subjective method of evaluation a formal objective evaluation of correctness would be preferable but at present no such metric is availablethe master metaphor list is the basis for evaluation because it is composed of manually verified metaphors common in englishthe test set is restricted to those elements of the master metaphor list with concrete source and target domainsthis requirement excludes many important conventional metaphors such as events are actionsabout a fifth of the master metaphor list meets this constraintthis fraction is surprisingly small it turns out that the bulk of the master metaphor list consists of subtle refinements of a few highly abstract metaphorsthe concept pairs and corresponding domain pairs for the target metaphors in the master metaphor list are given in table 18a mapping discovered by cormet is considered correct if submappings specified in the master metaphor list are nearly all present with high salience and incorrect submappings are present with comparatively low saliencethe mappings discovered that best represent the targeted metaphors are shown in table 19some of these test cases are marked successesfor instance economic harm is physical injury seems to be captured by the mapping from the loss3 cluster to the harm1 clustercormet found reasonable mappings in 10 of 13 cases attemptedthis implies 77 accuracy although in light of the small test and the subjectivity of judgment this number must not be taken too seriouslysome test cases were disappointingcormet found no mapping between theory and architecturethis seems to be an artifact of the lowquality corpora obtained for these domainsthe documents intended to be relevant to architecture were often about zoning or building policy not the structure of buildingsfor theory many documents were calls for papers or about university department policyit is unsurprising that there are no particular mappings between two sets of miscellaneous administrative and policy documentsthe weakness of the architecture corpus also prevented cormet from discovering any body architecture mappingsaccuracy could be improved by refining the process by which domainspecific corpora are obtained to eliminate administrative documents or by requiring documents to have a higher density of domainrelevant termsis it meaningful when cormet finds a mapping or will it find a mapping between any pair of domainsto answer this question cormet was made to search for mappings between randomly selected pairs of domainstable 20 lists a set of arbitrarily selected domain pairs and the 
strength of the polarization between themin all cases the polarization is zerothis can be interpreted as an encouraging lack of false positivesanother perspective is that cormet should have found mappings between some of these pairs such as medicine and society on the theory that societies can be said to sicken die or healalthough this is certainly a valid conventional metaphor it seems to be less prominent than those metaphors that cormet did discovertwo of the most broadly effective computational models of metaphor are fass and martin in both of which metaphors are detected through selectionalpreference violations and interpreted using an ontologythey are distinguished from cormet in that they work on both novel and conventional metaphors and rely on declarative handcoded knowledge basesfass describes met a system for interpreting nonliteral language that builds on wilks and wilks met discriminates among metonymic metaphorical literal and anomalous languageit is a component of collative semantics a semantics for natural language processing that has been implemented in the program meta5 met treats metonymy as a way of referring to one thing by means of another and metaphor as a way of revealing an interesting relationship between two entitiesin met a verbs selectional preferences are represented as a vector of typesthe verb drinks preference for an animal subject and a liquid object are represented as metaphorical interpretations are made by finding a sense vector in mets knowledge base whose elements are hypernyms of both the preferred argument types and the actual argumentsfor example the car drinks gasoline maps to the vector but car is not a hypernym of animal so met searches for a metaphorical interpretation coming up with martin describes the metaphor interpretation denotation and acquisition system a computational model of metaphor interpretationmidas has been integrated with the unix consultant a program that answers english questions about using unixuc tries to find a literal answer to each question with which it is presentedif violations of literal selectional preference make this impossible uc calls on midas to search its hierarchical library of conventional metaphors for one that explains the anomalyif no such metaphor is found midas tries to generalize a known conventional metaphor by abstracting its components to the mostspecific senses that encompass the questions anomalous languagemidas then records the most concrete metaphor descended from the new general metaphor that provides an explanation for the querys languagemidas is driven by the idea that novel metaphors are derived from known existing onesthe hierarchical structure of conventional metaphor is a regularity not captured by other computational approachesalthough midas can quickly understand novel metaphors that are the descendants of metaphors in its memory it cannot interpret compound metaphors or detect intermetaphor relationships besides inheritanceinvestments containers and money water for instance are clearly related but not in a way that midas can representsince not all novel metaphors are descendants of common conventional metaphors midass coverage is limitedmetabank is an empirically derived knowledge base of conventional metaphors designed for use in natural language applicationsmetabank starts with a knowledge base of metaphors based on the master metaphor listmetabank can search a corpus for one metaphor or scan a large corpus for any metaphorical contentthe search for a target metaphor is accomplished 
by choosing a set of probe words associated with that metaphor and finding sentences with those words which are then manually sorted as literal examples of the target metaphor examples of a different metaphor unsystematic homonyms or something elsemetabank compiles statistics on the frequency of conventional metaphors and the usefulness of the probe wordsmetabank has been used to study container metaphors in a corpus of unixrelated email and to study metaphor distributions in the wall street journalpeters and peters mine wordnet for patterns of systematic polysemy by finding pairs of wordnet nodes at a relatively high level in the ontology whose descendants share a set of common word formsthe nodes publication and publisher for instance have paper newspaper and magazine as common descendantsthis is a metonymic relationship the system can also capture metaphoric relationships as in the nodes supporting structure and theory among whose common descendants are framework foundation and basepeters and peters system found many metaphoric relationships between node pairs that were descendants of the unique beginners artifact and cognitiongoatly describes a set of linguistic cues of metaphoricality beyond selectionalpreference violations such as metaphorically speaking and surprisingly literallythese cues are generally ambiguous but could usefully be incorporated into computational approaches to metaphorcormet embodies a method for semiautomatically finding metaphoric mappings between concepts which can then be used to infer conventionally metaphoric relationships between domainsit can sometimes identify metaphoric language if it manifests as a common selectionalpreference gradient between domains but is far from being able to recognize metaphoric language in generalcormet differs from other computational approaches to metaphor in requiring no manually compiled knowledge base besides wordnetit has successfully found some of the conventional metaphors on the master metaphor listcormet uses gradients in selectional preferences learned from dynamically mined domainspecific corpora to identify metaphoric mappings between conceptsit is reasonably accurate despite the noisiness of many of its componentscormet demonstrates the viability of a computational corpusbased approach to conventional metaphor but requires more work before it can constitute a viable nlp tool
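To make the inter-concept polarity computation and the mapping-search loop described (and only partially reproduced) above more concrete, here is a minimal Python sketch. It assumes selectional-preference strengths have already been learned per domain; all identifiers (selection_strength, inter_concept_polarity, find_mappings, POLARITY_THRESHOLD) and the threshold value are illustrative placeholders, not CorMet's actual code or constants.

# Minimal sketch of CorMet-style inter-concept polarity and mapping search.
# Assumes prefs[(verb, role)][node] gives the selectional-preference strength
# (in bits) of a characteristic case slot for a WordNet node, per domain.

POLARITY_THRESHOLD = 0.5  # illustrative cutoff, not CorMet's actual constant


def selection_strength(case_slots, node_cluster, domain_prefs):
    """Average selectional-preference strength of the given case slots
    for the nodes of a node cluster, measured within one domain."""
    strengths = [domain_prefs.get(slot, {}).get(node, 0.0)
                 for slot in case_slots for node in node_cluster]
    return sum(strengths) / len(strengths) if strengths else 0.0


def inter_concept_polarity(alpha, beta, cluster_a, cluster_b, prefs_x, prefs_y):
    """Structure flow from concept A (domain X) to concept B (domain Y),
    minus the flow in the opposite direction."""
    flow_a_to_b = selection_strength(alpha, cluster_b, prefs_y)
    flow_b_to_a = selection_strength(beta, cluster_a, prefs_x)
    return flow_a_to_b - flow_b_to_a


def find_mappings(clusters_x, clusters_y, slots_x, slots_y, prefs_x, prefs_y):
    """Examine every pair of high-salience concepts from the two domains and
    keep the strongly polarized pairs as candidate metaphoric mappings."""
    mappings = []
    for concept_a, cluster_a in clusters_x.items():
        for concept_b, cluster_b in clusters_y.items():
            alpha = slots_x[concept_a]  # case slots selecting most strongly for A in X
            beta = slots_y[concept_b]   # case slots selecting most strongly for B in Y
            polarity = inter_concept_polarity(alpha, beta, cluster_a, cluster_b,
                                              prefs_x, prefs_y)
            if abs(polarity) > POLARITY_THRESHOLD:
                source, target = ((concept_a, concept_b) if polarity > 0
                                  else (concept_b, concept_a))
                mappings.append((source, target, abs(polarity)))
    return sorted(mappings, key=lambda m: m[2], reverse=True)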
J04-1002
cormet a computational corpusbased conventional metaphor extraction systemcormet is a corpusbased system for discovering metaphorical mappings between conceptsit does this by finding systematic variations in domainspecific selectional preferences which are inferred from large dynamically mined internet corporametaphors transfer structure from a source domain to a target domain making some concepts in the target domain metaphorically equivalent to concepts in the source domainthe verbs that select for a concept in the source domain tend to select for its metaphorical equivalent in the target domainthis regularity detectable with a shallow linguistic analysis is used to find the metaphorical interconcept mappings which can then be used to infer the existence of higherlevel conventional metaphorsmost other computational metaphor systems use small handcoded semantic knowledge bases and work on a few examplesalthough cormets only knowledge base is wordnet it can find the mappings constituting many conventional metaphors and in some cases recognize sentences instantiating those mappingscormet is tested on its ability to find a subset of the master metaphor list the cormet system dynamically mines domain specific corpora to find less frequent usages and identifies conceptual metaphorswe show how statistical analysis can automatically detect and extract conventional metaphors from corpora though creative metaphors still remain a tantalizing challenge
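The characteristic-verb filter and the salience score used by the CorMet system described above (a verb counts as characteristic of a domain if it is at least twice as frequent there as in general English and at least one and a half times as frequent as in the contrasting domain; salience is selectional-preference strength times the domain-to-English frequency ratio) can be sketched as follows. The function names and the normalization to general-English-relative rates are assumptions for illustration only; the example numbers for "pour" follow the text above.

# Rough sketch of the characteristic-verb filter and salience score.
# Inputs are relative frequencies, with general English normalized to 1.0.

def is_characteristic(rate_in_domain, rate_in_contrast, rate_in_english):
    """A verb counts as characteristic of a domain if it is at least twice as
    frequent there as in general English and at least 1.5 times as frequent
    as in the contrasting domain."""
    return (rate_in_domain >= 2.0 * rate_in_english
            and rate_in_domain >= 1.5 * rate_in_contrast)


def salience(pref_strength, rate_in_domain, rate_in_english):
    """Salience of a predicate to a domain: selectional-preference strength
    times the domain-to-English frequency ratio, so weakly selective verbs
    are down-weighted even when they are disproportionately frequent."""
    return pref_strength * (rate_in_domain / rate_in_english)


# "pour": 23x general-English frequency in lab, 3x in finance
print(is_characteristic(23.0, 3.0, 1.0))   # True  -> characteristic of lab
print(is_characteristic(3.0, 23.0, 1.0))   # False -> not characteristic of finance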
the kappa statistic a second look in recent years the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasksin this squib we highlight issues that affect κ and that the community has largely neglectedfirst we discuss the assumptions underlying different computations of the expected agreement component of κsecond we discuss how prevalence and bias affect the κ measurein the last few years coded corpora have acquired an increasing importance in every aspect of humanlanguage technologytagging for many phenomena such as dialogue acts requires coders to make subtle distinctions among categoriesthe objectivity of these decisions can be assessed by evaluating the reliability of the tagging namely whether the coders reach a satisfying level of agreement when they perform the same coding taskcurrently the de facto standard for assessing intercoder agreement is the κ coefficient which factors out expected agreement κ had long been used in content analysis and medicine carletta deserves the credit for bringing κ to the attention of computational linguists κ is computed as κ = (P(A) - P(E)) / (1 - P(E)) where P(A) is the observed agreement among the coders and P(E) is the expected agreement that is P(E) represents the probability that the coders agree by chancethe values of κ are constrained to the interval [-1, 1] a κ value of one means perfect agreement a κ value of zero means that agreement is equal to chance and a κ value of negative one means perfect disagreementthis squib addresses two issues that have been neglected in the computational linguistics literaturefirst there are two main ways of computing P(E) the expected agreement according to whether the distribution of proportions over the categories is taken to be equal for the coders or not clearly the two approaches reflect different conceptualizations of the problemwe believe the distinction between the two is often glossed over because in practice the two computations of P(E) produce very similar outcomes in most cases especially for the highest values of κhowever first we will show that they can indeed result in different values of κ that we will call κco and κsc these different values can lead to contradictory conclusions on intercoder agreementmoreover the assumption of equal distributions over the categories masks the exact source of disagreement among the codersthus such an assumption is detrimental if such systematic disagreements are to be used to improve the coding scheme second κ is affected by skewed distributions of categories and by the degree to which the coders disagree that is for a fixed P(A) the values of κ vary substantially in the presence of prevalence bias or bothwe will conclude by suggesting that κco is a better choice than κsc in those studies in which the assumption of equal distributions underlying κsc does not hold the vast majority if not all of discourse and dialoguetagging effortshowever as κco suffers from the bias problem but κsc does not κsc should be reported too as well as a third measure that corrects for prevalence as suggested in byrt bishop and carlin P(E) is the probability of agreement among coders due to chancethe literature describes two different methods for estimating a probability distribution for random assignment of categoriesin the first each coder has a personal
distribution based on that coders distribution of categories in the second there is one distribution for all coders derived from the total proportions of categories assigned by all coders 1 we now illustrate the computation of p according to these two methodswe will then show that the resulting κco and κsc may straddle one of the significant thresholds used to assess the raw κ valuesthe assumptions underlying these two methods are made tangible in the way the data are visualized in a contingency table for cohen and in what we will call an agreement table for the othersconsider the following situationtwo coders2 code 150 occurrences of okay and assign to them one of the two labels accept or ack the two coders label 70 occurrences as accept and another 55 as ackthey disagree on 25 occurrences which one coder labels as ack and the other as acceptin figure 1 this example is encoded by the top contingency table on the left and the agreement table on the rightthe contingency table directly mirrors our descriptionthe agreement table is an n m matrix where n is the number of items in the data set and m is the number of labels that can be assigned to each object in our example n 150 and m 2each entry nij is the number of codings of label j to item ithe agreement table in figure 1 shows that occurrences 1 through 70 have been labeled as accept by both coders 71 through 125 as ack by both coders and 126 to 150 differ in their labels1 to be precise krippendorff uses a computation very similar to siegel and castellans to produce a statistic called alphakrippendorff computes p with a samplingwithoutreplacement methodologythe computations of p and of 1 de show that the difference is negligible cohens contingency tables and siegel and castellans agreement table agreement tables lose informationwhen the coders disagree we cannot reconstruct which coder picked which categoryconsider example 2 in figure 1the two coders still disagree on 25 occurrences of okayhowever one coder now labels 10 of those as accept and the remaining 15 as ack whereas the other labels the same 10 as ack and the same 15 as acceptthe agreement table does not change but the contingency table doesturning now to computing p figure 2 shows for example 1 cohens computation of p on the left and siegel and castellans computation on the rightwe include the computations of kco and ksc as the last stepfor both cohen and siegel and castellan p 125150 08333the observed agreement p is computed as the proportion of items the coders agree on to the total number of items n is the number of items and k the number of coders both kco and ksc are highly significant at the p 05 105 level the difference between kco and ksc in figure 2 is just under 1 however the results of the two k computations straddle the value 067 which for better or worse has been adopted as a cutoff in computational linguisticsthis cutoff is based on the assessment of k values in krippendorff which discounts k 067 and allows tentative conclusions when 067 k 08 and definite conclusions when k 08krippendorffs scale has been adopted without question even though krippendorff himself considers it only a plausible standard that has emerged from his and his colleagues workin fact carletta et al use words of caution against adopting krippendorffs suggestion as a standard the first author has also raised the issue of how to assess k values in di eugenio if krippendorffs scale is supposed to be our standard the example just worked out shows that the different computations of p do affect the 
assessment of intercoder agreementif lessstrict scales are adopted the discrepancies between the two k computations play a larger role as they have a larger effect on smaller values of k for example rietveld and van hout consider 020 k 040 as indicating fair agreement and 040 k 060 as indicating moderate agreementsuppose that two coders are coding 100 occurrences of okaythe two coders label 40 occurrences as accept and 25 as ackthe remaining 35 are labeled as ack by one coder and as accept by the other kco 0418 but ksc 027these two values are really at oddsstep 1for each category j compute the overall proportion pjl of items assigned to j by each coder l in a contingency table each row and column total divided by n corresponds to one such proportion for the corresponding coderassumption of equal distributions among coders step 1for each category j compute pj the overall proportion of items assigned to jin an agreement table the column totals give the total counts for each category j hence step 3p the likelihood of coders accidentally assigning the same category to a given item is the computation of p and κ according to cohen and to siegel and castellan in the computational linguistics literature r has been used mostly to validate coding schemes namely a good value of r means that the coders agree on the categories and therefore that those categories are real we noted previously that assessing what constitutes a good value for r is problematic in itself and that different scales have been proposedthe problem is compounded by the following obvious effect on r values if p is kept constant varying values for p yield varying values of r what can affect p even if p is constant are prevalence and biasthe prevalence problem arises because skewing the distribution of categories in the data increases pthe minimum value p 1m occurs when the labels are equally distributed among the m categories the maximum value p 1 occurs when the labels are all concentrated in a single categorybut for a given value of p the larger the value of p the lower the value of rexample 3 and example 4 in figure 3 show two coders agreeing on 90 out of 100 occurrences of okay that is p 09however r ranges from 0048 to 080 and from not significant to significant 3 the differences in r are due to the difference in the relative prevalence of the two categories accept and ackin example 3 the distribution is skewed as there are 190 accepts but only 10 acks across the two coders in example 4 the distribution is even as there are 100 accepts and 100 acks respectivelythese results do not depend on the size of the sample that is they are not due to the fact contingency tables illustrating the bias effect on κcoexample 3 and example 4 are smallas the computations of p and p are based on proportions the same distributions of categories in a much larger sample say 10000 items will result in exactly the same κ valuesalthough this behavior follows squarely from κs definition it is at odds with using κ to assess a coding schemefrom both example 3 and example 4 we would like to conclude that the two coders are in substantial agreement independent of the skewed prevalence of accept with respect to ack in example 3the role of prevalence in assessing κ has been subject to heated discussion in the medical literature the bias problem occurs in κco but not κscfor κco p is computed from each coders individual probabilitiesthus the less two coders agree in their overall behavior the fewer chance agreements are expectedbut for a given value of p 
decreasing p will increase κco leading to the paradox that κco increases as the coders become less similar that is as the marginal totals diverge in the contingency tableconsider two coders coding the usual 100 occurrences of okay according to the two tables in figure 4in example 5 the proportions of each category are very similar among coders at 55 versus 60 accept and 45 versus 40 ackhowever in example 6 coder 1 favors accept much more than coder 2 and conversely chooses ack much less frequently in both cases p is 065 and κsc is stable at 027 but κco goes from 027 to 0418our initial example in figure 1 is also affected by biasthe distribution in example 1 yielded κco 06724 but κsc 06632if the bias decreases as in example 2 κco becomes 06632 the same as κscthe issue that remains open is which computation of κ to choosesiegel and castellans κsc is not affected by bias whereas cohens κco ishowever it is questionable whether the assumption of equal distributions underlying κsc is appropriate for coding in discourse and dialogue workin fact it appears to us that it holds in few if any of the published discourse or dialoguetagging efforts for which κ has been computedit is for example appropriate in situations in which item i may be tagged by different coders than item j however κ assessments for discourse and dialogue tagging are most often performed on the same portion of the data which has been annotated by each of a small number of annotators in fact in many cases the analysis of systematic disagreements among annotators on the same portion of the data can be used to improve the coding scheme to use κco but to guard against bias cicchetti and feinstein suggest that κco be supplemented for each coding category by two measures of agreement positive and negative between the codersthis means a total of 2m additional measures which we believe are too many to gain a general insight into the meaning of the specific κco valuealternatively byrt bishop and carlin suggest that intercoder reliability be reported as three numbers κco and two adjustments of κco one with bias removed the other with prevalence removedthe value of κco adjusted for bias turns out to be κscadjusted for prevalence κco yields a measure that is equal to 2p 1the results for example 1 should then be reported as κco 06724 κsc 06632 2p1 06666 those for example 6 as κco 0418 κsc 027 and 2p1 03for both examples 3 and 4 2p 1 08collectively these three numbers appear to provide a means of better judging the meaning of κ valuesreporting both κ and 2p 1 may seem contradictory as 2p 1 does not correct for expected agreementhowever when the distribution of categories is skewed this highlights the effect of prevalencereporting both κco and κsc does not invalidate our previous discussion as we believe κco is more appropriate for discourse and dialoguetagging in the majority of cases especially when exploiting bias to improve coding this work is supported by grant n000140010640 from the office of naval researchthanks to janet cahn and to the anonymous reviewers for comments on earlier drafts
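The two computations of expected agreement contrasted in this squib can be made concrete with a short sketch. The following Python code is illustrative, not the authors' implementation; it takes a two-coder contingency table (rows = coder 1, columns = coder 2), computes P(A), Cohen's κ (κco, chance agreement from each coder's own marginals), Siegel and Castellan's κ (κsc, one pooled category distribution for both coders), and the prevalence-adjusted statistic 2P(A) - 1, and reproduces the values reported for example 1 up to rounding.

# Minimal sketch of the two kappa computations plus 2*P(A)-1.
# The table is example 1: two coders label 150 "okay"s, agreeing on 70 ACCEPT
# and 55 ACK and disagreeing on 25 items.

def observed_agreement(table):
    """P(A): proportion of items on which the two coders agree (the diagonal)."""
    total = sum(sum(row) for row in table)
    return sum(table[i][i] for i in range(len(table))) / total


def kappa_cohen(table):
    """Cohen's kappa: chance agreement from each coder's own marginals."""
    total = sum(sum(row) for row in table)
    p_a = observed_agreement(table)
    row_marg = [sum(row) / total for row in table]        # coder 1 proportions
    col_marg = [sum(col) / total for col in zip(*table)]  # coder 2 proportions
    p_e = sum(r * c for r, c in zip(row_marg, col_marg))
    return (p_a - p_e) / (1 - p_e)


def kappa_sc(table):
    """Siegel & Castellan's kappa: one pooled category distribution
    shared by both coders."""
    total = sum(sum(row) for row in table)
    p_a = observed_agreement(table)
    pooled = [(sum(table[j]) + sum(row[j] for row in table)) / (2 * total)
              for j in range(len(table))]
    p_e = sum(p * p for p in pooled)
    return (p_a - p_e) / (1 - p_e)


example_1 = [[70, 25],   # coder 1 ACCEPT: 70 also ACCEPT by coder 2, 25 ACK
             [0, 55]]    # coder 1 ACK:    all 55 also ACK by coder 2

p_a = observed_agreement(example_1)
print(round(p_a, 4))                     # 0.8333
print(round(kappa_cohen(example_1), 4))  # 0.6725 (reported as 0.6724 in the squib)
print(round(kappa_sc(example_1), 4))     # 0.6633 (reported as 0.6632)
print(round(2 * p_a - 1, 4))             # 0.6667 (reported as 0.6666)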
J04-1005
the kappa statistic a second lookin recent years the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasksin this squib we highlight issues that affect κ and that the community has largely neglectedfirst we discuss the assumptions underlying different computations of the expected agreement component of κsecond we discuss how prevalence and bias affect the κ measure
statistical machine translation with scarce resources using morphosyntactic information in statistical machine translation correspondences between the words in the source and the target language are learned from parallel corpora and often little or no linguistic knowledge is used to structure the underlying models in particular existing statistical systems for machine translation often treat different inflectedforms of the same lemma as if they were independent ofone another the bilingual training data can be better exploited by explicitly taking into account the interdependencies of related inflected forms we propose the construction of hierarchical lexicon models on the basis of equivalence classes of words in addition we introduce sentencelevel restructuring transformations which aim at the assimilation of word order in related sentences we have systematically investigated the amount of bilingual training data required to maintain an acceptable quality of machine translation the combination of the suggested methods for improving translation quality in frameworks with scarce resources has been successfully tested we were able to reduce the amount of bilingual training data to less than 10 of the original corpus while losing only 16 in translation quality the improvement of the translation results is demonstrated on two germanenglish corpora taken from the uerbmobil task and the nespole task in statistical machine translation correspondences between the words in the source and the target language are learned from parallel corpora and often little or no linguistic knowledge is used to structure the underlying modelsin particular existing statistical systems for machine translation often treat different inflectedforms of the same lemma as if they were independent ofone anotherthe bilingual training data can be better exploited by explicitly taking into account the interdependencies of related inflected formswe propose the construction of hierarchical lexicon models on the basis of equivalence classes of wordsin addition we introduce sentencelevel restructuring transformations which aim at the assimilation of word order in related sentenceswe have systematically investigated the amount of bilingual training data required to maintain an acceptable quality of machine translationthe combination of the suggested methods for improving translation quality in frameworks with scarce resources has been successfully tested we were able to reduce the amount of bilingual training data to less than 10 of the original corpus while losing only 16 in translation qualitythe improvement of the translation results is demonstrated on two germanenglish corpora taken from the uerbmobil task and the nespole taskthe statistical approach to machine translation has proved successful in various comparative evaluations since its revival by the work of the ibm research group more than a decade agothe ibm group dispensed with linguistic analysis at least in its earliest publicationsalthough the ibm group finally made use of morphological and syntactic information to enhance translation quality most of todays statistical machine translation systems still consider only surface forms and use no linguistic knowledge about the structure of the languages involvedin many applications only small amounts of bilingual training data are available for the desired domain and language pair and it is highly desirable to avoid at least parts of the costly data collection processthe main objective of the work reported in this article is 
to introduce morphological knowledge in order to reduce the amount of bilingual data necessary to sufficiently cover the vocabulary expected in testingthis is achieved by explicitly taking into account the interdependencies of related inflected formsin this work a hierarchy of equivalence classes at different levels of abstraction is proposedfeatures from those hierarchy levels are combined to form hierarchical lexicon models which can replace the standard probabilistic lexicon used in most statistical machine translation systemsapart from the improved coverage the proposed lexicon models enable the disambiguation of ambiguous word forms by means of annotation with morphosyntactic tagsthe article is organized as followsafter briefly reviewing the basic concepts of the statistical approach to machine translation we discuss the state of the art and related work as regards the incorporation of morphological and syntactic information into systems for natural language processingsection 2 describes the information provided by morphosyntactic analysis and introduces a suitable representation of the analyzed corpussection 3 suggests solutions for two specific aspects of structural difference namely question inversion and separated verb prefixessection 4 is dedicated to hierarchical lexicon modelsthese models are able to infer translations of word forms from the translations of other word forms of the same lemmafurthermore they use morphosyntactic information to resolve categorial ambiguityin section 5 we describe how disambiguation between different readings and their corresponding translations can be performed when no context is available as is typically the case for conventional electronic dictionariessection 6 provides an overview of our procedure for training model parameters for statistical machine translation with scarce resourcesexperimental results are reported in section 7section 8 concludes the presentation with a discussion of the achievements of this workin statistical machine translation every target language string ei1 e1 ei is assigned a probability pr of being a valid word sequence in the target language and a probability pr of being a translation for the given source language string f1j f1 fjaccording to bayes decision rule the optimal translation for f1j is the target string that maximizes the product of the target language model pr and the string translation model prmany existing systems for statistical machine translation implement models presented by brown della pietra della pietra and mercer the correspondence between the words in the source and the target strings is described by alignments that assign target word positions to each source word positionthe probability that a certain target language word will occur in the target string is assumed to depend basically only on the source words aligned with it131 morphologysome publications have already dealt with the treatment of morphology in the framework of language modeling and speech recognition kanevsky roukos and sedivy propose a statistical language model for inflected languagesthey decompose word forms into stems and affixesmaltese and mancini report that a linear interpolation of word ngrams part of speech ngrams and lemma ngrams yields lower perplexity than pure wordbased modelslarson et al apply a datadriven algorithm for decomposing compound words in compounding languages as well as for recombining phrases to enhance the pronunciation lexicon and the language model for largevocabulary speech recognition systemsas 
regards machine translation the treatment of morphology is part of the analysis and generation step in virtually every symbolic machine translation systemfor this purpose the lexicon should contain base forms of words and the grammatical category subcategorization features and semantic information in order to enable the size of the lexicon to be reduced and in order to account for unknown word forms that is word forms not present explicitly in the dictionarytodays statistical machine translation systems build upon the work of p f brown and his colleagues at ibmthe translation models they presented in various papers between 1988 and 1993 are commonly referred to as ibm models 15 based on the numbering in brown della pietra della pietra and mercer the underlying lexicon contains only pairs of full formson the other hand brown et al had already suggested word forms be annotated with morphosyntactic information but they did not perform any investigation on the effectset al have dealt with the problem of translation with scarce resourcesalonaizan et al report on an experiment involving tetuntoenglish translation by different groups including one using statistical machine translationalonaizan et al assume the absence of linguistic knowledge sources such as morphological analyzers and dictionariesnevertheless they found that the human mind is very well capable of deriving dependencies such as morphology cognates proper names and spelling variations and that this capability was finally at the basis of the better results produced by humans compared to corpusbased machine translationthe additional information results from complex reasoning and it is not directly accessible from the fullwordform representation in the datathis article takes a different point of view even if full bilingual training data are scarce monolingual knowledge sources like morphological analyzers and data for training the target language model as well as conventional dictionaries may be available and of substantial usefulness for improving the performance of statistical translation systemsthis is especially the case for moreinflecting major languages like germanthe use of dictionaries to augment or replace parallel corpora has already been examined by brown della pietra della pietra and goldsmith and koehn and knight for instancea prerequisite for the methods for improving the quality of statistical machine translation described in this article is the availability of various kinds of morphological and syntactic informationthis section describes the output resulting from morphosyntactic analysis and explains which parts of the analysis are used and how the output is represented for further processingfor obtaining the required morphosyntactic information the following analyzers for german and english were applied gertwol and engtwol for lexical analysis and gercg and engcg for morphological and syntactic disambiguationfor a description of the underlying approach the reader is referred to karlsson tables 1 and 2 give examples of the information provided by these toolsthe examples in tables 1 and 2 demonstrate the capability of the tools to disambiguate among different readings for instance they infer that the word wollen is a verb in the indicative present firstperson plural formwithout any context taken into account sample analysis of a german sentenceinput wir wollen nach dem abendessen nach essen aufbrechenoriginal base form tags wir wir personalpronoun plural first nominative wollen wollen verb indicative present plural first 
nach nach preposition dative dem das definitearticle singular dative neuter abendessen abendessen noun neuter singular dative nach nach preposition dative essen essen noun name neuter singular dative esse noun feminine plural dative essen noun neuter plural dative essen noun neuter singular dative aufbrechen auflbrechen verb separable infinitive sample analysis of an english sentenceinput do we have to reserve roomsoriginal base form tags do do verb present notsingularthird finite auxiliary we we personalpronoun nominative plural first subject have have verb infinitive notfinite main to to infinitivemarker reserve reserve verb infinitive notfinite main rooms room noun nominative plural object wollen has other readingsit can even be interpreted as derived from an adjective with the meaning made of wool the inflected word forms on the german part of the verbmobil corpus have on average 285 readings 58 of which can be eliminated by the syntactic analyzers on the basis of sentence contextcommon bilingual corpora normally contain full sentences which provide enough context information for ruling out all but one reading for an inflected word formto reduce the remaining uncertainty preference rules have been implementedfor instance it is assumed that the corpus is correctly truecaseconverted beforehand and as a consequence nonnoun readings of uppercase words are droppedfurthermore indicative verb readings are preferred to subjunctive or imperativein addition some simple domainspecific heuristics are appliedthe reading plural of esse for the german word form essen for instance is much less likely in the domain of appointment scheduling and travel arrangements than the readings proper name of the town essen or the german equivalent of the english word mealas can be seen in table 3 the reduction in the number of readings resulting from these preference rules is fairly small in the case of the verbmobil corpusthe remaining ambiguity often lies in those parts of the information which are not used or which are not relevant to the translation taskfor example the analyzers cannot tell accusative from dative case in german but the case information is not essential for the translation task section 24 describes a method for selecting morphosyntactic tags considered relevant for the translation task which results in a further reduction in the number of readings per word form to 106 for german and 101 for englishin these rare cases of ambiguity it is admissible to resort to the unambiguous parts of the readings that is to drop all tags causing mixed interpretationstable 3 summarizes the gradual resolution of ambiguitythe analysis of conventional dictionaries poses some special problems because they do not provide enough context to enable effective disambiguationfor handling this special situation dedicated methods have been implemented these are presented in section 51a full word form is represented by the information provided by the morphosyntactic analysis from the interpretation gehen verb indicative present first singular that is the base form plus part of speech plus the other tags the word form gehe can be restoredit has already been mentioned that the analyzers can disambiguate among different readings on the basis of context informationin this sense the information inherent in the original word forms is augmented by the disambiguating analyzerthis can be useful for choosing the correct translation of ambiguous wordsof course these disambiguation clues result in an enlarged vocabularythe vocabulary of 
the new representation of the german part of the verbmobil corpus for example in which full word forms are replaced by base form plus morphological and syntactic tags is one and a half times as large as the vocabulary of the original corpuson the other hand the information in the lemmatag representation can be accessed gradually and ultimately reduced for example certain instances of words can be considered equivalentthis fact is used to better exploit the bilingual training data along two directions detecting and omitting unimportant information and constructing hierarchical translation models to summarize the lemmatag representation of a corpus has the following main advantages it makes context information locally available and it allows information to be explicitly accessed at different levels of abstractioninflected word forms in the input language often contain information that is not relevant for translationthis is especially true for the task of translating from a more inflecting language like german into english for instance in parallel germanenglish corpora the german part contains many more distinct word forms than the english part it is useful for the process of statistical machine translation to define equivalence classes of word forms which tend to be translated by the same target language word the resulting statistical translation lexicon becomes smoother and the coverage is considerably improvedsuch equivalence classes are constructed by omitting those items of information from morphosyntactic analysis which are not relevant for translationthe lemmatag representation of the corpus helps to identify the unimportant informationthe definition of relevant and unimportant information respectively depends on many factors like the languages involved the translation direction and the choice of the modelswe detect candidates for equivalence classes of words automatically from the probabilistic lexicon trained for translation from german to englishfor this purpose those inflected forms of the same base form which result in the same translation are inspectedfor each set of tags t the algorithm counts how often an additional tag t1 can be replaced with a certain other tag t2 without effect on the translationas an example let t blauadjective t1 masculine and t2 femininethe two entries and are hints for detecting gender as nonrelevant when translating adjectives into englishtable 4 lists some of the most frequently identified candidates to be ignored while translating the gender of nouns is irrelevant for their translation as are the cases nominative dative accusativefor verbs the candidates number and person were found the translation of the firstperson singular form of a verb for example is often the same as the translation of the thirdperson plural formignoring those tags most often identified as irrelevant for translation results in the building of equivalence classes of wordsdoing so results in a smaller vocabulary one about 655 the size of the vocabulary of the full lemmatag representation of the verbmobil corpus for exampleit is even smaller than the vocabulary of the original fullform corpusthe information described in this section is used to improve the quality of statistical machine translation and to better exploit the available bilingual resourcesdifference in sentence structure is one of the main sources of errors in machine translationit is thus promising to harmonize the word order in corresponding sentencesthe presentation in this section focuses on the following aspects 
question inversion and separated verb prefixesfor a more detailed discussion of restructuring for statistical machine translation the reader is referred to nieben and ney in many languages the sentence structure of questions differs from the structure in declarative sentences in that the order of the subject and the corresponding finite verb is invertedfrom the perspective of statistical translation this behavior has some disadvantages the algorithm for training the parameters of the target language model pr which is typically a standard ngram model cannot deduce the probability of a word sequence in an interrogative sentence from the corresponding declarative formthe same reasoning is valid for the lexical translation probabilities of multiwordphrase pairsto harmonize the word order of questions with the word order in declarative sentences the order of the subject and the corresponding finite verb is invertedin english questions supporting dos are removedthe application of the described preprocessing step in the bilingual training corpus implies the necessity of restoring the correct forms of the translations produced by the machine translation algorithmthis procedure was suggested by brown et al for the language pair english and french but they did not report on experimental results revealing the effect of the restructuring on the translation qualitygerman prefix verbs consist of a main part and a detachable prefix which can be shifted to the end of the clausefor the automatic alignment process it is often difficult to associate one english word with more than one word in the corresponding german sentence namely the main part of the verb and the separated prefixto solve the problem of separated prefixes all separable word forms of verbs are extracted from the training corpusthe resulting list contains entries of the form prefixmainin all clauses containing a word matching a main part and a word matching the corresponding prefix part occurring at the end of the clause the prefix is prepended to the beginning of the main partin general the probabilistic lexicon resulting from training the translation model contains all word forms occurring in the training corpus as separate entries not taking into account whether or not they are inflected forms of the same lemmabearing in mind that typically more than 40 of the word forms are seen only once in training it is obvious that for many words learning the correct translations is difficultfurthermore new input sentences are expected to contain unknown word forms for which no translation can be retrieved from the lexiconthis problem is especially relevant for moreinflecting languages like german texts in german contain many more distinct word forms than their english translationstable 5 also reveals that these words are often generated via inflection from a smaller set of base formsas mentioned in section 23 the lemmatag representation of the information from morphosyntactic analysis makes it possible to gradually access information with different grades of abstractionconsider for example the german verb form ankomme which is the indicative present firstperson singular form of the lemma ankommen and can be translated into english by arrivethe lemmatag representation provides an observation tuple consisting of in the following ti0 t0 ti denotes the representation of a word where the base form t0 and i additional tags are taken into accountfor the example above t0 ankommen t1 verb and so onthe hierarchy of equivalence classes f0 fn is as follows 
where n is the maximum number of morphosyntactic tags. The mapping from the full lemma-tag representation back to inflected word forms is generally unambiguous; thus F_n contains only one element, namely ankomme. F_{n-1} contains the forms ankomme, ankommst, and ankommt; in F_{n-2} the number is ignored as well, and so on. The largest equivalence class contains all inflected forms of the base form ankommen. (The order of omitting tags can be defined in a natural way depending on the part of speech. In principle this decision can also be left to the maximum-entropy training, when features for all possible sets of tags are defined, but this would cause the number of parameters to explode. As the experiments in this work have been carried out only with up to three levels of abstraction, as defined in Section 4.2, the set of tags of the intermediate level is fixed and thus the priority of the tags need not be specified.) Section 4.2 introduces the concept of combining information at different levels of abstraction.

In modeling for statistical machine translation, a hidden variable a_1^J, denoting the hidden alignment between the words in the source and target languages, is usually introduced into the string translation probability: Pr(f_1^J | e_1^I) = Σ_{a_1^J} Pr(f_1^J, a_1^J | e_1^I). In the following, t_j = t_{j,0}^{n_j} denotes the lemma-tag representation of the jth word in the input sentence. The sequence t_1^J stands for the sequence of readings for the word sequence f_1^J and can be introduced as a new hidden variable. The relation between this equivalence class hierarchy and the suggestions in Section 2.4 is clear: choosing candidates for morphosyntactic tags not relevant for translation amounts to fixing a level in the hierarchy. This is exactly what has been done to define the intermediate level in Section 4.2.

Let T be the set of interpretations which are regarded as valid readings of f_j by the morphosyntactic analyzers on the basis of the whole-sentence context f_1^J. We assume that the probability functions defined above yield zero for all other readings, that is, when t_j is not in T. Under the usual independence assumption, which states that the probability of the translation of words depends only on the identity of the words associated with each other by the word alignment, we get the corresponding decomposition into word-level translation probabilities. As has been argued in Section 2.2, the number of readings t per word form can be reduced to one for the tasks for which experimental results are reported here. The elements in this decomposition are the joint probabilities p(f, t | e) of f and the readings t of f, given the target language word e. The maximum-entropy principle recommends choosing for p the distribution which preserves as much uncertainty as possible, in terms of maximizing the entropy, while requiring p to satisfy constraints which represent facts known from the data. These constraints are encoded on the basis of feature functions h_m, and the expectation of each feature h_m over the model p is required to be equal to the observed expectation. The maximum-entropy model can be shown to be unique and to have an exponential form involving a weighted sum over the feature functions h_m:

p_λ(f, t_0^n | e) = exp(Σ_m λ_m h_m(e, f, t_0^n)) / Σ_{f', t'_0^n} exp(Σ_m λ_m h_m(e, f', t'_0^n)).

In this equation the notation t_0^n is used again for the lemma-tag representation of an input word, for notational simplicity, and λ = {λ_m} is the set of model parameters, with one weight λ_m for each feature function h_m. These model parameters can be trained using converging iterative training procedures like the ones described by Darroch and Ratcliff or Della Pietra, Della Pietra, and Lafferty. In the experiments presented in this article, the sum over the word forms f' and the readings t'_0^n in the denominator of this equation is restricted to the readings of word forms having the same base form and partial reading as a word form f'' aligned at least once to e. The new lexicon model p_λ can now replace the usual lexicon model p, over which it has the following main advantages: the lexical coverage is improved, because information from lower levels of the hierarchy can be exploited for unseen word forms, and annotating f with its reading t amounts to making context information from the complete sentence f_1^J locally available (the sentence context was taken into account by the morphosyntactic analyzer, which chose the valid readings t).

4.2.1 Definition of Feature Functions. There are numerous possibilities for defining feature functions. We do not need to require that they all have the same parametric form or that the components be disjoint and statistically independent. Still, it is necessary to restrict the number of parameters so that optimizing them is practical. We used the following types of feature functions, which have been defined on the basis of the lemma-tag representation: first-level features, which depend on the target word e and the base form l of the source word; second-level features, which in addition depend on subsets t of cardinality n of the morphosyntactic tags considered relevant in terms of the hierarchy introduced in Section 4.1; and third-level features, which depend on the full lemma-tag representation. This means that information at three different levels in the hierarchy is combined. The subsets t of relevant tags mentioned previously fix the intermediate level. (Of course, there is not only one set of relevant tags, but at least one per part of speech; in order to keep the notation as simple as possible, this fact is not accounted for in the formulas and the textual description.) This choice of the types of features, as well as the choice of the subsets t, is reasonable but somewhat arbitrary. Alternatively, one can think of defining a much more general set of features and applying some method of feature selection, as has been done, for example, by Foster, who compared different methods for feature selection within the task of translation modeling for statistical machine translation. Note that the log-linear model introduced here uses one parameter per feature. For the Verbmobil task, for example, there are approximately 162,000 parameters: 47,800 for the first-order features, 55,700 for the second-order features, and 58,500 for the third-order features. No feature selection or threshold was applied; all features seen in training were used.

The training procedure for the hierarchical lexicon models is depicted in Figure 1. (Figure 1: Training and test with hierarchical lexicon. Restructuring, analyze and annotation all require morphosyntactic analysis of the transformed sentences.) This figure includes the possibility of using restructuring operations, as suggested in Section 3, in order to deal with structural differences between the languages involved. This can be especially advantageous in the case of multiword phrases which jointly fulfill a syntactic function; not merging them would raise the question of how to distribute the syntactic tags which have been associated with the whole phrase. In Section 5.2 we describe a method of learning multiword phrases using conventional dictionaries. The alignment on the training corpus is trained using the original source language corpus containing inflected word forms. This alignment is then used to count the cooccurrences of the annotated words in the lemma-tag representation of the source language corpus with the words in the target language corpus. These event counts are used for the maximum-entropy training of the model parameters λ. The probability mass is distributed over the source language word forms to be supported; for test, the only precondition is that the firing features for these unseen events are known. This vocabulary 'supported in test', as it is called in Figure 1, can be a predefined
closed vocabulary as is the case in verbmobil in which the output of a speech recognizer with limited output vocabulary is to be translatedin the easiest case it is identical to the vocabulary found in the source language part of the training corpusthe other extreme would be an extended vocabulary containing all automatically generated inflected forms of all base forms occurring in the training corpusthis vocabulary is annotated with morphosyntactic tags ideally under consideration of all possible readings of all word formsto enable the application of the hierarchical lexicon model the source language input sentences in test have to be analyzed and annotated with their lemmatag representation before the actual translation processso far the sum over the readings in equation has been ignored because when the techniques for reducing the amount of ambiguity described in section 22 and the disambiguated conventional dictionaries resulting from the approach presented in section 51 are applied there remains almost always only one reading per word formconventional dictionaries are often used as additional evidence to better train the model parameters in statistical machine translationthe expression conventional dictionary here denotes bilingual collections of word or phrase pairs predominantly collected by hand usually by lexicographers as opposed to the probabilistic lexica which are learned automaticallyapart from the theoretical problem of how to incorporate external dictionaries in a mathematically sound way into a statistical framework for machine translation there are also some pragmatic difficulties as discussed in section 22 one of the disadvantages of these conventional dictionaries as compared to full bilingual corpora is that their entries typically contain single words or short phrases on each language sideconsequently it is not possible to distinguish among the translations for different readings of a wordin normal bilingual corpora the words can often be disambiguated by taking into account the sentence context in which they occurfor example from the context in the sentence ich werde die zimmer buchen it is possible to infer that zimmer in this sentence is plural and has to be translated by rooms in english whereas the correct translation of zimmer in the sentence ich hatte gerne ein zimmer is the singular form roomthe dictionary used by our research group for augmenting the bilingual data contains two entries for zimmer and the approach described in this section is based on the observation that in many of the cases of ambiguous entries in dictionaries the second part of the entrythat is the otherlanguage sidecontains the information necessary to decide upon the interpretationin some other cases the same kind of ambiguity is present in both languages and it would be possible and desirable to associate the corresponding readings with one anotherthe method proposed here takes advantage of these facts in order to disambiguate dictionary entriesfigure 2 sketches the procedure for the disambiguation of a conventional dictionary d in addition to d a bilingual corpus c1 of the same language pair is required to train the probability model for tag sequence translationsthe word forms in c1 need not match those in d c1 is not necessarily the training corpus for the translation task in which the disambiguated version of d will be usedit does not even have to be taken from the same domaina word alignment between the sentences in c1 is trained with some automatic alignment algorithmthen the words 
in the bilingual corpus are replaced by a reduced form of their lemmatag representation in which only a subset of their morphosyntactic tags is retainedeven the base form is droppedthe remaining subset of tags in the following denoted by tf for the source language and te for the target language consists of tags considered relevant for the task of aligning corresponding readingsthis is not necessarily the same set of tags considered relevant for the task of translation which was used for example to fix the intermediate level for the loglinear lexicon disambiguation of conventional dictionarieslearn phrases analyze and annotation require morphosyntactic analysis of the transformed sentences combination in section 421in the case of the verbmobil corpus the maximum length of a tag sequence is fivethe alignment is used to count the frequency of a certain tag sequence tf in the source language to be associated with another tag sequence te in the target language and to compute the tag sequence translation probabilities p as relative frequenciesfor the time being these tag sequence translation probabilities associate readings of words in one language with readings of words in the other language multiword sequences are not accounted forto alleviate this shortcoming it is possible and advisable to automatically detect and merge multiword phrasesas will be described in section 52 the conventional bilingual dictionary itself can be used to learn and validate these phrasesthe resulting multiword phrases pe for the target language and pf for the source language are afterwards concatenated within d to form entries consisting of pairs of units the next step is to analyze the word forms in d and generate all possible readings of all entriesit is also possible to ignore those readings that are considered unlikely for the task under consideration by applying the domainspecific preference rules proposed in section 22the process of generating all readings includes replacing word forms with their lemmatag representation which is thereafter reduced by dropping all morphosyntactic tags not contained in the tag sets tf and teusing the tag sequence translation probabilities p the readings in one language are aligned with readings in the other languagethese alignments are applied to the full lemmatag representation of the expanded dictionary containing one entry per reading of the original word formsthe highestranking aligned readings according to p for each lemma are preservedthe resulting disambiguated dictionary contains two entries for the german word zimmer and the target language part is then reduced to the surface forms and note that this augmented dictionary in the following denoted by d has more entries than d as a result of the step of generating all readingsthe two entries and for example produce three new entries and some recent publications deal with the automatic detection of multiword phrases these methods are very useful but they have one drawback they rely on sufficiently large training corpora because they detect the phrases from automatically learned word alignmentsin this section a method for detecting multiword phrases is suggested which merely requires monolingual syntactic analyzers and a conventional dictionarysome multiword phrases which jointly fulfill a syntactic function are provided by the analyzersthe phrase irgend etwas for example may form either an indefinite determiner or an indefinite pronoun irgendetwas is merged by the analyzer in order to form one single vocabulary entryin the 
german part of the verbmobil training corpus 26 different nonidiomatic multiword phrases are merged while there are 318 phrases suggested for the english partin addition syntactic information like the identification of infinitive markers determiners modifying adjectives premodifying adverbials and premodifying nouns are used for detecting multiword phraseswhen applied to the english part of the verbmobil training corpus these hints suggest 7225 different phrasesaltogether 26 phrases for german and about 7500 phrases for english are detected in this wayit is quite natural that there are more multiword phrases found for english as german unlike english uses compoundingbut the experiments show that it is not advantageous to use all these phrases for englishelectronic dictionaries can be useful for detecting those phrases which are important in a statistical machine translation context a multiword phrase is considered useful if it is translated into a single word or a distinct multiword phrase in another languagethere are 290 phrases chosen in this way for the english languagetaking into account the interdependencies of inflected forms of the same base form is especially relevant when inflected languages like german are involved and when training data are sparsein this situation many of the inflected word forms to account for in test do not occur during trainingsparse bilingual training data also make additional conventional dictionaries especially importantenriching the dictionaries by aligning corresponding readings is particularly useful when the dictionaries are used in conjunction with a hierarchical lexicon which can access the information necessary to distinguish readings via morphosyntactic tagsthe restructuring operations described in section 3 also help in coping with the data sparseness problem because they make corresponding sentences more similarthis section proposes a procedure for combining all these methods in order to improve the translation quality despite sparseness of datafigure 3 sketches the proposed proceduretraining with scarce resourcesrestructuring learn phrases and annotation all require morphosyntactic analysis of the transformed sentencestwo different bilingual corpora c1 and c2 one monolingual target language corpus and a conventional bilingual dictionary d can contribute in various ways to the overall resultit is important to note here that c1 and c2 can but need not be distinct and that the monolingual corpus can be identical to the target language part of c2furthermore these corpora can be taken from different domains and c1 can be smallonly c2 has to represent the domain and the vocabulary for which the translation system is built and only the size of c2 and the monolingual corpus have a substantial effect on the translation qualityit is interesting to note though that a basic statistical machine translation system with an accuracy near 50 can be built without any domainspecific bilingual corpus c2 solely on the basis of a disambiguated dictionary and the hierarchical lexicon models as table 9 shows can be comparatively small given the limited number of tag sequence pairs for which translation probabilities must be provided in the verbmobil training corpus for example there are only 261 different german and 110 different english tag sequences in the next step the second bilingual corpus c2 and d are combined and a word alignment a for both is trainedc2 d and a are presented as input to the maximumentropy training of a hierarchical lexicon model as described in 
section 42 the language model can be trained on a separate monolingual corpusas monolingual data are much easier and cheaper to compile this corpus might be larger than the target language part of c2tests were carried out on verbmobil data and on nespole dataas usual the sentences from the test sets were not used for trainingthe training corpora were used for training the parameters of ibm model 4711 verbmobilverbmobil was a project for automatic translation of spontaneously spoken dialoguesa detailed description of the statistical translation system within verbmobil is given by ney et al and by och table 5 summarizes the characteristics of the english and german parallel corpus used for training the parameters of ibm model 4a conventional dictionary complements the training corpus the vocabulary in verbmobil was considered closed there are official lists of word forms which can be produced by the speech recognizerssuch lists exist for german and english table 8 lists the characteristics of the two test sets test and develop taken from the endtoend evaluation in verbmobil the development part being meant to tune system parameters on a heldout corpus different from the training as well as the test corpusas no parameters are optimized on the development set for the methods described in this article most of the experiments were carried out on a joint set containing both test sets712 nespolenespole is a research project that ran from january 2000 to june 2002it aimed to provide multimodel support for negotiation table 5 summarizes the corpus statistics of the nespole training settable 8 provides the corresponding figures for the test set used in this workfor testing we used the alignment template translation system described in och tillmann and ney training the parameters for this system entails training of ibm model 4 parameters in both translation directions and combining the resulting alignments into one symmetrized alignmentfrom this symmetrized alignment the lexicon probabilities as well as the socalled alignment templates are extractedthe latter are translation patterns which capture phraselevel translation pairsthe following evaluation criteria were used in the experiments bleu this score proposed by papineni et al is based on the notion of modified ngram precision with n 14 all candidate unigram bigram trigram and fourgram counts are collected and clipped against their corresponding maximum reference countsthe reference ngram counts are calculated on a corpus of reference translations for each input sentencethe clipped candidate counts are summed and normalized by the total number of candidate ngramsthe geometric mean of the modified precision scores for a test corpus is calculated and multiplied by an exponential brevity penalty factor to penalize tooshort translationsbleu is an accuracy measure while the others are error measures mwer for each test sentence there is a set of reference translationsfor each translation hypothesis the edit distance to the most similar reference is calculatedsser each translated sentence is judged by a human examiner according to an error scale from 00 to 10 iser the test sentences are segmented into information items for each of these items the translation candidates are assigned either ok or an error classif the intended information is conveyed the translation of an information item is considered correct even if there are slight syntactic errors which do not seriously deteriorate the intelligibilityfor evaluating the sser and the iser we have used the 
evaluation tool EvalTrans, which is designed to facilitate the work of manually judging evaluation quality and to ensure consistency over time and across evaluators.

It is a costly and time-consuming task to compile large texts and have them translated to form bilingual corpora suitable for training the model parameters for statistical machine translation. As a consequence, it is important to investigate the amount of data necessary to sufficiently cover the vocabulary expected in testing. Furthermore, we want to examine to what extent the incorporation of morphological knowledge sources can reduce this amount of necessary data. Figure 4 shows the relation between the size of a typical German corpus and the corresponding number of different full forms. At the size of 520,000 words (the size of the Verbmobil corpus used for training), this curve still has a high growth rate. (Figure 4: Impact of corpus size on vocabulary size for the German part of the Verbmobil corpus.) To investigate the impact of the size of the bilingual corpus available for training on translation quality, three different setups for training the statistical lexicon on Verbmobil data have been defined: training on the full bilingual corpus of about 58,000 sentences, training on a subset of 5,000 sentences, and training on no task-specific bilingual corpus at all. The language model is always trained on the full English corpus; the argument for this is that monolingual corpora are always easier and less expensive to obtain than bilingual corpora. A conventional dictionary is used in all three setups to complement the bilingual corpus; in the last setup, the lexicon probabilities are trained exclusively on this dictionary. As Table 9 shows, the quality of translation drops significantly when the amount of bilingual data available during training is reduced: when the training corpus is restricted to 5,000 sentences, the SSER increases by about 7% and the ISER by about 3%. As could be expected, the translations produced by the system trained exclusively on a conventional dictionary are very poor; the SSER jumps over 60%.

7.5.1 Results on the Verbmobil Task. As was pointed out in Section 4, the hierarchical lexicon is expected to be especially useful in cases in which many of the inflected word forms to be accounted for in test do not occur during training. To systematically investigate the model's generalization capability, it has been applied on the three different setups described in Section 7.4. The training procedure was the one proposed in Section 6, which includes restructuring transformations in training and test. Table 9 summarizes the improvement achieved for all three setups. Training on 58,000 sentences plus conventional dictionary: compared to the effect of restructuring, the additional improvement achieved with the hierarchical lexicon is relatively small in this setup; the combination of all methods results in a relative improvement in terms of SSER of almost 13% and in terms of information items (ISER) of more than 16% as compared to the baseline. Training on 5,000 sentences plus conventional dictionary: restructuring alone can improve the translation quality from 37.3% to 33.6%; the benefit from the hierarchical lexicon is larger in this setup, and the resulting SSER is 31.8%. This is a relative improvement of almost 15%; the relative improvement in terms of ISER is almost 22%. Note that by applying the methods proposed here, the corpus for training can be reduced to less than 10% of the original size while increasing the SSER only from 30.2% to 31.8% compared to the baseline when using the full corpus. Training only on conventional dictionary: in this setup the impact of the hierarchical lexicon is clearly larger than the effect of the restructuring methods, because here the data sparseness problem is much more important than the word order problem. The overall relative reduction in terms of SSER is 13.7% and in terms of ISER 19.1%. An error rate of about 52% is still very poor, but it is close to what might be acceptable when only the gist of the translated document is needed, as is the case in the framework of document classification or multilingual information retrieval.

Examples taken from the Verbmobil eval2000 test set are given in Table 10.

Table 10: Examples of the effect of the hierarchical lexicon.
Input: sind sie mit einem doppelzimmer einverstanden | Baseline: are you agree with a double room | Hierarchical lexicon: would you agree with a double room
Input: mit dem zug ist es bequemer | Baseline: by train it is UNKNOWN_bequemer | Hierarchical lexicon: by train it is convenient
Input: wir haben zwei zimmer | Baseline: we have two room | Hierarchical lexicon: we have two rooms
Input: ich würde das hilton vorschlagen denn es ist das beste | Baseline: i would suggest that hilton then it is the best | Hierarchical lexicon: i would suggest the hilton because it is the best

Smoothing the lexicon probabilities over the inflected forms of the same lemma enables the translation of sind as would instead of are. The smoothed lexicon contains the translation convenient for any inflected form of bequem; the comparative more convenient would be the completely correct translation. The last two examples in the table demonstrate the effect of the disambiguating analyzer, which on the basis of the sentence context identifies zimmer as plural and das as an article (to be translated by the) instead of a pronoun (which would be translated as that). The last example demonstrates that overfitting on domain-specific training can be problematic in some cases: generally, because is a good translation for the coordinating conjunction denn, but in the appointment-scheduling domain denn is often an adverb, and it often occurs in the same sentence as dann, as in wie wäre es denn dann; the translation for this sentence is something like how about then. Because of the frequency of this domain-specific language use, the word form denn is often aligned to then in the training corpus. The hierarchical lexicon distinguishes the adverb reading and the conjunction reading, and the correct translation because is the highest-ranking one for the conjunction.

7.5.2 Results on the Nespole Task. We were provided with a small German-English corpus from the Nespole project. From Table 5 it is obvious that this task is an example of very scarce training data, and it is thus interesting to test the performance of the methods proposed in this article on this task. The same conventional dictionary as was used for the experiments on Verbmobil data complemented the small bilingual training corpus. Furthermore, the English part of the Verbmobil corpus was used in addition to the English part of the Nespole corpus for training the language model. Table 11 summarizes the results. (Table 11: Results for the hierarchical lexicon model on the Nespole task. Restructuring entails treatment of question inversion and separated verb prefixes as well as merging of phrases in both languages; the same conventional dictionary was used as in the experiments on Verbmobil, and the language model was trained on a combination of the English parts of the Nespole corpus and the Verbmobil corpus.) Information items have not been defined for this test set. An overall relative improvement of 16.5% in the SSER can be achieved.

In this article we have proposed methods of incorporating morphological and syntactic information into systems for statistical machine translation. The overall goal was to improve translation quality and to reduce the amount of parallel text necessary to train the model parameters. Substantial improvements on the Verbmobil task and the Nespole task were achieved. Some sentence-level restructuring transformations have been introduced which are motivated by knowledge about the sentence structure in the languages involved; these transformations aim at the assimilation of word orders in related sentences. A hierarchy of equivalence classes has been defined on the basis of morphological and syntactic information beyond the surface forms. The study of the effect of using information from either degree of abstraction led to the construction of hierarchical lexicon models, which combine different items of information in a log-linear way. The benefit from these combined models is twofold: first, the lexical coverage is improved, because the translation of unseen word forms can be derived by considering information from lower levels in the hierarchy; second, category ambiguity can be resolved, because syntactic context information is made locally accessible by means of annotation with morphosyntactic tags. As a side effect of the preparative work for setting up the underlying hierarchy of morphosyntactic information, those pieces of information inherent in fully inflected word forms that are not relevant for translation are detected. A method for aligning corresponding readings in conventional dictionaries containing pairs of fully inflected word forms has been proposed. The approach uses information deduced from one language side to resolve category ambiguity in the corresponding entry in the other language. The resulting disambiguated dictionaries have proven to be better suited for improving the quality of machine translation, especially if they are used in combination with the hierarchical lexicon models. The amount of bilingual training data required to achieve an acceptable quality of machine translation has been systematically investigated. All the methods mentioned previously contribute to a better exploitation of the available bilingual data and thus to improving translation quality in frameworks with scarce resources. Three setups for training the parameters of the statistical lexicon on Verbmobil data have been examined: using the full 58,000 sentences comprising the bilingual training corpus, restricting the corpus to 5,000 sentences, and using only a conventional dictionary. For each of these setups, a relative improvement in terms of subjective sentence error rate between 13% and 15% as compared to the baseline could be obtained using combinations of the methods described in this article. The amount of bilingual training data could be reduced to less than 10% of the original corpus while losing only 1.6% in accuracy as measured by the subjective sentence error rate. A relative improvement of 16.5% in terms of subjective sentence error rate could also be achieved on the Nespole task.

This work has been partially supported as part of the Verbmobil project by the German Federal Ministry of Education, Science, Research and Technology and as part of the EuTrans project by the European Union. For the provision of the Nespole data we thank the Nespole consortium listed on the project's home page; special thanks to Alon Lavie, Lori Levin, Stephan Vogel, and Alex Waibel.
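The following is a small illustrative sketch in Python of the two ideas just summarized: equivalence classes over lemma-tag representations and a log-linear combination of features at several levels of abstraction. It is not the authors' implementation; the tag names, the fixed "relevant" tag subset, and the hand-set weights are invented for illustration, and in the article the weights λ_m are obtained by maximum-entropy training rather than set by hand.

```python
import math

# A toy lemma-tag representation: (lemma, (tag1, tag2, ...)).
# Hypothetical example: the German form "ankommst" as
# ("ankommen", ("V", "IND", "PRES", "SG", "2")).

def features(e, f_lemma, f_tags, relevant=("V", "PRES")):
    """Return the active feature names at three levels of abstraction:
    level 1: target word + source lemma
    level 2: target word + source lemma + a fixed subset of 'relevant' tags
    level 3: target word + source lemma + the full tag tuple
    """
    reduced = tuple(t for t in f_tags if t in relevant)
    return [("L1", e, f_lemma),
            ("L2", e, f_lemma, reduced),
            ("L3", e, f_lemma, f_tags)]

def score(weights, e, f_lemma, f_tags):
    """Unnormalized log-linear score: exp of the sum of active feature weights."""
    return math.exp(sum(weights.get(h, 0.0) for h in features(e, f_lemma, f_tags)))

def p_lexicon(weights, e, candidates):
    """Normalize over a candidate set of (lemma, tags) readings, mimicking the
    restricted denominator of the maximum-entropy lexicon model."""
    z = sum(score(weights, e, l, t) for (l, t) in candidates)
    return {(l, t): score(weights, e, l, t) / z for (l, t) in candidates}

# Hand-set weights stand in for trained lambda_m parameters.
weights = {
    ("L1", "arrive", "ankommen"): 2.0,  # lemma level: shared by all inflected forms
    ("L3", "arrive", "ankommen", ("V", "IND", "PRES", "SG", "2")): 1.0,  # form seen in training
}

candidates = [
    ("ankommen", ("V", "IND", "PRES", "SG", "2")),  # reading seen in training
    ("ankommen", ("V", "IND", "PRES", "PL", "1")),  # full form unseen in training
]
print(p_lexicon(weights, "arrive", candidates))
```

The unseen reading still receives probability mass through the shared lemma-level feature, which is the intuition behind the improved lexical coverage discussed above.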
J04-2003
statistical machine translation with scarce resources using morphosyntactic informationin statistical machine translation correspondences between the words in the source and the target language are learned from parallel corpora and often little or no linguistic knowledge is used to structure the underlying modelsin particular existing statistical systems for machine translation often treat different inflected forms of the same lemma as if they were independent of one anotherthe bilingual training data can be better exploited by explicitly taking into account the interdependencies of related inflected formswe propose the construction of hierarchical lexicon models on the basis of equivalence classes of wordsin addition we introduce sentencelevel restructuring transformations which aim at the assimilation of word order in related sentenceswe have systematically investigated the amount of bilingual training data required to maintain an acceptable quality of machine translationthe combination of the suggested methods for improving translation quality in frameworks with scarce resources has been successfully tested we were able to reduce the amount of bilingual training data to less than 10 of the original corpus while losing only 16 in translation qualitythe improvement of the translation results is demonstrated on two germanenglish corpora taken from the verbmobil task and the nespole taskwe decompose german words into a hierarchical representation using lemmas and morphological tags and use a maxent model to combine the different levels of representation in the translation modelwe describe a method that combines morphologically split verbs in german and also reorders questions in english and german
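Before turning to the next article, here is a minimal sketch of the mWER criterion used in the evaluations above (the edit distance of a hypothesis to its most similar reference translation). It is only an illustration under stated assumptions: the whitespace tokenization and the normalization by the closest reference's length are our choices, not a description of the evaluation tooling actually used in the experiments.

```python
def edit_distance(hyp, ref):
    """Word-level Levenshtein distance between two token lists."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(hyp)][len(ref)]

def mwer(hypothesis, references):
    """Multi-reference WER: edit distance to the closest reference,
    normalized here by the length of that reference (an assumption)."""
    best = min(references, key=lambda r: edit_distance(hypothesis.split(), r.split()))
    return edit_distance(hypothesis.split(), best.split()) / max(len(best.split()), 1)

print(mwer("we have two room",
           ["we have two rooms", "there are two rooms"]))
```

For the hypothesis "we have two room" against the closest reference "we have two rooms", the sketch returns 0.25 (one substitution over four reference words).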
learning subjective language subjectivity in natural language refers to aspects of language used to express opinions evaluations and speculations there are numerous natural language processing applications for which subjectivity analysis is relevant including information extraction and text categorization the goal of this work is learning subjective language from corpora clues of subjectivity are generated and tested including lowfrequency words collocations and adjectives and verbs identified using distributional similarity the features are also examined working together in concert the features generated from different data sets using different procedures exhibit consistency in performance in that they all do better and worse on the same data sets in addition this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective and it provides the results of an annotation study assessing the subjectivity of sentences with highdensity features finally the clues are used to perform opinion piece recognition to demonstrate the utility of the knowledge acquired in this article subjectivity in natural language refers to aspects of language used to express opinions evaluations and speculationsthere are numerous natural language processing applications for which subjectivity analysis is relevant including information extraction and text categorizationthe goal of this work is learning subjective language from corporaclues of subjectivity are generated and tested including lowfrequency words collocations and adjectives and verbs identified using distributional similaritythe features are also examined working together in concertthe features generated from different data sets using different procedures exhibit consistency in performance in that they all do better and worse on the same data setsin addition this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective and it provides the results of an annotation study assessing the subjectivity of sentences with highdensity featuresfinally the clues are used to perform opinion piece recognition to demonstrate the utility of the knowledge acquired in this articlesubjectivity in natural language refers to aspects of language used to express opinions evaluations and speculations many natural language processing applications could benefit from being able to distinguish subjective language from language used to objectively present factual informationcurrent extraction and retrieval technology focuses almost exclusively on the subject matter of documentshowever additional aspects of a document influence its relevance including evidential status and attitude information extraction systems should be able to distinguish between factual information and nonfactual information questionanswering systems should distinguish between factual and speculative answersmultiperspective question answering aims to present multiple answers to the user based upon speculation or opinions derived from different sources multidocument summarization systems should summarize different opinions and perspectivesautomatic subjectivity analysis would also be useful to perform flame recognition email classification intellectual attribution in text recognition of speaker role in radio broadcasts review mining review classification style in generation and clustering documents by ideological point of view in general nearly any 
informationseeking system could benefit from knowledge of how opinionated a text is and whether or not the writer purports to objectively present factual materialto perform automatic subjectivity analysis good clues must be founda huge variety of words and phrases have subjective usages and while some manually developed resources exist such as dictionaries of affective language and subjective features in generalpurpose lexicons there is no comprehensive dictionary of subjective languagein addition many expressions with subjective usages have objective usages as well so a dictionary alone would not sufficean nlp system must disambiguate these expressions in contextthe goal of our work is learning subjective language from corporain this article we generate and test subjectivity clues and contextual features and use the knowledge we gain to recognize subjective sentences and opinionated documentstwo kinds of data are available to us a relatively small amount of data manually annotated at the expression level of wall street journal and newsgroup data and a large amount of data with existing documentlevel annotations from the wall street journal both are used as training data to identify clues of subjectivityin addition we crossvalidate the results between the two types of annotation the clues learned from the expressionlevel data are evaluated against the documentlevel annotations and those learned using the documentlevel annotations are evaluated against the expressionlevel annotationsthere were a number of motivations behind our decision to use documentlevel annotations in addition to our manual annotations to identify and evaluate clues of subjectivitythe documentlevel annotations were not produced according to our annotation scheme and were not produced for the purpose of training and evaluating an nlp systemthus they are an external influence from outside the laboratoryin addition there are a great number of these data enabling us to evaluate the results on a larger scale using multiple large test setsthis and crosstraining between the two types of annotations allows us to assess consistency in performance of the various identification proceduresgood performance in crossvalidation experiments between different types of annotations is evidence that the results are not brittlewe focus on three types of subjectivity cluesthe first are hapax legomena the set of words that appear just once in the corpuswe refer to them here as unique wordsthe set of all unique words is a feature with high frequency and significantly higher precision than baseline the second are collocations we demonstrate a straightforward method for automatically identifying collocational clues of subjectivity in textsthe method is first used to identify fixed ngrams such as of the century and get out of hereinterestingly many include noncontent words that are typically on stop lists of nlp systems the method is then used to identify an unusual form of collocation one or more positions in the collocation may be filled by any word that is unique in the test datathe third type of subjectivity clue we examine here are adjective and verb features identified using the results of a method for clustering words according to distributional similarity we hypothesized that two words may be distributionally similar because they are both potentially subjective in addition we use distributional similarity to improve estimates of unseen events a word is selected or discarded based on the precision of it together with its n most similar 
neighborswe show that the various subjectivity clues perform better and worse on the same data sets exhibiting an important consistency in performance in addition to learning and evaluating clues associated with subjectivity we address disambiguating them in context that is identifying instances of clues that are subjective in context we find that the density of clues in the surrounding context is an important influenceusing two types of annotations serves us well here tooit enables us to use manual judgments to identify parameters for disambiguating instances of automatically identified clueshighdensity clues are high precision in both the expressionlevel and documentlevel datain addition we give the results of a new annotation study showing that most highdensity clues are in subjective text spans finally we use the clues together to perform documentlevel classification to further demonstrate the utility of the acquired knowledge at the end of the article we discuss related work and conclusions subjective language is language used to express private states in the context of a text or conversationprivate state is a general covering term for opinions evaluations emotions and speculations the following are examples of subjective sentences from a variety of document typesthe first two examples are from usenet newsgroup messages the next one is from an editorial we stand in awe of the woodstock generations ability to be unceasingly fascinated by the subject of itself the next example is from a book review the last one is from a news story the cost of health care is eroding our standard of living and sapping industrial strength complains walter maher a chrysler healthandbenefits specialist in contrast the following are examples of objective sentences sentences without significant expressions of subjectivity a particular model of linguistic subjectivity underlies the current and past research in this area by wiebe and colleaguesit is most fully presented in wiebe and rapaport and wiebe it was developed to support nlp research and combines ideas from several sources in fields outside nlp especially linguistics and literary theorythe most direct influences on the model were dolezel uspensky kuroda chatman cohn fodor and especially banfield 1 the remainder of this section sketches our conceptualization of subjectivity and describes the annotation projects it underliessubjective elements are linguistic expressions of private states in contextsubjective elements are often lexical and eroding sapping and complains in they may be single words or more complex expressions purely syntactic or morphological devices may also be subjective elements a subjective element expresses the subjectivity of a source who may be the writer or someone mentioned in the textfor example the source of fascinating in is the writer while the source of the subjective elements in is maher in addition a subjective element usually has a target that is what the subjectivity is about or directed towardin the target is a tale in the target of mahers subjectivity is the cost of health carenote our parenthetical aboveaccording to the writerconcerning mahers subjectivitymaher is not directly speaking to us but is being quoted by the writerthus the source is a nested source which we notate this represents the fact that the subjectivity is being attributed to maher by the writersince sources are not directly addressed by the experiments presented in this article we merely illustrate the idea here with an example to give the reader an 
idea the foreign ministry said thursday that it was surprised to put it mildly by the yous state departments criticism of russias human rights record and objected in particular to the odious section on chechnya to put it mildly criticism objected odious consider surprised to put it mildlythis refers to a private state of the foreign ministry this is in the context of the foreign ministry said which is in a sentence written by the writerthis gives us the threelevel source the phrase to put it mildly which expresses sarcasm is attributed to the foreign ministry by the writer so its source is the subjective element criticism has a deeply nested source according to the writer the foreign ministry said it is surprised by the yous state departments criticismthe nestedsource representation allows us to pinpoint the subjectivity in a sentencefor example there is no subjectivity attributed directly to the writer in the above sentence at the level of the writer the sentence merely says that someone said something and objected to something if the sentence started the magnificent foreign ministry said then we would have an additional subjective element magnificent with source note that subjective does not mean not trueconsider the sentence john criticized mary for smokingthe verb criticized is a subjective element expressing negative evaluation with nested source but this does not mean that john does not believe that mary smokessimilarly objective does not mean truea sentence is objective if the language used to convey the information suggests that facts are being presented in the context of the discourse material is objectively presented as if it were truewhether or not the source truly believes the information and whether or not the information is in fact true are considerations outside the purview of a theory of linguistic subjectivityan aspect of subjectivity highlighted when we are working with nlp applications is ambiguitymany words with subjective usages may be used objectivelyexamples are sapping and erodingin they are used subjectively but one can easily imagine objective usages in a scientific domain for examplethus an nlp system may not merely consult a list of lexical items to accurately identify subjective language but must disambiguate words phrases and sentences in contextin our terminology a potential subjective element is a linguistic element that may be used to express subjectivitya subjective element is an instance of a potential subjective element in a particular context that is indeed subjective in that context in this article we focus on learning lexical items that are associated with subjectivity and then using them in concert to disambiguate instances of them in our subjectivity annotation projects we do not give the annotators lists of particular words and phrases to look forrather we ask them to label sentences according to their interpretations in contextas a result the annotators consider a large variety of expressions when performing annotationswe use data that have been manually annotated at the expression level the sentence level and the document levelfor diversity we use data from the wall street journal treebank as well as data from a corpus of usenet newsgroup messagestable 1 summarizes the data sets and annotations used in this articlenone of the datasets overlapthe annotation types listed in the table are those used in the experiments presented in this articlein our first subjectivity annotation project a corpus of sentences from the wall street journal treebank 
corpus was annotated at the sentence level by multiple judgesthe judges were instructed to classify a sentence as subjective if it contained any significant expressions of subjectivity attributed to either the writer or someone mentioned in the text and to classify the sentence as objective otherwiseafter multiple rounds of training the annotators independently annotated a fresh test set of 500 sentences from wsjsethey achieved an average pairwise kappa score of 070 over the entire test set an average pairwise kappa score of 080 for the 85 of the test set for which the annotators were somewhat sure of their judgments and an average pairwise kappa score of 088 for the 70 of the test set for which the annotators were very sure of their judgmentswe later asked the same annotators to identify the subjective elements in wsjsespecifically each annotator was given the subjective sentences he identified in the previous study and asked to put brackets around the words he believed caused the sentence to be classified as subjective2 for example they paid more for for reposting his responseno other instructions were given to the annotators and no training was performed for the expressionlevel taska single round of tagging was performed with no communication between annotatorsthere are techniques for analyzing agreement when annotations involve segment boundaries but our focus in this article is on wordsthus our analyses are at the word level each word is classified as either appearing in a subjective element or notpunctuation and numbers are excluded from the analysesthe kappa value for word agreement in this study is 042another twolevel annotation project was performed in wiebe et al this time involving documentlevel and expressionlevel annotations of newsgroup data in that project we were interested in annotating flames inflammatory messages in newsgroups or listservsnote that inflammatory language is a kind of subjective languagethe annotators were instructed to mark a message as a flame if the main intention of the message is a personal attack and the message contains insulting or abusive languageafter multiple rounds of training three annotators independently annotated a fresh test set of 88 messages from ngfethe average pairwise percentage agreement is 92 and the average pairwise kappa value is 078these results are comparable to those of spertus who reports 98 agreement on noninflammatory messages and 64 agreement on inflammatory messagestwo of the annotators were then asked to identify the flame elements in the entire corpus ngfeflame elements are the subset of subjective elements that are perceived to be inflammatorythe two annotators were asked to do this in the entire corpus even those messages not identified as flames because messages that were not judged to be flames at the document level may contain some individual inflammatory phrasesas above no training was performed for the expressionlevel task and a single round of tagging was performed without communication between annotatorsagreement was measured in the same way as in the subjectiveelement study abovethe kappa value for flame element annotations in corpus ngfe is 046an additional annotation project involved a single annotator who performed subjectiveelement annotations on the newsgroup corpus ngsethe agreement results above suggest that good levels of agreement can be achieved at higher levels of classification but agreement at the expression level is more challengingthe agreement values are lower for the expressionlevel annotations 
but are still much higher than that expected by chancenote that our wordbased analysis of agreement is a tough measure because it requires that exactly the same words be identified by both annotatorsconsider the following example from wsjse d m played the role jeans a of long hair and of judge d in the example consistently identifies entire phrases as subjective while judge m prefers to select discrete lexical itemsdespite such differences between annotators the expressionlevel annotations proved very useful for exploring hypotheses and generating features as described belowsince this article was written a new annotation project has been completeda 10000sentence corpus of englishlanguage versions of world news articles has been annotated with detailed subjectivity information as part of a project investigating multipleperspective question answering these annotations are much more detailed than the annotations used in this article the interannotator agreement scores for the new corpus are high and are improvements over the results of the studies described above the current article uses existing documentlevel subjective classes namely editorials letters to the editor arts leisure reviews and viewpoints in the wall street journalthese are subjective classes in the sense that they are text categories for which subjectivity is a key aspectwe refer to them collectively as opinion piecesall other types of documents in the wall street journal are collectively referred to as nonopinion piecesnote that opinion pieces are not 100 subjectivefor example editorials contain objective sentences presenting facts supporting the writers argument and reviews contain sentences objectively presenting facts about the product beign reviewedsimilarly nonopinion pieces are not 100 objectivenews reports present opinions and reactions to reported events they often contain segments starting with expressions such as critics claim and supporters arguein addition quotedspeech sentences in which individuals express their subjectivity are often included for concreteness let us consider wsjse which recall has been manually annotated at the sentence levelin wsjse 70 of the sentences in opinion pieces are subjective and 30 are objectivein nonopinion pieces 44 of the sentences are subjective and only 56 are objectivethus while there is a higher concentration of subjective sentences in opinion versus nonopinion pieces there are many subjective sentences in nonopinion pieces and objective sentences in opinion piecesan inspection of some data reveals that some editorial and review articles are not marked as such by the wall street journalfor example there are articles whose purpose is to present an argument rather than cover a news story but they are not explicitly labeled as editorials by the wall street journalthus the opinion piece annotations of data sets op1 and op2 in table 1 have been manually refinedthe annotation instructions were simply to identify any additional opinion pieces that were not marked as suchto test the reliability of this annotation two judges independently annotated two wall street journal files w922 and w933 each containing approximately 160000 wordsthis is an annotation lite task with no training the annotators achieved kappa values of 094 and 095 and each spent an average of three hours per wall street journal filethe goal in this section is to learn lexical subjectivity clues of various types single words as well as collocationssome require no training data some are learned using the expressionlevel 
subjectiveelement annotations as training data and some are learned using the documentlevel opinion piece annotations as training data all of the clues are evaluated with respect to the documentlevel opinion piece annotationswhile these evaluations are our focus because many more opinion piece than subjectiveelement data exist we do evaluate the clues learned from the opinion piece data on the subjectiveelement data as wellthus we crossvalidate the results both ways between the two types of annotationsthroughout this section we evaluate sets of clues directly by measuring the proportion of clues that appear in subjective documents or expressions seeking those that appear more often than expectedin later sections the clues are used together to find subjective sentences and to perform text categorizationthe following paragraphs give details of the evaluation and experimental design used in this sectionthe proportion of clues in subjective documents or expressions is their precisionspecifically the precision of a set s with respect to opinion pieces is number of instances of members of s in opinion pieces total number of instances of members of s in the data the precision of a set s with respect to subjective elements is number of instances of members of s in subjective elements total number of instances of members of s in the data in the above s is a set of types the counts are of tokens of members of s why use a set rather than individual itemsmany good clues of subjectivity occur with low frequency in fact as we shall see below uniqueness in the corpus is an informative feature for subjectivity classificationthus we do not want to discard lowfrequency clues because they are a valuable source of information and we do not want to evaluate individual lowfrequency lexical items because the results would be unreliableour strategy is thus to identify and evaluate sets of words and phrases rather than individual itemswhat kinds of results may we expectwe cannot expect absolutely high precision with respect to the opinion piece classifications even for strong clues for three reasonsfirst for our purposes the data are noisyas mentioned above while the proportion of subjective sentences is higher in opinion than in nonopinion pieces the proportions are not 100 and 0 opinion pieces contain objective sentences and nonopinion pieces contain subjective sentencessecond we are trying to learn lexical items associated with subjectivity that is psesas discussed above many words and phrases with subjective usages have objective usages as wellthus even in perfect data with no noise we would not expect 100 precisionthird the distribution of opinions and nonopinions is highly skewed in favor of nonopinions only 9 of the articles in the combination of op1 and op2 are opinion piecesin this work increases in precision over a baseline precision are used as evidence that promising sets of pses have been foundour main baseline for comparison is the number of word instances in opinion pieces divided by the total number of word instances baseline precision number of word instances in opinion pieces total number of word instances frequencies and increases in precision of unique words in subjectiveelement databaseline frequency is the total number of words and baseline precision is the proportion of words in subjective elementswords and phrases with higher proportions than this appear more than expected in opinion piecesto further evaluate the quality of a set of pses we also perform the following significance testfor a 
set of pses in a given data set we test the significance of the difference between the proportion of words in opinion pieces that are pses and the proportion of words in nonopinion pieces that are pses using the zsignificance test for two proportionsbefore we continue there are a few more technical items to mention concerning the data preparation and experimental design in this section we show that lowfrequency words are associated with subjectivity in both the subjectiveelement and opinion piece dataapparently people are creative when they are being opinionatedtable 2 gives results for unique words in subjectiveelement datarecall that unique words are those that appear just once in the corpus that is hapax legomenathe first row of table 2 gives the frequency of unique words in wsjse followed by the percentagepoint improvements in precision over baseline for unique words in subjective elements marked by two annotators the second row gives baseline frequency and precisionsbaseline frequency is the total number of words in wsjsebaseline precision for an annotator is the proportion of words included in subjective elements by that annotatorspecifically consider annotator m the baseline precision of words in subjective elements marked by m is 008 frequencies and increases in precision for words that appear exactly once in the data sets composing op1for each data set baseline frequency is the total number of words and baseline precision is the proportion of words in opinion piecesw904 w910 w922 w933 freq prec freq prec freq prec freq prec unique words 4794 15 4763 16 4274 11 4567 11 baseline 156421 19 156334 18 155135 13 153634 14 but the precision of unique words in these same annotations is 020 012 points higher than the baselinethis is a 150 improvement over the baselinethe number of unique words in opinion pieces is also higher than expectedtable 3 compares the precision of the set of unique words to the baseline precision in the four wsj files composing op1before this analysis was performed numbers were removed from the data the number of words in each data set and baseline precisions are listed at the bottom of the tablethe freq columns give total frequenciesthe prec columns show the percentagepoint improvements in precision over baselinefor example in w910 unique words have precision 034 018 baseline plus an improvement over baseline of 016the difference in the proportion of words that are unique in opinion pieces and the proportion of words that are unique in nonopinion pieces is highly significant with p 028 algorithm for selecting adjective and verb features using distributional similarity motivation for experimenting with it to identify pses was twofoldfirst we hypothesized that words might be distributionally similar because they share pragmatic usages such as expressing subjectivity even if they are not close synonymssecond as shown above lowfrequency words appear more often in subjective texts than expectedwe did not want to discard all lowfrequency words from consideration but cannot effectively judge the suitability of individual wordsthus to decide whether to retain a word as a pse we consider the precision not of the individual word but of the word together with a cluster of words similar to itmany variants of distributional similarity have been used in nlp dekang lins method is used herein contrast to many implementations which focus exclusively on verbnoun relationships lins method incorporates a variety of syntactic relationsthis is important for subjectivity recognition 
because pses are not limited to verbnoun relationshipsin addition lins results are freely availablea set of seed words begins the processfor each seed si the precision of the set siucin in the training data is calculated where cin is the set of n words most similar to si according to lins methodif the precision of si you cin is greater than a threshold t then the words in this set are retained as psesif it is not neither si nor the words in cin are retainedthe union of the retained sets will be denoted rtn that is the union of all sets si you cin with precision on the training set t in wiebe the seeds were extracted from the subjectiveelement annotations in corpus wsjsespecifically the seeds were the adjectives that appear at least once in a subjective element in wsjsein this article the opinion piece corpus is used to move beyond the manual annotations and small corpus of the earlier work and a much looser criterion is used to choose the initial seeds all of the adjectives in the training data are usedthe algorithm for the process is given in figure 2there is one small difference for adjectives and verbs noted in the figure that is the precision threshold of 028 for adjectives versus 023 for verbsthese thresholds were determined using validation dataseeds and their clusters are assessed on a training set for many parameter settings as mentioned above each parameter pair yields a set of adjectives rtn that is the union of all sets si you cin with precision on the training set t a subset adjpses of those sets is chosen based on precision and frequency in a validation setfinally the adjpses are tested on the test settable 7 shows the results for four opinion piece test setsmultiple trainingvalidation data set pairs are used for each test set as given in table 7the results are for the union of the adjectives chosen for each pairthe freq columns give total frequencies and the prec columns show the improvements in precision from the baselinefor each data set the difference between the proportion of instances of adjpses in opinion pieces and the proportion in nonopinion pieces is significant the same is true for verbpses in the interests of testing consistency table 8 shows the results of assessing the adjective and verb features generated from opinion piece data on the subjectiveelement datathe left side of the table gives baseline figures for each set of subjectiveelement annotationsthe right side of the table gives the average frequencies and increases in precision over baseline for the adjpses and verbpses sets on the subjectiveelement datathe baseline figures in the table are the frequencies and precisions of the sets of adjectives and verbs that appear at least once in a subjective elementsince these sets include words that appear just once in the corpus the baseline precision is a challenging onetesting the verbpses and adjpses on the subjectiveelement data reveals some interesting consistencies for these subjectivity cluesthe precision increases of the verbpses on the subjectiveelement data are comparable to their increases on the opinion piece datasimilarly the precision increases of the adjpses on the subjectiveelement data are as good as or better than the performance of this set of pses on the opinion piece datafinally the precisions increases for the adjpses are higher than for the verbpses on all data setsthis is again consistent with the higher performance of the adjpses sets in the opinion piece data setsin this section we examine the various types of clues used togetherin 
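A sketch of the seed-and-cluster filtering step just described: `most_similar(word, n)` stands in for a lookup into Lin's precomputed similarity lists and `precision(word_set)` for the precision of a set on the training data; both are hypothetical wrappers, and the thresholds noted in the comment are the ones reported above (0.28 for adjectives, 0.23 for verbs).

```python
from typing import Callable, Iterable, List, Set

def select_distsim_pses(
    seeds: Iterable[str],
    most_similar: Callable[[str, int], List[str]],
    precision: Callable[[Set[str]], float],
    n: int,
    threshold: float,   # 0.28 for adjectives, 0.23 for verbs in the article
) -> Set[str]:
    """For each seed s_i, form the set {s_i} U C_{i,n} of the seed plus its n
    most distributionally similar words; retain the whole set if its precision
    on the training data exceeds the threshold, otherwise discard both the
    seed and its cluster. Returns R(T, n), the union of all retained sets."""
    retained: Set[str] = set()
    for seed in seeds:
        cluster = {seed} | set(most_similar(seed, n))
        if precision(cluster) > threshold:
            retained |= cluster
    return retained
```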
preparation for this work all instances in op1 and op2 of all of the pses identified as described in section 3 have been automatically identifiedall training to define the pse instances in op1 was performed on data separate from op1 and all training to define the pse instances in op2 was performed on data separate from op2table 9 summarizes the results from previous sections in which the opinion piece data are used for testingthe performance of the various features is consistently good or bad on the same data sets the performance is better for all features on w910 and w904 than on w922 and w933 this is so despite the fact that the features were generated using different procedures and data the algorithm for calculating density in subjectiveelement data adjectives and verbs were generated from wsj documentlevel opinion piece classifications the ngram features were generated from newsgroup and wsj expressionlevel subjectiveelement classifications and the unique unigram feature requires no trainingthis consistency in performance suggests that the results are not brittlein wiebe whether a pse is interpreted to be subjective depends in part on how subjective the surrounding context iswe explore this idea in the current work assessing whether pses are more likely to be subjective if they are surrounded by subjective elementsin particular we experiment with a density feature to decide whether or not a pse instance is subjective if a sufficient number of subjective elements are nearby then the pse instance is considered to be subjective otherwise it is discardedthe density parameters are a window size w and a frequency threshold t in this section we explore the density of manually annotated pses in subjectiveelement data and choose density parameters to use in section 44 in which we apply them to automatically identified pses in opinion piece datathe process for calculating density in the subjectiveelement data is given in figure 3the pses are defined to be all adjectives verbs modals nouns and adverbs that appear at least once in a subjective element with the exception of some stop words note that these pses depend only on the subjectiveelement manual annotations not on the automatically identified features used elsewhere in the article or on the documentlevel opinion piece classespseinsts is the set of pse instances to be disambiguated hidensity will be the subset of pseinsts that are retainedin the loop the density of each pse instance p is calculatedthis is the number of subjective elements that begin or end in the w words preceding or following p p is retained if its density is at least t lines 89 of the algorithm assess the precision of the original and new sets of pse instancesif prec is greater than prec then there is evidence that the number of subjective elements near a pse instance is related to its subjectivity in contextto create more data points for this analysis wsjse was split into two and annotations of the two judges are considered separatelywsjse2d for example refers to ds annotations of wsjse2the process in figure 3 was repeated for different parameter settings on each of the se data setsto find good parameter settings the results for each data set were sorted into fivepoint precision intervals and then sorted by frequency within each intervalinformation for the top three precision intervals for each data set are shown in table 10 specifically the parameter values and the frequency and precision of the most frequent result in each intervalthe intervals are in the rows labeled 
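The density filter of figure 3 might be sketched as follows; word offsets for PSE instances and (start, end) offsets for subjective elements are hypothetical input representations.

```python
from typing import Iterable, List, Tuple

def high_density_instances(
    pse_positions: Iterable[int],
    element_spans: List[Tuple[int, int]],
    w: int,   # window size in words
    t: int,   # frequency threshold
) -> List[int]:
    """Keep a PSE instance at word position p if at least t subjective
    elements begin or end within the w words preceding or following p."""
    kept = []
    for p in pse_positions:
        lo, hi = p - w, p + w
        density = sum(
            1 for start, end in element_spans
            if lo <= start <= hi or lo <= end <= hi
        )
        if density >= t:
            kept.append(p)
    return kept
```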
rangefor example the top three precision intervals for wsjse1m 087092 082087 and 077082 the top of table 10 gives baseline frequencies and precisions which are ipseinstsl and prec respectively in line 8 of figure 3the parameter values exhibit a range of frequencies and precisions with the expected tradeoff between precision and frequencywe choose the following parameters to test in section 44 for each data set for each precision interval whose lower bound is at least 10 percentage points higher than the baseline for that data set the top two pairs yielding the highest frequencies in that interval are chosenamong the five data sets a total of 45 parameter pairs were so selectedthis exercise was completed once without experimenting with different parameter settingsin this section density is exploited to find subjective instances of automatically identified psesthe process is shown in figure 4there are only two differences between the algorithms in figures 3 and 4first in figure 3 density is defined in terms of the number of subjective elements nearbyhowever subjectiveelement annotations are not available in test datathus in figure 4 density is defined in terms of the algorithm for calculating density in opinion piece data number of other pse instances nearby where pseinsts consists of all instances of the automatically identified pses described in section 3 for which results are given in table 9second in figure 4 we assess precision with respect to the documentlevel classes the test data are op1an interesting question arose when we were defining the pse instances what should be done with words that are identified to be pses according to multiple criteriafor example sunny radiant and exhilarating are all unique in corpus op1 and are all members of the adjective pse feature defined for testing on op1collocations add additional complexityfor example consider the sequence and splendidly which appears in the test datathe sequence and splendidly matches the ugen2gram and the word splendidly is uniquein addition a sequence may match more than one ngram featurefor example is it that matches three fixedngram features is it is it that and it thatin the current experiments the more pses a word matches the more weight it is giventhe hypothesis behind this treatment is that additional matches represent additional evidence that a pse instance is subjectivethis hypothesis is realized as follows each match of each member of each type of pse is considered to be a pse instancethus among them there are 11 members in pseinsts for the five phrases sunny radiant exhilarating and splendidly and is it that one for each of the matches mentioned abovethe process in figure 4 was conducted with the 45 parameter pair values chosen from the subjectiveelement data as described in section 43table 11 shows results for a subset of the 45 parameters namely the most frequent parameter pair chosen from the top three precision intervals for each training setthe bottom of the table gives a baseline frequency and a baseline precision in op1 defined as pseinsts and prec respectively in line 7 of figure 4the density features result in substantial increases in precisionof the 45 parameter pairs the minimum percentage increase over baseline is 22fully 24 of the 45 parameter pairs yield increases of 200 or more 38 yield increases between 100 and 199 and 38 yield increases between 22 and 99in addition the increases are significantusing the set of highdensity pses defined by the parameter pair with the least increase over baseline we 
tested the difference in the proportion of pses in opinion pieces that are highdensity and the proportion of pses in nonopinion pieces that are highdensitythe difference between these two proportions is highly significant notice that except for one blip the precisions decrease and the frequencies increase as we go down each column in table 11the same pattern can be observed with all 45 parameter pairs but the parameter pairs are ordered in table 11 based on performance in the manually annotated subjectiveelement data not based on performance in the test datafor example the entry in the first row first column is the parameter pair giving the highest frequency in the top precision interval of wsjsem thus the relative precisions and frequencies of the parameter pairs are carried over from the training to the test datathis is quite a strong result given that the pses in the training data are from manual annotations while the pses in the test data are our automatically identified featuresto assess the subjectivity of sentences with highdensity pses we extracted the 133 sentences in corpus op2 that contain at least one highdensity pse and manually annotated themwe refer to these sentences as the systemidentified sentenceswe chose the densityparameter pair based on its precision and frequency in op1this parameter setting yields results that have relatively high precision and low frequencywe chose a lowfrequency setting to make the annotation study feasiblethe extracted sentences were independently annotated by two judgesone is a coauthor of this article and the other has performed subjectivity annotation before but is not otherwise involved in this research sentences were annotated according to the coding instructions of wiebe bruce and ohara which recall are to classify a sentence as subjective if there is a significant expression of subjectivity of either the writer or someone mentioned in the text in the sentencein addition to the subjective and objective classes a judge can tag a sentence as unsure if he or she is unsure of his or her rating or considers the sentence to be borderlinean equal number of other sentences were randomly selected from the corpus to serve as controlsthe 133 systemidentified sentences and the 133 control sentences were randomly mixed togetherthe judges were asked to annotate all 266 sentences not knowing which were systemidentified and which were controleach sentence was presented with the sentence that precedes it and the sentence that follows it in the corpus to provide some context for interpretationtable 12 shows examples of the systemidentified sentencessentences classified by both judges as objective are marked oo and those classified by both judges as subjective are marked ssbathed in cold sweat i watched these dantesque scenes holding tightly the damp hand of edek or waldeck who like me were convinced that there was no godthe japanese are amazed that a company like this exists in japan says kimindo kusaka head of the softnomics center a japanese managementresearch organizationand even if drugs were legal what evidence do you have that the habitual drug user would not continue to rob and steal to get money for clothes food or shelterthe moral cost of legalizing drugs is great but it is a cost that apparently lies outside the narrow scope of libertarian policy prescriptionsi doubt that one existsthey were upset at his committees attempt to pacify the program critics by cutting the surtax paid by the more affluent elderly and making up the loss by shifting more 
of the burden to the elderly poor and by delaying some benefits by a yearjudge 1 classified 103 of the systemidentified sentences as subjective 16 as objective and 14 as unsurejudge 2 classified 102 of the systemidentified sentences as subjective 27 as objective and 4 as unsurethe contingency table is given in table 134 the kappa value using all three classes is 060 reflecting the highly skewed distribution in favor of subjective sentences and the disagreement on the lowerfrequency classes consistent with the findings in wiebe bruce and ohara the kappa value for agreement on the sentences for which neither judge is unsure is very high 086a different breakdown of the sentences is illuminatingfor 98 of the sentences judges 1 and 2 tag the sentence as subjectiveamong the other sentences 20 appear in a block of contiguous systemidentified sentences that includes a member of ssfor example in table 12 and are in ss and is in the same block of subjective sentences as they aresimilarly is in ss and is in the same blockamong the remaining 15 sentences 6 are adjacent to subjective sentences that were not identified by our system all of those sentences contain significant expressions of subjectivity of the writer or someone mentioned in the text the criterion used in this work for classifying a sentence as subjectivesamples are shown in table 14thus 93 of the sentences identified by the system are subjective or are near subjective sentencesall the sentences together with their tags and the sentences adjacent to them are available on the web at wwwcspitteduwiebein this section we assess the usefulness of the pses identified in section 3 and listed in table 9 by using them to perform documentlevel classification of opinion piecesopinionpiece classification is a difficult task for two reasonsfirst as discussed in section 21 both opinionated and factual documents tend to be composed of a mixture of subjective and objective languagesecond the natural distribution of documents in our data is heavily skewed toward nonopinion piecesdespite these hurdles using only our pses we achieve positive results in opinionpiece classification using the basic knearestneighbor algorithm with leaveoneout crossvalidation given a document the basic knn algorithm classifies the document according to the majority classification of the documents k closest neighborsfor our purposes each document is characterized by one feature the count of all pse instances in the document normalized by document length in wordsthe distance between two documents is simply the absolute value of the difference between the normalized pse counts for the two documentswith leaveoneout crossvalidation the set of n documents to be classified is divided into a training set of size n1 and a validation set of size 1the one document in the validation set is then classified according to the majority classification of its k closestneighbor documents in the training setthis process is repeated until every document is classifiedwhich value to use for k is chosen during a preprocessing phaseduring the preprocessing phase we run the knn algorithm with leaveoneout crossvalidation on a separate training set for odd values of k from 1 to 15the value of k that results in the best classification during the preprocessing phase is the one used for later knn classificationfor the classification experiment the data set op1 was used in the preprocessing phase to select the value of k and then classification was performed on the 1222 documents in op2during training on op1 k 
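The classifier described for opinion piece recognition is simple enough to state directly; the sketch below assumes each document has already been reduced to one number, its PSE count normalized by document length in words, and performs leave-one-out majority voting with an odd k (chosen on a separate training set, 15 in the experiments reported here).

```python
from typing import List

def knn_loo_classify(features: List[float], labels: List[bool], k: int) -> List[bool]:
    """Leave-one-out k-nearest-neighbor classification on a single feature.

    features[i] = normalized PSE count of document i, labels[i] = True for an
    opinion piece. Each document is labeled by the majority class of its k
    closest documents (distance = absolute difference of the feature values)."""
    predictions = []
    for i, x in enumerate(features):
        neighbors = sorted(
            (abs(x - features[j]), labels[j])
            for j in range(len(features)) if j != i
        )[:k]
        votes = sum(1 for _, label in neighbors if label)
        predictions.append(2 * votes > k)
    return predictions
```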
equal to 15 resulted in the best classificationon the test set op2 we achieved a classification accuracy of 0939 the baseline accuracy for choosing the most frequent class was 0915our classification accuracy represents a 28 reduction in error and is significantly better than baseline according to mcnemars test the positive results from the opinion piece classification show the usefulness of the various pse features when used togetherthere has been much work in other fields including linguistics literary theory psychology philosophy and content analysis involving subjective languageas mentioned in section 2 the conceptualization underlying our manual annotations is based on work in literary theory and linguistics most directly dolezel uspensky kuroda chatman cohn fodor and banfield we also mentioned existing knowledge resources such as affective lexicons and annotations in more generalpurpose lexicons such knowledge may be used in future work to complement the work presented in this article for example to seed the distributionalsimilarity process described in section 34there is also work in fields such as content analysis and psychology on statistically characterizing texts in terms of word lists manually developed for distinctions related to subjectivityfor example hart performs counts on a manually developed list of words and rhetorical devices in political speeches to explore potential reasons for public reactionsanderson and mcmaster use fixed sets of highfrequency words to assign connotative scores to documents and sections of documents along dimensions such as how pleasant acrimonious pious or confident the text iswhat distinguishes our work from work on subjectivity in other fields is that we focus on automatically learning knowledge from corpora automatically performing contextual disambiguation and using knowledge of subjectivity in nlp applicationsthis article expands and integrates the work reported in wiebe and wilson wiebe wilson and bell wiebe et al and wiebe previous work in nlp on the same or related tasks includes sentencelevel and documentlevel subjectivity classificationsat the sentence level wiebe bruce and ohara developed a machine learning system to classify sentences as subjective or objectivethe accuracy of the system was more than 20 percentage points higher than a baseline accuracyfive partofspeech features two lexical features and a paragraph feature were usedthese results suggested to us that there are clues to subjectivity that might be learned automatically from text and motivated the work reported in the current articlethe system was tested in 10fold cross validation experiments using corpus wsjse a small corpus of only 1001 sentencesas discussed in section 1 a main goal of our current work is to exploit existing documentlevel annotations because they enable us to use much larger data sets they were created outside our research group and they allow us to assess consistency of performance by crossvalidating between our manual annotations and the existing documentlevel annotationsbecause the documentlevel data are not annotated at the sentence level sentencelevel classification is not highlighted in this articlethe new sentence annotation study to evaluate sentences with highdensity features uses different data from wsjse because some of the features were identified using wsjse as training dataother previous work in nlp has addressed related documentlevel classificationsspertus developed a system for recognizing inflammatory messagesas mentioned earlier in the 
article inflammatory language is a type of subjective language so the task she addresses is closely related to oursshe uses machine learning to select among manually developed featuresin contrast the focus in our work is on automatically identifying features from the dataa number of projects investigating genre detection include editorials as one of the targeted genresfor example in karlgren and cutting editorials are one of fifteen categories and in kessler nunberg and schutze editorials are one of sixgiven the goal of these works to perform genre detection in general they use lowlevel features that are not specific to editorialsneither shows significant improvements for editorial recognitionargamon koppel and avneri address a slightly different task though it does involve editorialstheir goal is to distinguish not only for example news from editorials but also these categories in different publicationstheir best results are distinguishing among the news categories of different publications their lowest results involve editorialsbecause we focus specifically on distinguishing opinion pieces from nonopinion pieces our results are better than theirs for those categoriesin addition in contrast to the above studies the focus of our work is on learning features of subjectivitywe perform opinion piece recognition in order to assess the usefulness of the various features when used togetherother previous nlp research has used features similar to ours for other nlp taskslowfrequency words have been used as features in information extraction and text categorization a number of researchers have worked on mining collocations from text to extend lexicographic resources for machine translation and word sense disambiguation in samuel carberry and vijayshankers work on identifying collocations for dialogact recognition a filter similar to ours was used to eliminate redundant ngram features ngrams were eliminated if they contained substrings with the same entropy score as or a better entropy score than the ngramwhile it is common in studies of collocations to omit lowfrequency words and expressions from analysis because they give rise to invalid or unrealistic statistical measures we are able to identify higherprecision collocations by including placeholders for unique words we are not aware of other work that uses such collocations as we dofeatures identified using distributional similarity have previously been used for syntactic and semantic disambiguation and to develop lexical resources from corpora we are not aware of other work identifying and using density parameters as described in this articlesince our experiments other related work in nlp has been performedsome of this work addresses related but different classification tasksthree studies classify reviews as positive or negative the input is assumed to be a review so this task does not include finding subjective documents in the first placethe first study listed above uses a variation of the semantic similarity procedure presented in wiebe the third uses ngram features identified with a variation of the procedure presented in wiebe wilson and bell tong addresses finding sentiment timelines that is tracking sentiments over time in multiple documentsfor clues of subjectivity he uses manually developed lexical rules rather than automatically learning them from corporasimilarly gordon et al use manually developed grammars to detect some types of subjective languageagrawal et al partition newsgroup authors into camps based on quotation linksthey do 
not attempt to recognize subjective languagethe most closely related new work is riloff wiebe and wilson riloff and wiebe and yu and hatzivassiloglou the first two focus on finding additional types of subjective clues yu and hatzivassiloglou perform opinion text classificationthey also use existing wsj document classes for training and testing but they do not include the entire corpus in their experiments as we dotheir opinion piece class consists only of editorials and letters to the editor and their nonopinion class consists only of business and newsthey report an average fmeasure of 965our result of 94 accuracy on document level classification is almost comparablethey also perform sentencelevel classificationwe anticipate that knowledge of subjective language may be usefully exploited in a number of nlp application areas and hope that the work presented in this article will encourage others to experiment with subjective language in their applicationsmore generally there are many types of artificial intelligence systems for which stateofaffairs types such as beliefs and desires are central including systems that perform plan recognition for understanding narratives for argument understanding for understanding stories from different perspectives and for generating language under different pragmatic constraints knowledge of linguistic subjectivity could enhance the abilities of such systems to recognize and generate expressions referring to such states of affairs in natural textknowledge of subjective language promises to be beneficial for many nlp applications including information extraction question answering text categorization and summarizationthis article has presented the results of an empirical study in acquiring knowledge of subjective language from corpora in which a number of feature types were learned and evaluated on different types of data with positive resultswe showed that unique words are subjective more often than expected and that unique words are valuable clues to subjectivitywe also presented a procedure for automatically identifying potentially subjective collocations including fixed collocations and collocations with placeholders for unique wordsin addition we used the results of a method for clustering words according to distributional similarity to identify adjectival and verbal clues of subjectivitytable 9 summarizes the results of testing all of the above types of psesall show increased precision in the evaluationstogether they show consistency in performancein almost all cases they perform better or worse on the same data sets despite the fact that different kinds of data and procedures are used to learn themin addition pses learned using expressionlevel subjectiveelement data have precisions higher than baseline on documentlevel opinion piece data and vice versahaving a large stable of pses it was important to disambiguate whether or not pse instances are subjective in the contexts in which they appearwe discovered that the density of other potentially subjective expressions in the surrounding context is importantif a clue is surrounded by a sufficient number of other clues then it is more likely to be subjective than if there were notparameter values were selected using training data manually annotated at the expression level for subjective elements and then tested on data annotated at the document level for opinion piecesall of the selected parameters led to increases in precision on the test data and most lead to increases over 100once again we found 
consistency between expressionlevel and documentlevel annotationspse sets defined by density have high precision in both the subjectiveelement data and the opinion piece datathe large differences between training and testing suggest that our results are not brittleusing a density feature selected from a training set sentences containing highdensity pses were extracted from a separate test set and manually annotated by two judgesfully 93 of the sentences extracted were found to be subjective or to be near subjective sentencesadmittedly the chosen density feature is a highprecision lowfrequency onebut since the process is fully automatic the feature could be applied to more unannotated text to identify regions containing subjective sentencesin addition because the precision and frequency of the density features are stable across data sets lowerprecision but higherfrequency options are availablefinally the value of the various types of pses was demonstrated with the task of opinion piece classificationusing the knearestneighbor classification algorithm with leaveoneout crossvalidation a classification accuracy of 94 was achieved on a large test set with a reduction in error of 28 from the baselinefuture work is required to determine how to exploit density features to improve the performance of text categorization algorithmsanother area of future work is searching for clues to objectivity such as the politeness features used by spertus still another is identifying the type of a subjective expression extending work such as hatzivassiloglou and mckeown on classifying lexemes to the classification of instances in context in addition it would be illuminating to apply our system to data annotated with discourse trees we hypothesize that most objective sentences identified by our system are dominated in the discourse by subjective sentences and that we are moving toward identifying subjective discourse segmentswe thank the anonymous reviewers for their helpful and constructive commentsthis research was supported in part by the office of naval research under grants n000149510776 and n000140110381
J04-3002
learning subjective languagesubjectivity in natural language refers to aspects of language used to express opinions evaluations and speculationsthere are numerous natural language processing applications for which subjectivity analysis is relevant including information extraction and text categorizationthe goal of this work is learning subjective language from corporaclues of subjectivity are generated and tested including lowfrequency words collocations and adjectives and verbs identified using distributional similaritythe features are also examined working together in concertthe features generated from different data sets using different procedures exhibit consistency in performance in that they all do better and worse on the same data setsin addition this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective and it provides the results of an annotation study assessing the subjectivity of sentences with highdensity featuresfinally the clues are used to perform opinion piece recognition to demonstrate the utility of the knowledge acquired in this articlewe show that lowfrequency words and some collocations are a good indicators of subjectivity
the alignment template approach to statistical machine translation a phrasebased statistical machine translation approach the alignment template approach is described this translation approach allows for general manytomany relations between words thereby the context of words is taken into account in the translation model and local changes in word order from source to target language can be learned explicitly the model is described using a loglinear modeling approach which is a generalization of the often used sourcechannel approach thereby the model is easier to extend than classical statistical machine translation systems we describe in detail the process for learning phrasal translations the feature functions used and the search algorithm the evaluation of this approach is performed on three different for the germanenglish speech we analyze the effect of various syscomponents on the frenchenglish canadian the alignment template system obtains significantly better results than a singlewordbased translation model in the chineseenglish 2002 national institute of standards and technology machine translation evaluation it yields statistically significantly better nist scores than all competing research and commercial translation systems a phrasebased statistical machine translation approach the alignment template approach is describedthis translation approach allows for general manytomany relations between wordsthereby the context of words is taken into account in the translation model and local changes in word order from source to target language can be learned explicitlythe model is described using a loglinear modeling approach which is a generalization of the often used sourcechannel approachthereby the model is easier to extend than classical statistical machine translation systemswe describe in detail the process for learning phrasal translations the feature functions used and the search algorithmthe evaluation of this approach is performed on three different tasksfor the germanenglish speech verbmobil task we analyze the effect of various system componentson the frenchenglish canadian hansards task the alignment template system obtains significantly better results than a singlewordbased translation modelin the chineseenglish 2002 national institute of standards and technology machine translation evaluation it yields statistically significantly better nist scores than all competing research and commercial translation systemsmachine translation is a hard problem because natural languages are highly complex many words have various meanings and different possible translations sentences might have various readings and the relationships between linguistic entities are often vaguein addition it is sometimes necessary to take world knowledge into accountthe number of relevant dependencies is much too large and those dependencies are too complex to take them all into account in a machine translation systemgiven these boundary conditions a machine translation system has to make decisions given incomplete knowledgein such a case a principled approach to solving that problem is to use the concepts of statistical decision theory to try to make optimal decisions given incomplete knowledgethis is the goal of statistical machine translationthe use of statistical techniques in machine translation has led to dramatic improvements in the quality of research systems in recent yearsfor example the statistical approaches of the verbmobil evaluations or the yous national institute of standards and technology 
tides mt evaluations 2001 through 20031 obtain the best resultsin addition the field of statistical machine translation is rapidly progressing and the quality of systems is getting better and betteran important factor in these improvements is definitely the availability of large amounts of data for training statistical modelsyet the modeling training and search methods have also improved since the field of statistical machine translation was pioneered by ibm in the late 1980s and early 1990s this article focuses on an important improvement namely the use of phrases instead of just single words as the core elements of the statistical translation modelwe describe in section 2 the basics of our statistical translation modelwe suggest the use of a loglinear model to incorporate the various knowledge sources into an overall translation system and to perform discriminative training of the free model parametersthis approach can be seen as a generalization of the originally suggested sourcechannel modeling framework for statistical machine translationin section 3 we describe the statistical alignment models used to obtain a word alignment and techniques for learning phrase translations from word alignmentshere the term phrase just refers to a consecutive sequence of words occurring in text and has to be distinguished from the use of the term in a linguistic sensethe learned bilingual phrases are not constrained by linguistic phrase boundariescompared to the wordbased statistical translation models in brown et al this model is based on a phrase lexicon instead of a singlewordbased lexiconlooking at the results of the recent machine translation evaluations this approach seems currently to give the best results and an increasing number of researchers are working on different methods for learning phrase translation lexica for machine translation purposes our approach to learning a phrase translation lexicon works in two stages in the first stage we compute an alignment between words and in the second stage we extract the aligned phrase pairsin our machine translation system we then use generalized versions of these phrases called alignment templates that also include the word alignment and use word classes instead of the words themselvesin section 4 we describe the various components of the statistical translation modelthe backbone of the translation model is the alignment template feature function which requires that a translation of a new sentence be composed of a set of alignment templates that covers the source sentence and the produced translationother feature functions score the wellformedness of the produced target language sentence the number of produced words or the order of the alignment templatesnote that all components of our statistical machine translation model are purely datadriven and that there is no need for linguistically annotated corporathis is an important advantage compared to syntaxbased translation models that require a parser for source or target languagein section 5 we describe in detail our search algorithm and discuss an efficient implementationwe use a dynamicprogrammingbased beam search algorithm that allows a tradeoff between efficiency and qualitywe also discuss the use of heuristic functions to reduce the number of search errors for a fixed beam sizein section 6 we describe various results obtained on different tasksfor the germanenglish verbmobil task we analyze the effect of various system compoarchitecture of the translation approach based on a loglinear modeling 
approach nentson the frenchenglish canadian hansards task the alignment template system obtains significantly better results than a singlewordbased translation modelin the chineseenglish 2002 nist machine translation evaluation it yields results that are significantly better statistically than all competing research and commercial translation systemswe are given a source sentence f f1j f1 fj fj which is to be translated into a target sentence e ei1 e1 ei eiamong all possible target sentences we will choose the sentence with the highest probability2 the argmax operation denotes the search problem that is the generation of the output sentence in the target languageas an alternative to the often used sourcechannel approach we directly model the posterior probability pr an especially wellfounded framework for doing this is the maximumentropy framework in this framework we have a set of m feature functions hm m 1 m for each feature function there exists a model 2 the notational convention employed in this article is as followswe use the symbol pr to denote general probability distributions with no specific assumptionsin contrast for modelbased probability distributions we use the generic symbol pthis approach has been suggested by papineni roukos and ward for a natural language understanding taskwe obtain the following decision rule hence the timeconsuming renormalization in equation is not needed in searchthe overall architecture of the loglinear modeling approach is summarized in figure 1a standard criterion on a parallel training corpus consisting of s sentence pairs s 1 s for loglinear models is the maximum class posterior probability criterion which can be derived from the maximumentropy principle this corresponds to maximizing the equivocation or maximizing the likelihood of the directtranslation modelthis direct optimization of the posterior probability in bayes decision rule is referred to as discriminative training because we directly take into account the overlap in the probability distributionsthe optimization problem under this criterion has very nice properties there is one unique global optimum and there are algorithms that are guaranteed to converge to the global optimumyet the ultimate goal is to obtain good translation quality on unseen test dataan alternative training criterion therefore directly optimizes translation quality as measured by an automatic evaluation criterion typically the translation probability pr is decomposed via additional hidden variablesto include these dependencies in our loglinear model we extend the feature functions to include the dependence on the additional hidden variableusing for example the alignment aj1 as hidden variable we obtain m feature functions of the form hm m 1 m and the following model obviously we can perform the same step for translation models with an even richer set of hidden variables than only the alignment aj1in this section we describe methods for learning the singleword and phrasebased translation lexica that are the basis of the machine translation system described in section 4first we introduce the basic concepts of statistical alignment models which are used to learn word alignmentthen we describe how these alignments can be used to learn bilingual phrasal translationsin alignment models pr a hidden alignment a aj1 is introduced that describes a mapping from a source position j to a target position ajthe relationship between the translation model and the alignment model is given by the alignment aj1 may contain alignments 
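Since the renormalization cancels in the argmax, the decision rule reduces to maximizing a weighted feature sum over candidate translations. A minimal sketch, with a hypothetical candidate enumerator standing in for the beam search of section 5:

```python
import math
from typing import Callable, Iterable, List, Optional

FeatureFn = Callable[[List[str], List[str]], float]  # h_m(f, e)

def loglinear_decode(
    source: List[str],
    candidates: Iterable[List[str]],
    features: List[FeatureFn],
    lambdas: List[float],
) -> Optional[List[str]]:
    """Return the candidate e maximizing sum_m lambda_m * h_m(f, e);
    the normalization constant of the maximum-entropy model is not needed."""
    best, best_score = None, -math.inf
    for e in candidates:
        score = sum(lam * h(source, e) for lam, h in zip(lambdas, features))
        if score > best_score:
            best, best_score = e, score
    return best
```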
aj 0 with the empty word e0 to account for source words that are not aligned with any target wordin general the statistical model depends on a set of unknown parameters θ that is learned from training datato express the dependence of the model on the parameter set we use the following notation a detailed description of different specific statistical alignment models can be found in brown et al and och and ney here we use the hidden markov model alignment model and model 4 of brown et al to compute the word alignment for the parallel training corpusto train the unknown parameters θ we are given a parallel training corpus consisting of s sentence pairs j s 1 sjfor each sentence pair the alignment variable is denoted by a aj1the unknown parameters θ are determined by maximizing the likelihood on the parallel training corpus this optimization can be performed using the expectation maximization algorithm for a given sentence pair there are a large number of alignmentsthe alignment ˆaj1 that has the highest probability is also called the viterbi alignment a detailed comparison of the quality of these viterbi alignments for various statistical alignment models compared to humanmade word alignments can be found in och and ney the baseline alignment model does not allow a source word to be aligned with two or more target wordstherefore lexical correspondences like the german compound word zahnarzttermin for dentists appointment because problems because a single source word must be mapped onto two or more target wordstherefore the resulting viterbi alignment of the standard alignment models has a systematic loss in recallhere we example of a word alignment describe various methods for performing a symmetrization of our directed statistical alignment models by applying a heuristic postprocessing step that combines the alignments in both translation directions figure 2 shows an example of a symmetrized alignmentto solve this problem we train in both translation directionsfor each sentence pair we compute two viterbi alignments aj1 and bi1let a1 f aj 01 and a2 f bi 01 denote the sets of alignments in the two viterbi alignmentsto increase the quality of the alignments we can combine a1 and a2 into one alignment matrix a using one of the following combination methods alignment a1 or in the alignment a2 if neither fj nor ei have an alignment in a or if the following conditions both hold obviously the intersection yields an alignment consisting of only onetoone alignments with a higher precision and a lower recallthe union yields a higher recall and a lower precision of the combined alignmentthe refined alignment method is often able to improve precision and recall compared to the nonsymmetrized alignmentswhether a higher precision or a higher recall is preferred depends on the final application of the word alignmentfor the purpose of statistical mt it seems that a higher recall is more importanttherefore we use the union or the refined combination method to obtain a symmetrized alignment matrixthe resulting symmetrized alignments are then used to train singlewordbased translation lexica p by computing relative frequencies using the count n of how many times e and f are aligned divided by the count n of how many times the word f occurs in this section we present a method for learning relationships between whole phrases of m source language words and n target language wordsthis algorithm which will be called phraseextract takes as input a general word alignment matrix the output is a set of bilingual 
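A sketch of the symmetrization heuristics and of the relative-frequency word lexicon described above; alignments are assumed to be represented as sets of (j, i) links with both directions already mapped into the same orientation, and the refined combination heuristic is omitted for brevity.

```python
from collections import Counter
from typing import Iterable, List, Set, Tuple

Links = Set[Tuple[int, int]]  # (source position j, target position i)

def symmetrize(a_src2tgt: Links, a_tgt2src: Links, method: str = "union") -> Links:
    """Combine the Viterbi alignments of the two translation directions."""
    if method == "intersection":  # one-to-one links only: higher precision, lower recall
        return a_src2tgt & a_tgt2src
    if method == "union":         # higher recall; preferred here for phrase extraction
        return a_src2tgt | a_tgt2src
    raise ValueError("the 'refined' heuristic (growing the intersection by "
                     "adjacent links) is not shown in this sketch")

def word_lexicon(corpus: Iterable[Tuple[List[str], List[str], Links]]) -> dict:
    """Relative-frequency lexicon p(e | f): aligned count N(e, f) divided by
    the number of occurrences N(f) of the source word f."""
    pair_counts, f_counts = Counter(), Counter()
    for f_words, e_words, links in corpus:
        f_counts.update(f_words)
        for j, i in links:
            pair_counts[(e_words[i], f_words[j])] += 1
    return {(e, f): c / f_counts[f] for (e, f), c in pair_counts.items()}
```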
phrasesin the following we describe the criterion that defines the set of phrases that is consistent with the word alignment matrix hence the set of all bilingual phrases that are consistent with the alignment is constituted by all bilingual phrase pairs in which all words within the source language phrase are aligned only with the words of the target language phrase and the words of the target language phrase are aligned only with the words of the source language phrasenote that we require that at least one word in the source language phrase be aligned with at least one word of the target language phraseas a result there are no empty source or target language phrases that would correspond to the empty word of the wordbased statistical alignment modelsthese phrases can be computed straightforwardly by enumerating all possible phrases in one language and checking whether the aligned words in the other language are consecutive with the possible exception of words that are not aligned at allfigure 3 gives the algorithm phraseextract that computes the phrasesthe algorithm takes into account possibly unaligned words at the boundaries of the source or target language phrasestable 1 shows the bilingual phrases containing between two and seven words that result from the application of this algorithm to the alignment of figure 2examples of two to sevenword bilingual phrases obtained by applying the algorithm phraseextract to the alignment of figure 2 ja yes ja ich yes i ja ich denke mal yes i think ja ich denke mal yes i think ja ich denke mal also yes i think well ich i ich denke mal i think ich denke mal i think ich denke mal also i think well ich denke mal also wir i think well we ich denke mal i think ich denke mal i think ich denke mal also i think well ich denke mal also wir i think well we ich denke mal also wir wollten i think well we plan to denke mal think denke mal also think well denke mal also wir think well we denke mal also wir wollten think well we plan to also well also wir well we also wir wollten well we plan to also wir well we also wir wollten well we plan to wir wollten we plan to in unserer in our in unserer abteilung in our department in unserer abteilung ein neues netzwerk a new network in our department in unserer abteilung ein neues netzwerk set up a new network in our department aufbauen unserer abteilung our department ein neues a new ein neues netzwerk a new network ein neues netzwerk aufbauen set up a new network neues netzwerk new network it should be emphasized that this constraint to consecutive phrases limits the expressive powerif a consecutive phrase in one language is translated into two or three nonconsecutive phrases in the other language there is no corresponding bilingual phrase pair learned by this approachin principle this approach to learning phrases from a wordaligned corpus could be extended straightforwardly to handle nonconsecutive phrases in source and target language as wellinformal experiments have shown that allowing for nonconsecutive phrases significantly increases the number of extracted phrases and especially increases the percentage of wrong phrasestherefore we consider only consecutive phrasesin the following we add generalization capability to the bilingual phrase lexicon by replacing words with word classes and also by storing the alignment information for each phrase pairthese generalized and alignmentannotated phrase pairs are called alignment templatesformally an alignment template z is a triple algorithm phraseextract for extracting 
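A compact version of the consistency check underlying phrase-extract; positions are 0-based here, the maximal source phrase length is a configurable assumption, and the extension to unaligned words at the phrase boundaries (which the original algorithm handles) is left out to keep the sketch short.

```python
from typing import List, Set, Tuple

def extract_phrases(
    f_len: int,
    links: Set[Tuple[int, int]],   # (source position j, target position i)
    max_f_len: int = 7,
) -> List[Tuple[Tuple[int, int], Tuple[int, int]]]:
    """Return source/target spans ((j1, j2), (i1, i2)) such that every link
    touching one span falls inside the other and at least one link connects
    the two spans."""
    phrases = []
    for j1 in range(f_len):
        for j2 in range(j1, min(f_len, j1 + max_f_len)):
            aligned_targets = [i for (j, i) in links if j1 <= j <= j2]
            if not aligned_targets:
                continue
            i1, i2 = min(aligned_targets), max(aligned_targets)
            aligned_sources = [j for (j, i) in links if i1 <= i <= i2]
            if all(j1 <= j <= j2 for j in aligned_sources):
                phrases.append(((j1, j2), (i1, i2)))
    return phrases
```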
phrases from a wordaligned sentence pairhere quasiconsecutive is a predicate that tests whether the set of words tp is consecutive with the possible exception of words that are not aligned that describes the alignment a between a source class sequence fjy1 and a target class sequence ei1 if each word corresponds to one class an alignment template corresponds to a bilingual phrase together with an alignment within this phrasefigure 4 shows examples of alignment templatesthe alignment a is represented as a matrix with j binary elementsa matrix element with value 1 means that the words at the corresponding positions are aligned and the value 0 means that the words are not alignedif a source word is not aligned with a target word then it is aligned with the empty word e0 which is at the imaginary position i 0the classes used in fjy1 and ei1 are automatically trained bilingual classes using the method described in och and constitute a partition of the vocabulary of source and target languagein general we are not limited to disjoint classes as long as each specific instance of a word is disambiguated that is uniquely belongs to a specific classin the following we use the class function c to map words to their classeshence it would be possible to employ partsofspeech or semantic categories instead of the automatically trained word classes used herethe use of classes instead of the words themselves has the advantage of better generalizationfor example if there exist classes in source and target language that contain town names it is possible that an alignment template learned using a specific town name can be generalized to other town namesin the following e and f denote target and source phrases respectivelyto train the probability of applying an alignment template p f we use an extended version of the algorithm phraseextract from section 33all bilingual phrases that are consistent with the alignment are extracted together with the alignment within this bilingual phrasethus we obtain a count n of how often an alignment template occurred in the aligned training corpusthe probability of using an alignment template to translate a specific source language phrase f is estimated by means of relative frequency to reduce the memory requirement of the alignment templates we compute these probabilities only for phrases up to a certain maximal length in the source languagedepending on the size of the corpus the maximal length in the experiments is between four and seven wordsin addition we remove alignment templates that have a probability lower than a certain thresholdin the experiments we use a threshold of 001it should be emphasized that this algorithm for computing aligned phrase pairs and their associated probabilities is very easy to implementthe joint translation model suggested by marcu and wong tries to learn phrases as part of a full them algorithm which leads to very large memory requirements and a rather complicated training algorithma comparison of the two approaches can be found in koehn och and marcu to describe our translation model based on the alignment templates described in the previous section in a formal way we first decompose both the source sentence f1j and the target sentence ei1 into a sequence of phrases note that there are a large number of possible segmentations of a sentence pair into k phrase pairsin the following we will describe the model for a specific segmentationeventually however a model can be described in which the specific segmentation is not known when new text is 
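The relative-frequency estimation and pruning of alignment template probabilities can be pictured as below; the representation of a template as a hashable object and the count table are hypothetical, while the length limit (four to seven source words) and the 0.01 probability threshold are the values given in the text.

```python
from collections import Counter, defaultdict
from typing import Dict, Hashable, Tuple

def template_probabilities(
    counts: Dict[Tuple[Hashable, Tuple[str, ...]], int],  # N(z, f_phrase)
    max_src_len: int = 7,
    threshold: float = 0.01,
) -> Dict[Tuple[str, ...], Dict[Hashable, float]]:
    """p(z | f_phrase) by relative frequency, keeping only source phrases up
    to max_src_len words and templates with probability >= threshold."""
    totals: Counter = Counter()
    for (z, f_phrase), n in counts.items():
        totals[f_phrase] += n
    probs: Dict[Tuple[str, ...], Dict[Hashable, float]] = defaultdict(dict)
    for (z, f_phrase), n in counts.items():
        if len(f_phrase) > max_src_len:
            continue
        p = n / totals[f_phrase]
        if p >= threshold:
            probs[f_phrase][z] = p
    return dict(probs)
```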
translatedhence as part of the overall search process we also search for the optimal segmentationto allow possible reordering of phrases we introduce an alignment on the phrase level πk1 between the source phrases f1k and the target phrases ek1hence πk1 is a permutation of the phrase positions 1 k and indicates that the phrases ek and fπk are translations of one anotherwe assume that for the translation between these phrases a specific alignment template zk is used ek zk fπk hence our model has the following hidden variables figure 5 gives an example of the word alignment and phrase alignment of a germanenglish sentence pairwe describe our model using a loglinear modeling approachhence all knowledge sources are described as feature functions that include the given source language string f j1 the target language string ei1 and the abovestated hidden variableshence we have the following functional form of all feature functions figure 6 gives an overview of the decisions made in the alignment template modelfirst the source sentence words fj1 are grouped into phrases f1kfor each phrase f an alignment template z is chosen and the sequence of chosen alignment templates is reordered then every phrase f produces its translation e finally the sequence of phrases ek1 constitutes the sequence of words ei1dependencies in the alignment template modeloch and ney the alignment template approach to statistical machine translation 411 alignment template selectionto score the use of an alignment template we use the probability p defined in section 3we establish a corresponding feature here jπk1 1 is the position of the first word of alignment template zk in the source language sentence and jπk is the position of the last word of that alignment templatenote that this feature function requires that a translation of a new sentence be composed of a set of alignment templates that covers both the source sentence and the produced translationthere is no notion of empty phrase that corresponds to the empty word in wordbased statistical alignment modelsthe alignment on the phrase level is actually a permutation and no insertions or deletions are allowed412 word selectionfor scoring the use of target language words we use a lexicon probability p which is estimated using relative frequencies as described in section 32the target word e depends on the aligned source wordsif we denote the resulting word alignment matrix by a aπkak and the predicted word class for word for p a we use a uniform mixture of a singleword model p which is constrained to predict only words that are in the predicted word class ei a disadvantage of this model is that the word order is ignored in the translation modelthe translations the day after tomorrow or after the day tomorrow for the german word ubermorgen receive an identical probabilityyet the first one should obtain a significantly higher probabilityhence we also include a dependence on the word positions in the lexicon model p here a is 1 if a and 0 otherwiseas a result the word ei depends not only on the aligned french word fj but also on the number of preceding french words aligned with ei and on the number of the preceding english words aligned with fjthis model distinguishes the positions within a phrasal translationthe number of parameters of p is significantly higher than that of p alonehence there is a data estimation problem especially for words that rarely occurtherefore we linearly interpolate the models p and p very often a monotone alignment is a correct alignmenthence the 
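The alignment template selection feature then scores a hypothesized segmentation by summing log template probabilities, roughly as in this sketch (the data structures are the hypothetical ones from the previous sketch):

```python
import math
from typing import Hashable, List, Tuple

def h_alignment_template(
    probs,                                                  # output of template_probabilities()
    segmentation: List[Tuple[Hashable, Tuple[str, ...]]],   # (z_k, source phrase k)
) -> float:
    """Sum over the K applied templates of log p(z_k | source phrase k)."""
    return sum(math.log(probs[f_phrase][z]) for z, f_phrase in segmentation)
```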
feature function hal measures the amount of nonmonotonicity by summing over the distance of alignment templates that are consecutive in the target language here jπ0 is defined to equal 0 and jπk11 is defined to equal jthe abovestated sum includes k k 1 to include the distance from the end position of the last phrase to the end of sentencethe sequence of k 6 alignment templates in figure 5 corresponds to the following sum of seven jump distances 0 0 1 3 2 0 0 6414 language model featuresas a default language model feature we use a standard backingoff wordbased trigram language model the use of the language model feature in equation helps take longrange dependencies better into accountwithout this feature we typically observe that the produced sentences tend to be too short416 conventional lexiconwe also use a feature that counts how many entries of a conventional lexicon cooccur in the given sentence pairtherefore the weight for the provided conventional dictionary can be learned the intuition is that the conventional dictionary lex is more reliable than the automatically trained lexicon and therefore should get a larger weight417 additional featuresa major advantage of the loglinear modeling approach used is that we can add numerous features that deal with specific problems of the baseline statistical mt systemhere we will restrict ourselves to the described set of featuresyet we could use grammatical features that relate certain grammatical dependencies of source and target languagefor example using a function k that counts how many arguments the main verb of a sentence has in the source or target sentence we can define the following feature which has a nonzero value if the verb in each of the two sentences has the same number of arguments in the same way we can introduce semantic features or pragmatic features such as the dialogue act classificationfor the three different tasks on which we report results we use two different training approachesfor the verbmobil task we train the model parameters λm1 according to the maximum class posterior probability criterion for the french english hansards task and the chineseenglish nist task we simply tune the model parameters by coordinate descent on heldout data with respect to the automatic evaluation metric employed using as a starting point the model parameters obtained on the verbmobil tasknote that this tuning depends on the starting point of the model parameters and is not guaranteed to converge to the global optimum on the training dataas a result this approach is limited to a very small number of model parametersan efficient algorithm for performing this tuning for a larger number of model parameters can be found in och a standard approach to training the loglinear model parameters of the maximum class posterior probability criterion is the gis algorithm to apply this algorithm we have to solve various practical problemsthe renormalization needed in equation requires a sum over many possible sentences for which we do not know of an efficient algorithmhence we approximate this sum by extracting a large set of highly probable sentences as a sample from the space of all possible sentences the set of considered sentences is computed by means of an appropriately extended version of the search algorithm described in section 5using an nbest approximation we might face the problem that the parameters trained with the gis algorithm yield worse translation results even on the training corpusthis can happen because with the modified model scaling 
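The reordering feature can be sketched as a sum of jump distances; the phrase spans are assumed to be 1-based inclusive source positions given in the order in which the phrases are produced in the target language, the monotone case contributes zero, and the worked example above (0 + 0 + 1 + 3 + 2 + 0 + 0 = 6) can be reproduced this way.

```python
from typing import List, Tuple

def h_alignment(phrase_spans: List[Tuple[int, int]], source_len: int) -> float:
    """Sum of K + 1 jump distances: for each produced phrase, the distance
    between its first source position and the position just after the end of
    the previously produced phrase, plus a final jump to the sentence end."""
    total, prev_end = 0, 0
    for start, end in phrase_spans:
        total += abs(start - (prev_end + 1))
        prev_end = end
    total += abs((source_len + 1) - (prev_end + 1))  # jump to end of sentence
    return float(total)
```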
factors the nbest list can change significantly and can include sentences that have not been taken into account in trainingusing these sentences the new model parameters might perform worse than the old model parametersto avoid this problem we proceed as followsin a first step we perform a search compute an nbest list and use this nbest list to train the model parameterssecond we use the new model parameters in a new search and compute a new nbest list which is combined with the existing nbest listthird using this extended nbest list new model parameters are computedthis process is iterated until the resulting nbest list does not changein this algorithm convergence is guaranteed as in the limit the nbest list will contain all possible translationsin practice the algorithm converges after five to seven iterationsin our experiments this final nbest list contains about 5001000 alternative translationswe might have the problem that none of the given reference translations is part of the nbest list because the nbest list is too small or because the search algorithm performs pruning which in principle limits the possible translations that can be produced given a certain input sentenceto solve this problem we define as reference translation for maximumentropy training each sentence that has the minimal number of word errors with respect to any of the reference translations in the nbest listmore details of the training procedure can be found in och and ney in this section we describe an efficient search architecture for the alignment template modelin general the search problem for statistical mt even using only model 1 of brown et al is npcomplete therefore we cannot expect to develop efficient search algorithms that are guaranteed to solve the problem without search errorsyet for practical applications it is acceptable to commit some search errors hence the art of developing a search algorithm lies in finding suitable approximations and heuristics that allow an efficient search without committing too many search errorsin the development of the search algorithm described in this section our main aim is that the search algorithm should be efficientit should be possible to translate a sentence of reasonable length within a few seconds of computing timewe accept that the search algorithm sometimes results in search errors as long as the impact on translation quality is minoryet it should be possible to reduce the number of search errors by increasing computing timein the limit it should be possible to search without search errorsthe search algorithm should not impose any principal limitationswe also expect that the search algorithm be able to scale up to very long sentences with an acceptable computing timeto meet these aims it is necessary to have a mechanism that restricts the search effortwe accomplish such a restriction by searching in a breadthfirst manner with pruning beam searchin pruning we constrain the set of considered translation candidates only to the promising oneswe compare in beam search those hypotheses that cover different parts of the input sentencethis makes the comparison of the probabilities problematictherefore we integrate an admissible estimation of the remaining probabilities to arrive at a complete translation as does the original ibm stack search decoder all these simplifications ultimately make the search problem simpler but introduce fundamental search errorsin the following we describe our search algorithm based on the concept of beam search which allows a tradeoff 
between efficiency and quality by adjusting the size of the beamthe search algorithm can be easily adapted to other phrasebased translation modelsfor singlewordbased search in mt a similar algorithm has been described in tillmann and ney putting everything together and performing search in maximum approximation we obtain the following decision rule using the four feature functions at al wrd and lm we obtain the following decision rule3 here we have grouped the contributions of the various feature functions into those for each word those for every alignment template and those for the end of sentence which includes a term logp for the endofsentence language model probabilityto extend this decision rule for the word penalty feature function we simply obtain an additional term awp for each wordthe classbased 5gram language model can be included like the trigram language modelnote that all these feature functions decompose nicely into contributions for each produced target language word or for each covered source language wordthis makes it possible to develop an efficient dynamic programming search algorithmnot all feature functions have this nice property for the conventional lexicon feature function we obtain an additional term in our decision rule which depends on the full sentencetherefore this feature function will not be integrated in the dynamic programming search but instead will be used to rerank the set of candidate translations produced by the searchwe have to structure the search space in a suitable way to search efficientlyin our search algorithm we generate search hypotheses that correspond to prefixes of target language sentenceseach hypothesis is the translation of a part of the source language sentencea hypothesis is extended by appending one target wordthe set of all hypotheses can be structured as a graph with a source node representing the sentence start goal nodes representing complete translations and intermediate nodes representing partial translationsthere is a directed edge between hypotheses n1 and n2 if the hypothesis n2 is obtained by appending one word to hypothesis n1each edge has associated costs resulting from the contributions of all feature functionsfinally our search problem can be reformulated as finding the optimal path through this graph in the first step we determine the set of all source phrases in f for which an applicable alignment template existsevery possible application of an alignment template z to a subsequence f jj1 of the source sentence is called an alignment j template instantiation z hence the set of all alignment template instantiations for the source sentence fj1 is if the source sentence contains words that have not been seen in the training data we introduce a new alignment template that performs a onetoone translation of each of these words by itselfin the second step we determine a set of probable target language words for each target word position in the alignment template instantiationonly these words are then hypothesized in the searchwe call this selection of highly probable words observation pruning as a criterion for a word e at position i in the alignment template instantiation we use in our experiments we hypothesize only the five bestscoring wordsa decision is a triple d consisting of an alignment template instantiation z the generated word e and the index l of the generated word in za hypothesis n corresponds to a valid sequence of decisions di1the possible decisions are as follows the resulting decision score corresponds to 
the contribution of expression any valid and complete sequence of decisions di1 1 uniquely corresponds to a certain translation ei1 a segmentation into k phrases a phrase alignment πk1 and a sequence of alignment template instantiations zk1 the sum of the decision scores is equal to the corresponding score described in expressions a straightforward representation of all hypotheses would be the prefix tree of all possible sequences of decisionsobviously there would be a large redundancy in this search space representation because there are many search nodes that are indistinguishable in the sense that the subtrees following these search nodes are identicalwe can recombine these identical search nodes that is we have to maintain only the most probable hypothesis in general the criterion for recombining a set of nodes is that the hypotheses can be distinguished by neither language nor translation modelin performing recombination algorithm for breadthfirst search with pruning we obtain a search graph instead of a search treethe exact criterion for performing recombination for the alignment templates is described in section 55theoretically we could use any graph search algorithm to search the optimal path in the search spacewe use a breadthfirst search algorithm with pruningthis approach offers very good possibilities for adjusting the tradeoff between quality and efficiencyin pruning we always compare hypotheses that have produced the same number of target wordsfigure 7 shows a structogram of the algorithmas the search space increases exponentially it is not possible to explicitly represent ittherefore we represent the search space implicitly using the functions extend and recombinethe function extend produces new hypotheses extending the current hypothesis by one wordsome hypotheses might be identical or indistinguishable by the language and translation modelsthese are recombined by the function recombinewe expand the search space such that only hypotheses with the same number of target language words are recombinedin the pruning step we use two different types of pruningfirst we perform pruning relative to the score qˆ of the current best hypothesiswe ignore all hypotheses that have a probability lower than logˆq where tp is an adjustable pruning parameterthis type of pruning can be performed when the hypothesis extensions are computedsecond in histogram pruning we maintain only the best np hypothesesthe two pruning parameters tp and np have to be optimized with respect to the tradeoff between efficiency and qualityin this section we describe various issues involved in performing an efficient implementation of a search algorithm for the alignment template approacha very important design decision in the implementation is the representation of a hypothesistheoretically it would be possible to represent search hypotheses only by the associated decision and a backpointer to the previous hypothesisyet this would be a very inefficient representation for the implementation of the operations that have to be performed in the searchthe hypothesis representation should contain all information required to perform efficiently the computations needed in the search but should contain no more information than that to keep the memory consumption smallin search we produce hypotheses n each of which contains the following information we compare in beam search those hypotheses that cover different parts of the input sentencethis makes the comparison of the probabilities problematictherefore we integrate an 
admissible estimation of the remaining probabilities to arrive at a complete translationdetails of the heuristic function for the alignment templates are provided in the next sectionto improve the comparability of search hypotheses we introduce heuristic functionsa heuristic function estimates the probabilities of reaching the goal node from a certain search nodean admissible heuristic function is always an optimistic estimate that is for each search node the product of edge probabilities of reaching a goal node is always equal to or smaller than the estimated probabilityfor an abased search algorithm a good heuristic function is crucial to being able to translate long sentencesfor a beam search algorithm the heuristic function has a different motivationit is used to improve the scoring of search hypothesesthe goal is to make the probabilities of all hypotheses more comparable in order to minimize the chance that the hypothesis leading to the optimal translation is pruned awayheuristic functions for search in statistical mt have been used in wang and waibel and och ueffing and ney wang and waibel have described a simple heuristic function for model 2 of brown et al that was not admissibleoch ueffing and ney have described an admissible heuristic function for model 4 of brown et al and an almostadmissible heuristic function that is empirically obtainedwe have to keep in mind that a heuristic function is helpful only if the overhead introduced in computing the heuristic function is more than compensated for by the gain obtained through a better pruning of search hypothesesthe heuristic functions described in the following are designed such that their computation can be performed efficientlythe basic idea for developing a heuristic function for an alignment model is that all source sentence positions that have not been covered so far still have to be translated to complete the sentenceif we have an estimation rx of the optimal score for translating position j then the value of the heuristic function rx for a node n can be inferred by summing over the contribution for every position j that is not in the coverage vector c that assigns to every alignment template instantiation z a maximal probabilityusing r we can induce a positiondependent heuristic function r here j denotes the number of source language words produced by the alignment template instantiation z and j denotes the position of the first source language wordit can be easily shown that if r is admissible then r is also admissiblewe have to show that for all nonoverlapping sequences zk1 the following holds here k denotes the phrase index k that includes the target language word position jin the following we develop various heuristic functions r of increasing complexitythe simplest realization of a heuristic function r takes into account only the prior probability of an alignment template instantiation the language model can be incorporated by considering that for each target word there exists an optimal language model probability here we assume a trigram language modelin general it is necessary to maximize over all possible different language model historieswe can also combine the language model and the lexicon model into one heuristic function to include the phrase alignment probability in the heuristic function we compute the minimum sum of all jump widths that is needed to complete the translationthis sum can be computed efficiently using the algorithm shown in figure 8then an admissible heuristic function for the jump width is 
obtained by combining all the heuristic functions for the various models we obtain as final heuristic function for a search hypothesis nwe present results on the verbmobil task which is a speech translation task in the domain of appointment scheduling travel planning and hotel reservation table 2 shows the corpus statistics for this taskwe use a training corpus which is used to train the alignment template model and the language models a development corpus which is used to estimate the model scaling factors and a test corpuson average 332 reference translations for the development corpus and 514 reference translations for the test corpus are useda standard vocabulary had been defined for the various speech recognizers used in verbmobilhowever not all words of this vocabulary were observed in the training corpustherefore the translation vocabulary was extended semiautomatically by adding about 13000 germanenglish entries from an online bilingual lexicon available on the webthe resulting lexicon contained not only wordword entries but also multiword translations especially for the large number of german compound wordsto counteract the sparseness of the training data a couple of straightforward rulebased preprocessing steps were applied before any other type of processing so far in machine translation research there is no generally accepted criterion for the evaluation of experimental resultstherefore we use various criteriain the following experiments we use in the following we analyze the effect of various system components alignment template length search pruning and language model ngram sizea systematic evaluation of the alignment template system comparing it with other translation approaches has been performed in the verbmobil project and is described in tessiore and von hahn there the alignmenttemplatebased system achieved a significantly larger number of approximately correct translations than the competing translation systems 611 effect of alignment template lengthtable 3 shows the effect of constraining the maximum length of the alignment templates in the source languagetypically it is necessary to restrict the alignment template length to keep memory requirements lowwe see that using alignment templates with only one or two words in the source languages results in very bad translation qualityyet using alignment templates with lengths as small as three words yields optimal results algorithm misses the most probable translation and produces a translation which is less probableas we typically cannot efficiently compute the probability of the optimal translation we cannot efficiently compute the number of search errorsyet we can compute a lower bound on the number of search errors by comparing the translation found under specific pruning thresholds with the best translation that we have found using very conservative pruning thresholdstables 4 and 5 show the effect of the pruning parameter tp with the histogram pruning parameter np 50000tables 6 and 7 show the effect of the pruning parameter np with the pruning parameter tp 1012in all four tables we provide the results for using no heuristic functions and three variants of an increasingly informative heuristic functionthe first is an estimate of the alignment template and the lexicon probability the second adds an estimate of the language model probability and the third also adds the alignment probability these heuristic functions are described in section 56without a heuristic function even more than a hundred seconds per sentence cannot 
guarantee searcherrorfree translationwe draw the conclusion that a good heuristic function is very important to obtaining an efficient search algorithmin addition the search errors have a more severe effect on the error rates if we do not use a heuristic functionif we compare the error rates in table 7 which correspond to about 55 search errors in table 6 we obtain an mwer of 367 using no heuristic function and an mwer of 326 using the combined heuristic functionthe reason is that without a heuristic function often the easy part of the input sentence is translated firstthis yields severe reordering errors ngrambased language modelsideally we would like to take into account longrange dependenciesyet long ngrams are seen rarely and are therefore rarely used on unseen datatherefore we expect that extending the history length will at some point not improve further translation qualitytable 8 shows the effect of the length of the language model history on translation qualitywe see that the language model perplexity improves from 4781 for a unigram model to 299 for a trigram modelthe corresponding translation quality improves from an mwer of 459 to an mwer of 318the largest effect seems to come from taking into account the bigram dependence which achieves an mwer of 329if we perform loglinear interpolation of a trigram model with a classbased 5gram model we observe an additional small improvement in translation quality to an mwer of 309the hansards task involves the proceedings of the canadian parliament which are kept by law in both french and englishabout three million parallel sentences of this bilingual data have been made available by the linguistic data consortium here we use a subset of the data containing only sentences of up to 30 wordstable 9 shows the training and test corpus statisticsthe results for french to english and for english to french are shown in table 10because of memory limitations the maximum alignment template length has been restricted to four wordswe compare here against the singlewordbased search for model 4 described in tillmann we see that the alignment template approach obtains significantly better results than the singlewordbased searchvarious statistical examplebased and rulebased mt systems for a chineseenglish news domain were evaluated in the nist 2002 mt evaluation4 using the alignment template approach described in this article we participated in these evaluationsthe problem domain is the translation of chinese news text into englishtable 11 gives an overview on the training and test datathe english vocabulary consists of fullform words that have been converted to lowercase lettersthe number of sentences has been artificially increased by adding certain parts of the original training material more than once to the training corpus in order to give larger weight to those parts of the training corpus that consist of highquality aligned chinese news text and are therefore expected to be especially helpful for the translation of the test datathe chinese language poses special problems because the boundaries of chinese words are not markedchinese text is provided as a sequence of characters and it is unclear which characters have to be grouped together to obtain entities that can be interpreted as wordsfor statistical mt it would be possible to ignore this fact and treat the chinese characters as elementary units and translate them into englishyet preliminary experiments showed that the existing alignment models produce better results if the chinese characters are 
segmented in a preprocessing step into single wordswe use the ldc segmentation tool5 for the english corpus the following preprocessing steps are appliedfirst the corpus is tokenized it is then segmented into sentences and all uppercase characters are converted to lowercaseas the final evaluation criterion does not distinguish case it is not necessary to deal with the case informationthen the preprocessed chinese and english corpora are sentence aligned in which the lengths of the source and target sentences are significantly differentfrom the resulting corpus we automatically replace translationsin addition only sentences with less than 60 words in english and chinese are usedto improve the translation of chinese numbers we use a categorization of chinese number and date expressionsfor the statistical learning all number and date expressions are replaced with one of two generic symbols number or datethe number and date expressions are subjected to a rulebased translation by simple lexicon lookupthe translation of the number and date expressions is inserted into the output using the alignment informationfor chinese and english this categorization is implemented independently of the other languageto evaluate mt quality on this task nist made available the nist09 evaluation toolthis tool provides a modified bleu score by computing a weighted precision of ngrams modified by a length penalty for very short translationstable 12 shows the results of the official evaluation performed by nist in june 2002with a score of 765 the results obtained were statistically significantly better than any other competing approachdifferences in the nist score larger than 012 are statistically significant at the 95 levelwe conclude that the developed alignment template approach is also applicable to unrelated language pairs such as chineseenglish and that the developed statistical models indeed seem to be largely languageindependenttable 13 shows various example translationswe have presented a framework for statistical mt for natural languages which is more general than the widely used sourcechannel approachit allows a baseline mt been achieved in 1995 in the economic construction of chinas fourteen border cities open to foreignerstranslation xinhua news agency beijing february 12chinas opening up to the outside world of the 1995 in the fourteen border pleased to obtain the construction of the economyreference foreign investment in jiangsus agriculture on the increase translation to increase the operation of foreign investment in jiangsu agriculture reference according to the data provided today by the ministry of foreign trade and economic cooperation as of november this year china has actually utilized 46959 billion us dollars of foreign capital including 40007 billion us dollars of direct investment from foreign businessmentranslation the external economic and trade cooperation department today provided that this year the foreign capital actually utilized by china on november to us 46959 billion including of foreign company direct investment was us 40007 billionreference according to officials from the provincial department of agriculture and forestry of jiangsu the threecapital ventures approved by agencies within the agricultural system of jiangsu province since 1994 have numbered more than 500 and have utilized over 700 million us dollars worth of foreign capital respectively three times and seven times more than in 1993translation jiangsu province for the secretaries said that from the 1994 years jiangsu 
province system the approval of the threefunded enterprises there are more than 500 foreign investment utilization rate of more than us 700 million 1993 years before three and sevenreference the actual amount of foreign capital has also increased more than 30 as compared with the same period last yeartranslation the actual amount of foreign investment has increased by more than 30 compared with the same period last yearreference import and export in pudong new district exceeding 9 billion us dollars this year translation foreign trade imports and exports of this year to the pudong new region exceeds us 9 billion system to be extended easily by adding new feature functionswe have described the alignment template approach for statistical machine translation which uses two different alignment levels a phraselevel alignment between phrases and a wordlevel alignment between single wordsas a result the context of words has a greater influence and the changes in word order from source to target language can be learned explicitlyan advantage of this method is that machine translation is learned fully automatically through the use of a bilingual training corpuswe have shown that the presented approach is capable of achieving better translation results on various tasks compared to other statistical examplebased or rulebased translation systemsthis is especially interesting as our system is structured simpler than many competing systemswe expect that better translation can be achieved by using models that go beyond the flat phrase segmentation that we perform in our modela promising avenue is to gradually extend the model to take into account to some extent the recursive structure of natural languages using ideas from wu and wong or alshawi bangalore and douglas we expect other improvements as well from learning nonconsecutive phrases in source or target language and from better generalization methods for the learnedphrase pairsthe work reported here was carried out while the first author was with the lehrstuhl fyou are informatik vi computer science department rwth aachenuniversity of technology
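To make the log-linear decision rule described above concrete, here is a minimal sketch of how a candidate translation can be scored as a weighted sum of feature-function values, with the highest-scoring candidate selected. The Candidate container, the particular feature functions, and the weight values are illustrative stand-ins, not the paper's actual data structures or tuned model scaling factors.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Candidate:
    target_words: List[str]       # the produced target word sequence e_1..e_I
    template_scores: List[float]  # log score of each alignment template used
    jump_distances: List[int]     # jump width between consecutive templates


def loglinear_score(cand: Candidate,
                    features: Dict[str, Callable[[Candidate], float]],
                    weights: Dict[str, float]) -> float:
    """Weighted sum of feature-function values in the log domain."""
    return sum(weights[name] * h(cand) for name, h in features.items())


def decide(candidates: List[Candidate],
           features: Dict[str, Callable[[Candidate], float]],
           weights: Dict[str, float]) -> Candidate:
    """Decision rule: return the candidate with the highest log-linear score."""
    return max(candidates, key=lambda c: loglinear_score(c, features, weights))


# Illustrative stand-ins for the alignment template, alignment, and word
# penalty feature functions; the weight values are arbitrary.
features = {
    "alignment_template": lambda c: sum(c.template_scores),
    "alignment": lambda c: -float(sum(c.jump_distances)),  # non-monotonicity penalty
    "word_penalty": lambda c: float(len(c.target_words)),
}
weights = {"alignment_template": 1.0, "alignment": 0.5, "word_penalty": -0.2}
```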
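The beam search described above compares hypotheses that have produced the same number of target words and restricts the search with two adjustable parameters: a threshold t_p relative to the score of the current best hypothesis, and a histogram limit n_p. The sketch below shows one way such a pruning step could look; the Hypothesis fields are placeholders for the state a real decoder would carry.

```python
import math
from typing import List, NamedTuple


class Hypothesis(NamedTuple):
    log_score: float       # accumulated log probability plus rest-cost estimate
    num_target_words: int  # hypotheses are only compared within the same count
    # a full decoder state would also carry the coverage vector, the language
    # model history, the current alignment template instantiation, and a
    # back-pointer to the previous hypothesis


def prune(hyps: List[Hypothesis], t_p: float, n_p: int) -> List[Hypothesis]:
    """Apply threshold pruning (relative to the best hypothesis in the group)
    and histogram pruning (keep at most n_p hypotheses). The caller is
    expected to pass hypotheses that have produced the same number of
    target language words."""
    if not hyps:
        return hyps
    best = max(h.log_score for h in hyps)
    # threshold pruning: drop hypotheses whose probability is below t_p * best
    kept = [h for h in hyps if h.log_score >= best + math.log(t_p)]
    # histogram pruning: keep only the n_p best-scoring hypotheses
    kept.sort(key=lambda h: h.log_score, reverse=True)
    return kept[:n_p]
```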
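The admissible heuristic functions above are built from an optimistic per-position estimate for every source position that a hypothesis has not yet covered. A minimal sketch under the assumption that a table of best per-position scores over all alignment template instantiations has been precomputed; the TemplateInstantiation fields are hypothetical simplifications of the quantities used in the paper.

```python
from dataclasses import dataclass
from typing import Dict, Iterable, Set


@dataclass
class TemplateInstantiation:
    positions: Set[int]       # source positions j covered by this instantiation
    per_pos_log_score: float  # optimistic per-position log score (hypothetical field)


def per_position_best(instantiations: Iterable[TemplateInstantiation]) -> Dict[int, float]:
    """For every source position j, take the maximum over all alignment
    template instantiations covering j of their optimistic score."""
    best: Dict[int, float] = {}
    for inst in instantiations:
        for j in inst.positions:
            best[j] = max(best.get(j, float("-inf")), inst.per_pos_log_score)
    return best


def rest_cost(uncovered: Set[int], best: Dict[int, float]) -> float:
    """Admissible rest-cost estimate for a hypothesis: sum the optimistic
    score of every source position that is not yet covered."""
    return sum(best[j] for j in uncovered)
```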
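The iterative n-best training procedure (search, extend the n-best list, re-estimate the model scaling factors, and repeat until the n-best list stops changing) can be sketched as below. The decode_nbest and train_on_nbest callables are assumptions standing in for the search and for the GIS / maximum class posterior parameter estimation, respectively; they are not the paper's actual interfaces.

```python
from typing import Callable, Dict, List, Sequence, Tuple


def train_loglinear_weights(
    corpus: Sequence[Tuple[str, List[str]]],   # (source sentence, reference translations)
    initial_weights: Dict[str, float],
    decode_nbest: Callable[[str, Dict[str, float]], List[str]],
    train_on_nbest: Callable[[Dict[int, List[str]], List[List[str]]], Dict[str, float]],
) -> Dict[str, float]:
    """Sketch of the iterative n-best training loop described above."""
    weights = dict(initial_weights)
    nbest: Dict[int, List[str]] = {i: [] for i in range(len(corpus))}
    while True:
        changed = False
        for i, (source, _references) in enumerate(corpus):
            candidates = decode_nbest(source, weights)
            merged = list(dict.fromkeys(nbest[i] + candidates))  # union, order kept
            if len(merged) != len(nbest[i]):
                changed = True
            nbest[i] = merged
        if not changed:            # the n-best lists no longer grow: stop iterating
            return weights
        weights = train_on_nbest(nbest, [refs for _, refs in corpus])
```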
J04-4002
The Alignment Template Approach to Statistical Machine Translation. A phrase-based statistical machine translation approach, the alignment template approach, is described. This translation approach allows for general many-to-many relations between words; thereby the context of words is taken into account in the translation model, and local changes in word order from source to target language can be learned explicitly. The model is described using a log-linear modeling approach, which is a generalization of the often-used source-channel approach; as a result, the model is easier to extend than classical statistical machine translation systems. We describe in detail the process for learning phrasal translations, the feature functions used, and the search algorithm. The evaluation of this approach is performed on three different tasks. For the German-English speech Verbmobil task, we analyze the effect of various system components. On the French-English Canadian Hansards task, the alignment template system obtains significantly better results than a single-word-based translation model. In the Chinese-English 2002 National Institute of Standards and Technology machine translation evaluation, it yields statistically significantly better NIST scores than all competing research and commercial translation systems. We describe a phrase-extract algorithm for extracting phrase pairs from a sentence pair annotated with a 1-best alignment.
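The summary mentions a phrase-extract algorithm for collecting phrase pairs from a sentence pair annotated with a 1-best word alignment. The sketch below implements the standard consistency criterion (no word inside either phrase may be aligned to a word outside the other phrase); the details of the paper's own phrase-extract algorithm may differ, and the toy example at the bottom is invented for illustration.

```python
from typing import List, Set, Tuple

Alignment = Set[Tuple[int, int]]   # set of (source_index, target_index) links


def extract_phrase_pairs(src: List[str], tgt: List[str], align: Alignment,
                         max_len: int = 4) -> List[Tuple[str, str]]:
    """Extract phrase pairs consistent with a 1-best word alignment."""
    pairs = []
    for j1 in range(len(src)):
        for j2 in range(j1, min(j1 + max_len, len(src))):
            # target positions linked to the source span [j1, j2]
            tgt_pos = [i for (j, i) in align if j1 <= j <= j2]
            if not tgt_pos:
                continue
            i1, i2 = min(tgt_pos), max(tgt_pos)
            # consistency: no link from inside [i1, i2] back to outside [j1, j2]
            if any(i1 <= i <= i2 and not (j1 <= j <= j2) for (j, i) in align):
                continue
            pairs.append((" ".join(src[j1:j2 + 1]), " ".join(tgt[i1:i2 + 1])))
    return pairs


# Toy German-English example (invented alignment)
src = ["was", "hast", "du", "gesagt"]
tgt = ["what", "did", "you", "say"]
align = {(0, 0), (1, 1), (1, 3), (2, 2), (3, 3)}
print(extract_phrase_pairs(src, tgt, align))
```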
intricacies of collins parsing model m university of pennsylvania this article documents a large set of heretofore unpublished details collins used in his parser such that along with collins thesis this article contains all information necessary to duplicate collins benchmark results indeed these asyetunpublished details account for an 11 relative increase in error from an implementation including all details to a cleanroom implementation of collins model we also show a cleaner and equally wellperforming method for the handling of punctuation and conjunction and reveal certain other probabilistic oddities about collins parser we not only analyze the effect of the unpublished details but also reanalyze the effect of certain wellknown details revealing that bilexical dependencies are barely used by the model and that head choice is not nearly as important to overall parsing performance as once thought finally we perform experiments that show that the true discriminative power of lexicalization appears to lie in the fact that unlexicalized syntactic structures are generated conditioning on the headword and its part of speech this article documents a large set of heretofore unpublished details collins used in his parser such that along with collins thesis this article contains all information necessary to duplicate collins benchmark resultsindeed these asyetunpublished details account for an 11 relative increase in error from an implementation including all details to a cleanroom implementation of collins modelwe also show a cleaner and equally wellperforming method for the handling of punctuation and conjunction and reveal certain other probabilistic oddities about collins parserwe not only analyze the effect of the unpublished details but also reanalyze the effect of certain wellknown details revealing that bilexical dependencies are barely used by the model and that head choice is not nearly as important to overall parsing performance as once thoughtfinally we perform experiments that show that the true discriminative power of lexicalization appears to lie in the fact that unlexicalized syntactic structures are generated conditioning on the headword and its part of speechmichael collins parsing models have been quite influential in the field of natural language processingnot only did they achieve new performance benchmarks on parsing the penn treebank and not only did they serve as the basis of collins own future work but they also served as the basis of important work on parser selection an investigation of corpus variation and the effectiveness of bilexical dependencies sample selection bootstrapping nonenglish parsers and the automatic labeling of semantic roles and predicateargument extraction as well as that of other research effortsrecently in order to continue our work combining word sense with parsing and the study of languagedependent and independent parsing features we built a multilingual parsing engine that is capable of instantiating a wide variety of generative statistical parsing models 1 as an appropriate baseline model we chose to instantiate the parameters of collins model 2this task proved more difficult than it initially appearedstarting with collins thesis we reproduced all the parameters described but did not achieve nearly the same high performance on the wellestablished development test set of section 00 of the penn treebanktogether with collins thesis this article contains all the information necessary to replicate collins parsing results2 specifically this article 
describes all the asyetunpublished details and features of collins model and some analysis of the effect of these features with respect to parsing performance as well as some comparative analysis of the effects of published features3 in particular implementing collins model using only the published details causes an 11 increase in relative error over collins own published resultsthat is taken together all the unpublished details have a significant effect on overall parsing performancein addition to the effects of the unpublished details we also have new evidence to show that the discriminative power of collins model does not lie where once thought bilexical dependencies play an extremely small role in collins models and head choice is not nearly as critical as once thoughtthis article also discusses the rationale for various parameter choicesin general we will limit our discussion to collins model 2 but we make occasional reference to model 3 as wellthere are three primary motivations for this workfirst collins parsing model represents a widely used and cited parsing modelas such if it is not desirable to use it as a black box then it should be possible to replicate the model in full providing a necessary consistency among research efforts employing itcareful examination of its intricacies will also allow researchers to deviate from the original model when they think it is warranted and accurately document those deviations as well as understand the implications of doing sothe second motivation is related to the first science dictates that experiments be replicable for this is the way we may test and validate themthe work described here comes in the wake of several previous efforts to replicate this particular model but this is the first such effort to provide a faithful and equally wellperforming emulation of the originalthe third motivation is that a deep understanding of an existing modelits intricacies the interplay of its many featuresprovides the necessary platform for advancement to newer better modelsthis is especially true in an area like statistical parsing that has seen rapid maturation followed by a soft plateau in performancerather than simply throwing features into a new model and measuring their effect in a crude way using standard evaluation metrics this work aims to provide a more thorough understanding of the nature of a models featuresthis understanding not only is useful in its own right but should help point the way toward newer features to model or better modeling techniques for we are in the best position for advancement when we understand existing strengths and limitations2 in the course of replicating collins results it was brought to our attention that several other researchers had also tried to do this and had also gotten performance that fell short of collins published resultsfor example gildea reimplemented collins model 1 but obtained results with roughly 167 more relative error than collins reported results using that modelthe collins parsing model decomposes the generation of a parse tree into many small steps using reasonable independence assumptions to make the parameter estimation problem tractableeven though decoding proceeds bottomup the model is defined in a topdown mannerevery nonterminal label in every tree is lexicalized the label is augmented to include a unique headword that the node dominatesthe lexicalized pcfg that sits behind model 2 has rules of the form where p l r and h are all lexicalized nonterminals and p inherits its lexical head from 
its distinguished headchild h in this generative model first p is generated then its headchild h then each of the left and rightmodifying nonterminals are generated from the head outwardthe modifying nonterminals l and r are generated conditioning on p and h as well as a distance metric and an incremental subcategorization frame feature note that if the modifying nonterminals were generated completely independently the model would be very impoverished but in actuality because it includes the distance and subcategorization frame features the model captures a crucial bit of linguistic reality namely that words often have welldefined sets of complements and adjuncts occurring with some welldefined distribution in the righthand sides of a rewriting systemthe process proceeds recursively treating each newly generated modifier as a parent and then generating its head and modifier children the process terminates when preterminals are generatedas a way to guarantee the consistency of the model the model also generates two hidden stop nonterminals as the leftmost and rightmost children of every parent to the casual reader of collins thesis it may not be immediately apparent that there are quite a few preprocessing steps for each annotated training tree and that these steps are crucial to the performance of the parserwe identified 11 preprocessing steps necessary to prepare training trees when using collins parsing model the order of presentation in the foregoing list is not arbitrary as some of the steps depend on results produced in previous stepsalso we have separated the steps into their functional units an implementation could combine steps that are independent of one another finally we note that the final step headfinding is actually required by some of the previous steps in certain cases in our implementation we selectively employ a headfinding module during the first 10 steps where necessarya few of the preprocessing steps rely on the notion of a coordinated phrasein this article the conditions under which a phrase is considered coordinated are slightly more detailed than is described in collins thesisa node represents a coordinated phrase if in the penn treebank a coordinating conjunction is any preterminal node with the label ccthis definition essentially picks out all phrases in which the headchild is truly conjoined to some other phrase as opposed to a phrase in which say there is an initial cc such as an s that begins with the conjunction butas a preprocessing step pruning of unnecessary nodes simply removes preterminals that should have little or no bearing on parser performancein the case of the english treebank the pruned subtrees are all preterminal subtrees whose root label is one of there are two reasons to remove these types of subtrees when parsing the english treebank first in the treebanking guidelines quotation marks were given the lowest possible priority and thus cannot be expected to appear within constituent boundaries in any kind of consistent way and second neither of these types of preterminalsnor any punctuation marks for that mattercounts towards the parsing scorean np is basal when it does not itself dominate an np such np nodes are relabeled npbmore accurately an np is basal when it dominates no other nps except possessive nps where a possessive np is an np that dominates pos the preterminal possessive a nonhead npb child of np requires insertion of extra np marker for the penn treebankthese possessive nps are almost always themselves base nps and are therefore 
relabeled npbfor consistencys sake when an np has been relabeled as npb a normal np node is often inserted as a parent nonterminalthis insertion ensures that npb nodes are always dominated by np nodesthe conditions for inserting this extra np level are slightly more detailed than is described in collins thesis howeverthe extra np level is added if one of the following conditions holds in postprocessing when an npb is an only child of an np node the extra np level is removed by merging the two nodes into a single np node and all remaining npb nodes are relabeled npthe insertion of extra np levels above certain npb nodes achieves a degree of consistency for nps effectively causing the portion of the model that generates children of np nodes to have less perplexitycollins appears to have made a similar effort to improve the consistency of the npb modelnpb nodes that have sentential nodes as their final child are repaired the sentential child is raised so that it becomes a new rightsibling of the npb node 6 while such a transformation is reasonable it is interesting to note that collins parser performs no equivalent detransformation when parsing is complete meaning that when the parser produces the repaired structure during testing there is a spurious np bracket7 the gap feature is discussed extensively in chapter 7 of collins thesis and is applicable only to his model 3the preprocessing step in which gap information is added locates every null element preterminal finds its coindexed whnp antecedent higher up in the tree replaces the null element preterminal with a special trace tag and threads the gap feature in every nonterminal in the chain between the common ancestor of the antecedent and the tracethe threadedgap feature is represented by appending g to every node label in the chainthe only detail we would like to highlight here is that an implementation of this preprocessing step should check for cases in which threading is impossible such as when two fillergap dependencies crossan implementation should be able to handle nested fillergap dependencies howeverthe node labels of sentences with no subjects are transformed from s to sgthis step enables the parsing model to be sensitive to the different contexts in which such subjectless sentences occur as compared to normal s nodes since the subjectless sentences are functionally acting as noun phrasescollins example of illustrates the utility of this transformationhowever the conditions under which an s may be relabeled are not spelled out one might assume that every s whose subject dominates a null element should be relabeled sgin actuality the conditions are much stricteran s is relabeled sg when the following conditions hold the latter two conditions appear to be an effort to capture only those subjectless sentences that are based around gerunds as in the flying planes example8 removing null elements simply involves pruning the tree to eliminate any subtree that dominates only null elementsthe special trace tag that is inserted in the step that adds gap information is excluded as it is specifically chosen to be something other than the nullelement preterminal marker the step in which punctuation is raised is discussed in detail in chapter 7 of collins thesisthe main idea is to raise punctuationwhich is any preterminal subtree in which the part of speech is either a comma or a colonto the highest possible point in the tree so that it always sits between two other nonterminalspunctuation that occurs at the very beginning or end of a sentence 
is raised away that is prunedin addition any implementation of this step should handle the case in which multiple punctuation elements appear as the initial or final children of some node as well as the more pathological case in which multiple punctuation elements appear along the left or right frontier of a subtree finally it is not clear what to do with nodes that dominate only punctuation preterminalsour implementation simply issues a warning in such cases and leaves the punctuation symbols untouchedheadchildren are not exempt from being relabeled as argumentscollins employs a small set of heuristics to mark certain nonterminals as arguments by appending a to the nonterminal labelthis section reveals three unpublished details about collins argument finding this step simply involves stripping away all nonterminal augmentations except those that have been added from other preprocessing steps this includes the stripping away of all function tags and indices marked by the treebank annotatorshead moves from right to left conjunct in a coordinated phrase except when the parent nonterminal is npbwith arguments identified as described in section 49 if a subjectless sentence is found to have an argument prior to its head this step detransforms the sg so that it reverts to being an s headfinding is discussed at length in collins thesis and the headfinding rules used are included in his appendix athere are a few unpublished details worth mentioning howeverthere is no headfinding rule for nx nonterminals so the default rule of picking the leftmost child is used10 nx nodes roughly represent the n level of syntax and in practice often denote base npsas such the default rule often picks out a lessthanideal headchild such as an adjective that is the leftmost child in a base npcollins thesis discusses a case in which the initial head is modified when it is found to denote the right conjunct in a coordinated phrasethat is if the head rules pick out a head that is preceded by a cc that is noninitial the head should be modified to be the nonterminal immediately to the left of the cc an important detail is that such head movement does not occur inside base npsthat is a phrase headed by npb may indeed look as though it constitutes a coordinated phraseit has a cc that is noninitial but to the left of the currently chosen headbut the currently chosen head should remain chosen11 as we shall see there is exceptional behavior for base nps in almost every part of the collins parser10 in our first attempt at replicating collins results we simply employed the same headfinding rule for nx nodes as for np nodesthis choice yields differentbut not necessarily inferiorresults11 in section 41 we defined coordinated phrases in terms of heads but here we are discussing how the headfinder itself needs to determine whether a phrase is coordinatedit does this by considering the potential new choice of head if the headfinding rules pick out a head that is preceded by a noninitial cc will moving the head to be a child to the left of the cc yield a coordinated phraseif so then the head should be movedexcept when the parent is npb vi feature is true when generating righthand stop nonterminal because the np the will to continue contains a verbthe trainers job is to decompose annotated training trees into a series of head and modifiergeneration steps recording the counts of each of these stepsreferring to each h li and ri are generated conditioning on previously generated items and each of these events consisting of a generated item 
and some maximal history context is countedeven with all this decomposition sparse data are still a problem and so each probability estimate for some generated item given a maximal context is smoothed with coarser distributions using less context whose counts are derived from these toplevel head and modifiergeneration countsas mentioned in section 3 instead of generating each modifier independently the model conditions the generation of modifiers on certain aspects of the historyone such function of the history is the distance metricone of the two components of this distance metric is what we will call the verb intervening feature which is a predicate vi that is true if a verb has been generated somewhere in the surface string of the previously generated modifiers on the current side of the headfor example in figure 7 when generating the righthand stop nonterminal child of the vp the vi predicate is true because one of the previously generated modifiers on the right side of the head dominates a verb continue12 more formally this feature is most easily defined in terms of a recursively defined cv predicate which is true if and only if a node dominates a verb bikel intricacies of collins parsing model referring to we define the verbintervening predicate recursively on the firstorder markov process generating modifying nonterminals and similarly for right modifierswhat is considered to be a verbwhile this is not spelled out as it happens a verb is any word whose partofspeech tag is one of vb vbd vbg vbn vbp vbzthat is the cv predicate returns true only for these preterminals and false for all other preterminalscrucially this set omits md which is the marker for modal verbsanother crucial point about the vi predicate is that it does not include verbs that appear within base npsput another way in order to emulate collins model we need to amend the definition of cv by stipulating that cv falseone oddity of collins trainer that we mention here for the sake of completeness is that it skips certain training treesfor odd historical reasons13 the trainer skips all trees with more than 500 tokens where a token is considered in this context to be a word a nonterminal label or a parenthesisthis oddity entails that even some relatively short sentences get skipped because they have lots of tree structurein the standard wall street journal training corpus sections 0221 of the penn treebank there are 120 such sentences that are skippedunless there is something inherently wrong with these trees one would predict that adding them to the training set would improve a parsers performanceas it happens there is actually a minuscule drop in performance when these trees are included531 the threshold problemcollins mentions in chapter 7 of his thesis that all words occurring less than 5 times in training data and words in test data which have never been seen in training are replaced with the unknown token the frequency below which words are considered unknown is often called the unknownword thresholdunfortunately this term can also refer to the frequency above which words are considered knownas it happens the unknownword threshold collins uses in his parser for english is six not five14 to be absolutely unambiguous words that occur fewer than six times which is to say words that occur five times or fewer in the data are considered unknown words into the parsing model then is simply to map all lowfrequency words in the training data to some special unknown token before counting toplevel events for parameter estimation 
collins trainer actually does not do thisinstead it does not directly modify any of the words in the original training trees and proceeds to break up these unmodified trees into the toplevel eventsafter these events have been collected 13 this phrase was taken from a comment in one of collins preprocessing perl scripts14 as with many of the discovered discrepancies between the thesis and the implementation we determined the different unknownword threshold through reverse engineering in this case through an analysis of the events output by collins trainer and counted the trainer selectively maps lowfrequency words when deriving counts for the various context levels of the parameters that make use of bilexical statisticsif this mapping were performed uniformly then it would be identical to mapping lowfrequency words prior to toplevel event counting this is not the case howeverwe describe the details of this unknownword mapping in section 692while there is a negligible yet detrimental effect on overall parsing performance when one uses an unknownword threshold of five instead of six when this change is combined with the obvious method for handling unknown words there is actually a minuscule improvement in overall parsing performance all parameters that generate trees in collins model are estimates of conditional probabilitieseven though the following overview of parameter classes presents only the maximal contexts of the conditional probability estimates it is important to bear in mind that the model always makes use of smoothed probability estimates that are the linear interpolation of several raw maximumlikelihood estimates using various amounts of context in sections 45 and 49 we saw how the raw treebank nonterminal set is expanded to include nonterminals augmented with a and g although it is not made explicit in collins thesis collins model uses two mapping functions to remove these augmentations when including nonterminals in the history contexts of conditional probabilitiespresumably this was done to help alleviate sparsedata problemswe denote the argument removal mapping function as alpha and the gap removal mapping function as gammafor example since gap augmentations are present only in model 3 the gamma function effectively is the identity function in the context of models 1 and 2the head nonterminal is generated conditioning on its parent nonterminal label as well as the headword and head tag which they share since parents inherit their lexical head information from their headchildrenmore specifically an unlexicalized head nonterminal label is generated conditioning on the fully lexicalized parent nonterminalwe denote the parameter class as follows when the model generates a headchild nonterminal for some lexicalized parent nonterminal it also generates a kind of subcategorization frame on either side of the headchild with the following maximal context a fully lexicalized treethe vp node is the headchild of s probabilistically it is as though these subcats are generated with the headchild via application of the chain rule but they are conditionally independent15 these subcats may be thought of as lists of requirements on a particular side of a headfor example in figure 8 after the root node of the tree has been generated the head child vp is generated conditioning on both the parent label s and the headword of that parent satvbdbefore any modifiers of the headchild are generated both a left and rightsubcat frame are generatedin this case the left subcat is npa and the right subcat 
is meaning that there are no required elements to be generated on the right side of the headsubcats do not specify the order of the required argumentsthey are dynamically updated multisets when a requirement has been generated it is removed from the multiset and subsequent modifiers are generated conditioning on the updated multiset16 the implementation of subcats in collins parser is even more specific subcats are multisets containing various numbers of precisely six types of items npa sa sbara vpa g and miscellaneousthe g indicates that a gap must be generated and is applicable only to model 3miscellaneous items include all nonterminals that were marked as arguments in the training data that were not any of the other named typesthere are rules for determining whether nps ss sbars and vps are arguments and the miscellaneous arguments occur as the result of the argumentfinding rule for pps which states that the first nonprn nonpartofspeech tag that occurs after the head of a pp should be marked as an argument and therefore nodes that are not one of the four named types can be markedas mentioned above after a headchild and its left and right subcats are generated modifiers are generated from the head outward as indicated by the modifier nonterminal indices in figure 1a fully lexicalized nonterminal has three components the nonterminal label the headword and the headwords part of speechfully lexicalized modifying nonterminals are generated in two steps to allow for the parameters to be independently smoothed which in turn is done to avoid sparsedata problemsthese two steps estimate the joint event of all three components using the chain rulein the a tree containing both punctuation and conjunction first step a partially lexicalized version of the nonterminal is generated consisting of the unlexicalized label plus the part of speech of its headwordthese partially lexicalized modifying nonterminals are generated conditioning on the parent label the head label the headword the head tag the current state of the dynamic subcat and a distance metricsymbolically the parameter classes are where denotes the distance metric17 as discussed above one of the two components of this distance metric is the vi predicatethe other is a predicate that simply reports whether the current modifier is the first modifier being generated that is whether i 1the second step is to generate the headword itself where because of the chain rule the conditioning context consists of everything in the histories of expressions and plus the partially lexicalized modifieras there are some interesting idiosyncrasies with these headwordgeneration parameters we describe them in more detail in section 69651 inconsistent modelas discussed in section 48 punctuation is raised to the highest position in the treethis means that in some sense punctuation acts very much like a coordinating conjunction in that it conjoins the two siblings between which it sitsobserving that it might be helpful for conjunctions to be generated conditioning on both of their conjuncts collins introduced two new parameter classes in his thesis parser ppunc and pcc18 as per the definition of a coordinated phrase in section 41 conjunction via a cc node or a punctuation node always occurs posthead put another way if a conjunction or punctuation mark occurs prehead it is 17 throughout this article we use the notation li to refer to the three items that constitute a fully lexicalized leftmodifying nonterminal which are the unlexicalized label li its headword wli and 
its part of speech tli and similarly for right modifierswe use li to refer to the two items li and tli of a partially lexicalized nonterminalfinally when we do not wish to distinguish between a left and right modifier we use mi mi and mi not generated via this mechanism19 furthermore even if there is arbitrary material between the right conjunct and the head the parameters effectively assume that the left conjunct is always the headchildfor example in figure 9 the rightmost np is considered to be conjoined to the leftmost np which is the headchild even though there is an intervening np the new parameters are incorporated into the model by requiring that all modifying nonterminals be generated with two boolean flags coord indicating that the nonterminal is conjoined to the head via a cc and punc indicating that the nonterminal is conjoined to the head via a punctuation markwhen either or both of these flags is true the intervening punctuation or conjunction is generated via appropriate instances of the ppunpcc parameter classesfor example the model generates the five children in figure 9 in the following order first the headchild is generated which is the leftmost np conditioning on the parent label and the headword and tagthen since modifiers are always generated from the head outward the rightsibling of the head which is the tall trees np is generated with both the punc and cc flags falsethen the rightmost np is generated with both the punc and cc booleans true since it is considered to be conjoined to the headchild and requires the generation of an intervening punctuation mark and conjunctionfinally the intervening punctuation is generated conditioning on the parent the head and the right conjunct including the headwords of the two conjoined phrases and the intervening cc is similarly generateda simplified version of the probability of generating all these children is summarized as follows the idea is that using the chain rule the generation of two conjuncts and that which conjoins them is estimated as one large joint event20 this scheme of using flags to trigger the ppun and pcc parameters is problematic at least from a theoretical standpoint as it causes the model to be inconsistentfigure 10 shows three different trees that would all receive the same probability from collins modelthe problem is that coordinating conjunctions and punctuation are not generated as firstclass words but only as triggered from these punc and coord flags meaning that the number of such intervening conjunctive items is not specifiedso for a given sentencetree pair containing a conjunction andor a punctuation mark there is an infinite number of similar sentencetree pairs with arbitrary amounts of conjunctive material between the same two nodesbecause all of these trees have the same nonzero probability the sum etp where t is a possible tree generated by the model diverges meaning the model is inconsistent another consequence of not generating posthead conjunctions and punctuation as firstclass words is that they the collins model assigns equal probability to these three trees do not count when calculating the headadjacency component of collins distance metricwhen emulating collins model instead of reproducing the ppun and pcc parameter classes directly in our parsing engine we chose to use a different mechanism that does not yield an inconsistent model but still estimates the large joint event that was the motivation behind these parameters in the first place652 history mechanismin our emulation of collins 
model we use the history rather than the dedicated parameter classes pcc and ppun to estimate the joint event of generating a conjunction and its two conjunctsthe first big change that results is that we treat punctuation preterminals and ccs as firstclass objects meaning that they are generated in the same way as any other modifying nonterminalthe second change is a little more involvedfirst we redefine the distance metric to consist solely of the vi predicatethen we add to the conditioning context a mapped version of the previously generated modifier according to the following where mi is some modifier li or ri21 so the maximal context for our modifying nonterminal parameter class is now defined as follows where side is a booleanvalued event that indicates whether the modifier is on the left or right side of the headby treating cc and punctuation nodes as firstclass nonterminals and by adding the mapped version of the previously generated modifier we have in one fell swoop incorporated the no intervening component of collins distance metric and achieved an estimate of the joint event of a conjunction and its conjuncts albeit with different dependencies that is a different application of the chain ruleto put this parameterization change in sharp relief consider the abstract tree structure to a first approximation under the old parameterization the conjunction of some node r1 with a head h and a parent p looked like this ˆph ˆpr ˆpcc whereas under the new parameterization it looks like this either way the probability of the joint conditional event h cc r1 p is being estimated but with the new method there is no need to add two new specialized parameter classes and the new method does not introduce inconsistency into the modelusing less simplification the probability of generating the five children of figure 9 is now 21 originally we had an additional mechanism that attempted to generate punctuation and conjunctions with conditional independenceone of our reviewers astutely pointed out that the mechanism led to a deficient model and so we have subsequently removed it from our modelthe removal leads to a 005 absolute reduction in fmeasure on sentences of length 40 words in section 00 of the penn treebankas this difference is not at all statistically significant all evaluations reported in this article are with the original modelas shown in section 81 this new parameterization yields virtually identical performance to that of the collins model22 as we have already seen there are several ways in which base nps are exceptional in collins parsing modelthis is partly because the flat structure of base nps in the penn treebank suggested the use of a completely different model by which to generate themessentially the model for generating children of npb nodes is a bigrams of nonterminals modelthat is it looks a great deal like a bigram language model except that the items being generated are not words but lexicalized nonterminalsheads of npb nodes are generated using the normal headgeneration parameter but modifiers are always generated conditioning not on the head but on the previously generated modifierthat is we modify expressions and to be though it is not entirely spelled out in his thesis collins considers the previously generated modifier to be the headchild for all intents and purposesthus the subcat and distance metrics are always irrelevant since it is as though the current modifier is right next to the head23 another consequence of this is that npbs are never considered to be coordinated 
phrases and thus ccs dominated by npb are never generated using a pcc parameter instead they are generated using a normal modifyingnonterminal parameterpunctuation dominated by npb on the other hand is still as always generated via ppunc parameters but crucially the modifier is always conjoined to the pseudohead that is the previously generated modifierconsequently when some right modifier ri is generated the previously generated modifier on the right side of the head ri1 is never a punctuation preterminal but always the previous real preterminal24 base nps are also exceptional with respect to determining chart item equality the commapruning rule and general beam pruning two parameter classes that make their appearance only in appendix e of collins thesis are those that compute priors on lexicalized nonterminalsthese priors are used as a crude proxy for the outside probability of a chart item previous work has shown that the inside probability alone is an insufficient scoring metric when comparing chart items covering the same span during decoding and that some estimate of the outside probability of a chart item should be factored into the scorea prior on the root nonterminal label of the derivation forest represented by a particular chart item is used for this purpose in collins parser22 as described in bikel our parsing engine allows easy experimentation with a wide variety of different generative models including the ability to construct history contexts from arbitrary numbers of previously generated modifiersthe mapping function delta and the transition function tau presented in this section are just two examples of this capabilitythe prior of a lexicalized nonterminal m is broken down into two separate estimates using parameters from two new classes ppriorw and ppriornt where ˆp is smoothed with ˆp and estimates using the parameters of the ppriorw class are unsmoothedmany of the parameter classes in collins modeland indeed in most statistical parsing modelsdefine conditional probabilities with very large conditioning contextsin this case the conditioning contexts represent some subset of the history of the generative processeven if there were orders of magnitude more training data available the large size of these contexts would cause horrendous sparsedata problemsthe solution is to smooth these distributions that are made rough primarily by the abundance of zeroscollins uses the technique of deleted interpolation which smoothes the distributions based on full contexts with those from coarser models that use less of the context by successively deleting elements from the context at each backoff levelas a simple example the head parameter class smoothes ph0 with ph1 and ph2for some conditional probability p let us call the reduced context at the ith backoff level oi where typically o0 beach estimate in the backoff chain is computed via maximumlikelihood estimation and the overall smoothed estimate with n backoff levels is computed using n 1 smoothing weights denoted a0 an2these weights are used in a recursive fashion the smoothed version ei pi of an unsmoothed ml estimate ei ˆpi at backoff level i is computed via the formula so for example with three levels of backoff the overall smoothed estimate would be defined as each smoothing weight can be conceptualized as the confidence in the estimate with which it is being multipliedthese confidence values can be derived in a number of sensible ways the technique used by collins was adapted from that used in bikel et al which makes use of a 
quantity called the diversity of the history context which is equal to the number of unique futures observed in training for that history context681 deficient modelas previously mentioned n backoff levels require n1 smoothing weightscollins parser effectively uses n weights because the estimator always adds an extra constantvalued estimate to the backoff chaincollins parser hardcodes this extra value to be a vanishingly small probability of 1019 resulting in smoothed estimates of the form when there are three levels of backoffthe addition of this constantvalued en 1019 causes all estimates in the parser to be deficient as it ends up throwing away probability massmore formally the proof leading to equation no longer holds the distribution sums to less than one 25 for computing smoothing weights is where ci is the count of the history context oi and ui is the diversity of that context26 the multiplicative constant five is used to give less weight to the backoff levels with more context and was optimized by looking at overall parsing performance on the development test set section 00 of the penn treebankwe call this constant the smoothing factor and denote it as ffas it happens the actual formula for computing smoothing weights in collins implementation is where ft is an unmentioned smoothing termfor every parameter class except the subcat parameter class and ppriorw ft 0 and ff 50for the subcat parameter class ft 50 and ff 0for ppriorw ft 10 and ff 00this curiously means that diversity is not used at all when smoothing subcatgeneration probabilities27 the second case in handles the situation in which the history context was never observed in training that is where ci ui 0 which would yield an undefined value 25 collins used this technique to ensure that even futures that were never seen with an observed history context would still have some probability mass albeit a vanishingly small one another commonly used technique would be to back off to the uniform distribution which has the desirable property of not producing deficient estimatesas with all of the treebank or modelspecific aspects of the collins parser our engine uses equation or depending on the value of a particular runtime setting26 the smoothing weights can be viewed as confidence values for the probability estimates with which they are multipliedthe wittenbell technique crucially makes use of the quantity ni ui the average number of transitions from the history context oi to a possible futurewith a little algebraic manipulation we have a quantity that is at its maximum when ni ci and at its minimum when ni 1 that is when every future observed in training was uniquethis latter case represents when the model is most uncertain in that the transition distribution from oi is uniform and poorly trained because these smoothing weights measure in some sense the closeness of the observed distribution to uniform they can be viewed as proxies for the entropy of the distribution pbackoff levels for plwprw the modifier headword generation parameter classes wliand tli are respectively the headword and its part of speech of the nonterminal lithis table is basically a reproduction of the last column of table 71 in collins thesisour new parameter class for the generation of headwords of modifying nonterminals when ft 0in such situations making λi 0 throws all remaining probability mass to the smoothed backoff estimate ei1this is a crucial part of the way smoothing is done if a particular history context φi has never been observed in training the 
smoothed estimate using less context φi1 is simply substituted as the best guess for the estimate using more context that is ei ei128 as mentioned in section 64 fully lexicalized modifying nonterminals are generated in two stepsfirst the label and partofspeech tag are generated with an instance of pl or prnext the headword is generated via an instance of one of two parameter classes plw or prwthe backoff contexts for the smoothed estimates of these parameters are specified in table 1notice how the last level of backoff is markedly different from the previous two levels in that it removes nearly all the elements of the history in the face of sparse data the probability of generating the headword of a modifying nonterminal is conditioned only on its part of speech order to capture the most data for the crucial last level of backoff collins uses words that occur on either side of the headword resulting in a general estimate ˆp as opposed to ˆplwaccordingly in our emulation of collins model we replace the left and rightword parameter classes with a single modifier headword generation parameter class that as with includes a boolean side component that is deleted from the last level of backoff even with this change there is still a problemevery headword in a lexicalized parse tree is the modifier of some other headwordexcept the word that is the head of the entire sentence in order to properly duplicate collins model an implementation must take care that the p model includes counts for these important headwords29 the lowfrequency word fido is mapped to unknown but only when it is generated not when it is conditioned uponall the nonterminals have been lexicalized to show where the heads are692 unknownword mappingas mentioned above instead of mapping every lowfrequency word in the training data to some special unknown token collins trainer instead leaves the training data untouched and selectively maps words that appear in the backoff levels of the parameters from the pl and pr parameter classesrather curiously the trainer maps only words that appear in the futures of these parameters but never in the historiesput another way lowfrequency words are generated as unknown but are left unchanged when they are conditioned uponfor example in figure 11 where we assume fido is a lowfrequency word the trainer would derive counts for the smoothed parameter the word would not be mappedthis strange mapping scheme has some interesting consequencesfirst imagine what happens to words that are truly unknown that never occurred in the training datasuch words are mapped to the unknown token outright before parsingwhenever the parser estimates a probability with such a truly unknown word in the history it will necessarily throw all probability mass to the backedoff estimate since unknown effectively never occurred in a history context during trainingthe second consequence is that the mapping scheme yields a superficient30 model if all other parts of the model are probabilistically sound as the root nonterminal of a parse treeptopnt is unsmoothed na not applicable not the case herewith a parsing model such as collins that uses bilexical dependencies generating words in the course of parsing is done very much as it is in a bigram language model every word is generated conditioning on some previously generated word as well as some hidden materialthe only difference is that the word being conditioned upon is often not the immediately preceding word in the sentencehowever one could plausibly construct a consistent bigram 
language model that generates words with the same dependencies as those in a statistical parser that uses bilexical dependencies derived from headlexicalizationcollins notes that his parsers unknownwordmapping scheme could be made consistent if one were to add a parameter class that estimated ˆp where w e vl you unknownthe values of these estimates for a given sentence would be constant across all parses meaning that the superficiency of the model would be irrelevant when determining arg max pit is assumed that all trees that can be generated by the model have an implicit nonterminal top that is the parent of the observed rootthe observed lexicalized root nonterminal is generated conditioning on top using a parameter from the class ptopthis special parameter class is mentioned in a footnote in chapter 7 of collins thesisthere are actually two parameter classes used to generated observed roots one for generating the partially lexicalized root nonterminal which we call ptopnt and the other for generating the headword of the entire sentence which we call ptopwtable 3 gives the unpublished backoff structure of these two additional parameter classesnote that ptopw backs off to simply estimating ˆptechnically it should be estimating ˆpnt which is to say the probability of a words occurring with a tag in the space of lexicalized nonterminalsthis is different from the last level of backoff in the modifier headword parameter classes which is effectively estimating ˆp in the space of lexicalized preterminalsthe difference is that in the same sentence the same headword can occur with the same tag in multiple nodes such as sat in figure 8 which occurs with the tag vbd three times in the tree shown theredespite this difference collins parser uses counts from the last level of backoff of the plw and prw parameters when delivering e1 estimates for the ptopw parametersour parsing engine emulates this count sharing for ptopw by default by sharing counts from our pmw parameter classparsing or decoding is performed via a probabilistic version of the cky chartparsing algorithmas with normal cky even though the model is defined in a topdown generative manner decoding proceeds bottomupcollins thesis gives a pseusince the goal of the decoding process is to determine the maximally likely theory if during decoding a proposed chart item is equal to an item that is already in the chart the one with the greater score surviveschart item equality is closely tied to the generative parameters used to construct theories we want to treat two chart items as unequal if they represent derivation forests that would be considered unequal according to the output elements and conditioning contexts of the parameters used to generate them subject to the independence assumptions of the modelfor example for two chart items to be considered equal they must have the same label the same headword and tag and the same left and right subcatthey must also have the same head label if a chart items root label is an np node its head label is most often an npb node given the extra np levels that are added during preprocessing to ensure that npb nodes are always dominated by np nodesin such cases the chart item will contain a back pointer to the chart item that represents the base npcuriously however collins implementation considers the head label of the np chart item not to be npb but rather the head label of the npb chart itemin other words to get the head label of an np chart item one must peek through the npb and get at the npbs head 
labelpresumably this was done as a consideration for the npb nodes being extra nodes in some senseit appears to have little effect on overall parsing accuracy howeverideally every parse theory could be kept in the chart and when the root symbol has been generated for all theories the topranked one would win in order to speed things up collins employs three different types of pruningthe first form of pruning is to use a beam the chart memoizes the highestscoring theory in each span and if a proposed chart item for that span is not within a certain factor of the topscoring item it is not added to the chartcollins reports in his thesis that he uses a beam width of 105as it happens the beam width for his thesis experiments was 104interestingly there is a negligible difference in overall parsing accuracy when this wider beam is used an interesting modification to the standard beam in collins parser is that for chart items representing np or npa derivations with more than one child the beam is expanded to be 104 e3we suspect that collins made this modification after he added the base np model to handle the greater perplexity associated with npsthe second form of pruning employed is a comma constraintcollins observed that in the penn treebank data 96 of the time when a constituent contained a comma the word immediately following the end of the constituents span was either a comma or the end of the sentenceso for speed reasons the decoder rejects all theories that would generate constituents that violate this comma constraint31 there is a subtlety to collins implementation of this form of pruning howevercommas are quite common within parenthetical phrasesaccordingly if a comma in an input overall parsing results using only details found in collins the first two lines show the results of collins parser and those of our parser in its complete emulation mode all reported scores are for sentences of length 40 wordslr and lp are the primary scoring metricscbs is the number of crossing brackets0 cbs and 2 cbs are the percentages of sentences with 0 and 2 crossing brackets respectivelyf is the evenly weighted harmonic mean of precision and recall or 1 lplr sentence occurs after an opening parenthesis and before a closing parenthesis or the end of the sentence it is not considered a comma for the purposes of the comma constraintanother subtlety is that the comma constraint should effectively not be employed when pursuing theories of an npb subtreeas it turns out using the comma constraint also affects accuracy as shown in section 81the final form of pruning employed is rather subtle within each cell of the chart that contains items covering some span of the sentence collins parser uses buckets of items that share the same root nonterminal label for their respective derivationsonly 100 of the topscoring items covering the same span with the same nonterminal label are kept in a particular bucket meaning that if a new item is proposed and there are already 100 items covering the same span with the same label in the chart then it will be compared to the lowestscoring item in the bucketif it has a higher score it will be added to the bucket and the lowestscoring item will be removed otherwise it will not be addedapparently this type of pruning has little effect and so we have not duplicated it in our engine32 when the parser encounters an unknown word the firstbest tag delivered by ratnaparkhis tagger is usedas it happens the tag dictionary built up when training contains entries for every word observed even 
lowfrequency wordsthis means that during decoding the output of the tagger is used only for those words that are truly unknown that is that were never observed in trainingfor all other words the chart is seeded with a separate item for each tag observed with that word in trainingin this section we present the results of effectively doing a cleanroom implementation of collins parsing model that is using only information available in as shown in table 4the cleanroom model has a 106 increase in fmeasure error compared to collins parser and an 110 increase in fmeasure error compared to our engine in its complete emulation of collins model 2this is comparable to the increase in error seen when removing such published features as the verbintervening component of the distance metric which results in an fmeasure error increase of 986 or the subcat feature which results in a 762 increase in fmeasure error33 therefore while the collection of unpublished details presented in sections 47 is disparate in toto those details are every bit as important to overall parsing performance as certain of the published featuresthis does not mean that all the details are equally importanttable 5 shows the effect on overall parsing performance of independently removing or changing certain of the more than 30 unpublished details34 often the detrimental effect of a particular change is quite insignificant even by the standards of the performanceobsessed world of statistical parsing and occasionally the effect of a change is not even detrimental at allthat is why we do not claim the importance of any single unpublished detail but rather that of their totality given that several of the unpublished details are most likely interactinghowever we note that certain individual details such as the universal p model do appear to have a much more marked effect on overall parsing accuracy than othersthe previous section accounts for the noticeable effects of all the unpublished details of collins modelbut what of the details that were publishedin chapter 8 of his thesis collins gives an account on the motivation of various features of his model including the distance metric the models use of subcats and structural versus semantic preferencesin the discussion of this last issue collins points to the fact that structural preferenceswhich in his model are 33 these fmeasures and the differences between them were calculated from experiments presented in collins these experiments unlike those on which our reported numbers are based were on all sentences not just those of length 40 wordsas collins notes removing both the distance metric and subcat features results in a gigantic drop in performance since without both of these features the model has no way to encode the fact that flatter structures should be avoided in several crucial cases such as for pps which tend to prefer one argument to the right of their headchildren34 as a reviewer pointed out the use of the comma constraint is a published detailhowever the specifics of how certain commas do not apply to the constraint is an unpublished detail as mentioned in section 72number of times our parsing engine was able to deliver a probability for the various levels of backoff of the modifierword generation model pmw when testing on section 00 having trained on sections 0221in other words this table reports how often a context in the backoff chain of pmw that was needed during decoding was observed in training modeled primarily by the pl and pr parametersoften provide the right 
information for disambiguating competing analyses but that these structural preferences may be overridden by semantic preferencesbilexical statistics as represented by the maximal context of the plw and prw parameters serve as a proxy for such semantic preferences where the actual modifier word indicates the particular semantics of its headindeed such bilexical statistics were widely assumed for some time to be a source of great discriminative power for several different parsing models including that of collinshowever gildea reimplemented collins model 1 and altered the plw and prw parameters so that they no longer had the top level of context that included the headword in other words gildea removed all bilexical statistics from the overall modelsurprisingly this resulted in only a 045 absolute reduction in fmeasure unfortunately this result was not entirely conclusive in that gildea was able to reimplement collins baseline model only partially and the performance of his partial reimplementation was not quite as good as that of collins parser35 training on sections 0221 we have duplicated gildeas bigramremoval experiment except that our chosen test set is section 00 instead of section 23 and our chosen model is the more widely used model 2using the mode that most closely emulates collins model 2 with bigrams our engine obtains a recall of 8989 and a precision of 9014 on sentences of length 40 words without bigrams performance drops only to 8949 on recall 8995 on precision an exceedingly small drop in performance in an additional experiment we have examined the number of times that the parser is able while decoding section 00 to deliver a requested probability for the modifierword generation model using the increasingly lessspecific contexts of the three backoff levelsthe results are presented in table 6backoff level 0 indicates the use of the full history context which contains the headchilds headwordnote that probabilities making use of this full context that is making use of bilexical dependencies are available only 149 of the timecombined with the results from the previous experiment this suggests rather convincingly that such statistics are far less significant than once thought to the overall discriminative power of collins models confirming gildeas result for model 236 if not bilexical statistics then surely one might think headchoice is critical to the performance of a headdriven lexicalized statistical parsing modelpartly to this end in chiang and bikel we explored methods for recovering latent information in treebanksthe second half of that paper focused on a use of the insideoutside algorithm to reestimate the parameters of a model defined over an augmented tree space where the observed data were considered to be the goldstandard labeled bracketings found in the treebank and the hidden data were considered to be the headlexicalizations one of the most notable tree augmentations performed by modern statistical parsersthese expectation maximization experiments were motivated by the desire to overcome the limitations imposed by the heuristics that have been heretofore used to perform headlexicalization in treebanksin particular it appeared that the head rules used in collins parser had been tweaked specifically for the english penn treebankusing them would mean that very little effort would need to be spent on developing head rules since them could take an initial model that used simple heuristics and optimize it appropriately to maximize the likelihood of the unlexicalized training 
treesto test this we performed experiments with an initial model trained using an extremely simplified headrule set in which all rules were of the form if the parent is x then choose the leftrightmost child a surprising side result was that even with this simplified set of headrules overall parsing performance still remained quite highusing our simplified headrule set for english our engine in its model 2 emulation mode achieved a recall of 8855 and a precision of 8880 for sentences of length 40 words in section 00 so contrary to our expectations the lack of careful headchoice is not crippling in allowing the parser to disambiguate competing theories and is a further indication that semantic preferences as represented by conditioning on a headword rarely override structural onesgiven that bilexical dependencies are almost never used and have a surprisingly small effect on overall parsing performance and given that the choice of head is not terribly critical either one might wonder what power if any headlexicalization is providingthe answer is that even when one removes bilexical dependencies from the model there are still plenty of lexicostructural dependencies that is structures being generated conditioning on headwords and headwords being generated conditioning on structuresto test the effect of such lexicostructural dependencies in our lexicalized pcfgstyle formalism we experimented with the removal of the head tag th andor the head word wh from the conditioning contexts of the pmw and pm parametersthe recertainly points to the utility of caching probabilities parsing performance with various models on section 00 of the penn treebankpm is the parameter class for generating partially lexicalized modifying nonterminals pmw is the parameter class that generates the headword of a modifying nonterminaltogether pm and pmw generate a fully lexicalized modifying nonterminalthe check marks indicate the inclusion of the headword wh and its part of speech th of the lexicalized head nonterminal h in the conditioning contexts of pm and pmwsee table 4 for definitions of the remaining column headings sults are shown in table 8model mtwtw shows our baseline and model mφφ shows the effect of removing all dependence on the headword and its part of speech with the other models illustrating varying degrees of removing elements from the two parameter classes conditioning contextsnotably including the headword wh in or removing it from the pm contexts appears to have a significant effect on overall performance as shown by moving from model mtwt to model mtt and from model mtwφ to model mtφthis reinforces the notion that particular headwords have structural preferences so that making the pm parameters dependent on headwords would capture such preferencesas for effects involving dependence on the head tag th observe that moving from model mtwt to model mtwφ results in a small drop in both recall and precision whereas making an analogous move from model mtt to model mtφ results in a drop in recall but a slight gain in precision it is not evident why these two moves do not produce similar performance losses but in both cases the performance drops are small relative to those observed when eliminating wh from the conditioning contexts indicating that headwords matter far more than parts of speech for determining structural preferences as one would expectwe have documented what we believe is the complete set of heretofore unpublished details collins used in his parser such that along with collins thesis thi s 
article contains all information necessary to duplicate collins benchmark resultsindeed these asyetunpublished details account for an 11 relative increase in error from an implementation including all details to a cleanroom implementation of collins modelwe have also shown a cleaner and equally wellperforming method for the handling of punctuation and conjunction and we have revealed certain other probabilistic oddities about collins parserwe have not only analyzed the effect of the unpublished details but also reanalyzed the effect of certain wellknown details revealing that bilexical dependencies are barely used by the model and that head choice is not nearly as important to overall parsing performance as once thoughtfinally we have performed experiments that show that the true discriminative power of lexicalization appears to lie in the fact that unlexicalized syntactic structures are generated conditioning on the headword and head tagthese results regarding the lack of reliance on bilexical statistics suggest that generative models still have room for improvement through the employment of bilexicalclass statistics that is dependencies among headmodifier word classes where such classes may be defined by say wordnet synsetssuch dependencies might finally be able to capture the semantic preferences that were thought to be captured by standard bilexical statistics as well as to alleviate the sparsedata problems associated with standard bilexical statisticsthis is the subject of our current researchthis section contains tables for all parameter classes in collins model 3 with appropriate modifications and additions from the tables presented in collins thesisthe notation is that used throughout this articlein particular for notational brevity we use mi to refer to the three items mi tmi and wmi that constitute some fully lexicalized modifying nonterminal and similarly mi to refer to the two items mi and tmi that constitute some partially lexicalized modifying nonterminalthe nonterminalmapping functions alpha and gamma are defined in section 61as a shorthand yi ytmithe headgeneration parameter class ph gapgeneration parameter class pg and subcatgeneration parameter classes psubcatl and psubcatr have backoff structures as follows the two parameter classes for generating modifying nonterminals that are not dominated by a base np pm and pmw have the following backoff structuresrecall that backoff level 2 of the pmw parameters includes words that are the heads of the observed roots of sentences the two parameter classes for generating modifying nonterminals that are children of base nps pmnpb and pmwnpb have the following backoff structuresbackoff level 2 of the pmwnpb parameters includes words that are the heads of the observed roots of sentences also note that there is no coord flag as coordinating conjunctions are generated in the same way as regular modifying nonterminals when they are dominated by npbfinally we define m0 h that is the head nonterminal label of the base np that was generated using a ph parameterthe two parameter classes for generating punctuation and coordinating conjunctions ppunc and pcoord have the following backoff structures where 2 type ttype the parameter classes for generating fully lexicalized root nonterminals given the hidden root top ptop and ptopw have the following backoff structures the parameter classes for generating prior probabilities on lexicalized nonterminals m ppriorw and ppriornt have the following backoff structures where prior is a dummy variable to 
indicate that ppriorwis not smoothed i would especially like to thank mike collins for his invaluable assistance and great generosity while i was replicating his thesis results and for his comments on a prerelease draft of this articlemany thanks to david chiang and dan gildea for the many valuable discussions during the course of this workalso thanks to the anonymous reviewers for their helpful and astute observationsfinally thanks to my phd advisor mitch marcus who during the course of this work was as ever a source of keen insight and unbridled optimismthis work was supported in part by nsf grant nosbr8920239 and darpa grant non660010018915
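As a companion to the back-off structures listed in the appendix above, the following is a minimal Python sketch of the smoothing recursion described in section 6.8, under the assumptions implied there: each level's weight is computed from the history count c_i and diversity u_i as lambda_i = c_i / (c_i + f_t + f_f * u_i), and a hard-coded constant estimate of 1e-19 terminates the chain. The function name and calling interface are illustrative only, not Collins' code or our engine's.

```python
def smooth_backoff(ml_estimates, history_counts, diversities,
                   f_f=5.0, f_t=0.0, floor=1e-19):
    """Deleted-interpolation smoothing with diversity-based (Witten-Bell-style)
    weights, in the spirit of section 6.8.

    ml_estimates[i]   -- maximum-likelihood estimate at back-off level i
                         (level 0 = fullest context)
    history_counts[i] -- c_i, count of the level-i history context in training
    diversities[i]    -- u_i, number of distinct futures seen with that context
    f_f, f_t          -- smoothing factor / smoothing term (5.0 and 0.0 for most
                         parameter classes; 0.0 and 5.0 for subcat generation)
    floor             -- the constant estimate appended to the chain, which is
                         what makes the resulting estimates deficient
    """
    estimate = floor
    # Fold in estimates from the coarsest context back up to the fullest one:
    # e_i = lambda_i * e_hat_i + (1 - lambda_i) * e_{i+1}.
    for e_hat, c, u in zip(reversed(ml_estimates),
                           reversed(history_counts),
                           reversed(diversities)):
        if c == 0:
            lam = 0.0  # unseen history context: defer entirely to the back-off estimate
        else:
            lam = c / (c + f_t + f_f * u)
        estimate = lam * e_hat + (1.0 - lam) * estimate
    return estimate

# Example: three back-off levels, with the fullest context seen only twice in training.
p = smooth_backoff(ml_estimates=[0.5, 0.12, 0.03],
                   history_counts=[2, 150, 40000],
                   diversities=[2, 35, 900])
```

With the default arguments this mirrors the behavior described for most parameter classes; passing f_f=0.0 and f_t=5.0 reproduces the subcat case, in which diversity plays no role in the weights.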
Intricacies of Collins' parsing model. This article documents a large set of heretofore unpublished details Collins used in his parser, such that, along with Collins' thesis, this article contains all information necessary to duplicate Collins' benchmark results. Indeed, these as-yet-unpublished details account for an 11% relative increase in error from an implementation that includes all details to a clean-room implementation of Collins' model. We also show a cleaner and equally well-performing method for handling punctuation and conjunction, and we reveal certain other probabilistic oddities about Collins' parser. We not only analyze the effect of the unpublished details, but also reanalyze the effect of certain well-known details, revealing that bilexical dependencies are barely used by the model and that head choice is not nearly as important to overall parsing performance as once thought. Finally, we perform experiments showing that the true discriminative power of lexicalization appears to lie in the fact that unlexicalized syntactic structures are generated conditioning on the headword and its part of speech. The results suggest that the power of Collins-style parsing models did not lie primarily with the use of bilexical dependencies, as was once thought, but with lexico-structural dependencies, that is, predicting syntactic structures conditioning on headwords. We show that bilexical information is used in only 1.49% of the decisions in Collins' Model 2 parser and that removing this information results in an exceedingly small drop in performance.
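The back-off-usage statistic behind that 1.49% figure (table 6) can be gathered with a few lines of bookkeeping during decoding. The sketch below assumes a hypothetical interface rather than the parser's actual code: each probability request is represented by its chain of history contexts, ordered from most to least specific, and observed_contexts is the set of history contexts seen in training.

```python
from collections import Counter

def backoff_usage(requests, observed_contexts):
    """Tally, for each requested modifier-word probability, the most specific
    back-off level whose history context was actually observed in training.

    requests          -- per request, its history contexts as hashable tuples,
                         ordered from most specific (level 0) to least specific
    observed_contexts -- set of history contexts seen in training
    """
    usage = Counter()
    for contexts in requests:
        for level, phi in enumerate(contexts):
            if phi in observed_contexts:
                usage[level] += 1
                break
        else:
            usage["unseen"] += 1   # not even the coarsest context was observed
    return usage
```

Since only level 0 of the modifier-word parameter class conditions on the head word, usage[0] divided by the total number of requests is the fraction of decisions that can draw on bilexical statistics at all.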
discriminative reranking for natural language parsing this article considers approaches which rerank the output of an existing probabilistic parser the base parser produces a set of candidate parses for each input sentence with associated probabilities that define an initial ranking of these parses a second model then attempts to improve upon this initial ranking using additional features of the tree as evidence the strength of our approach is that it allows a tree to be represented as an arbitrary set offeatures without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account we introduce a new method for the reranking task based on the boosting approach to ranking problems described in freund et al we apply the boosting method to parsing the wall street journal treebank the method combined the loglikelihood under a baseline model with evidence from an additional 500000 features over parse trees that were not included in the model the new model achieved 8975 a 13 relative decrease in measure error over the baseline models score of 882 the article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach we argue that the method is an appealing alternativein terms of both simplicity and efficiencyto work on feature selection methods within loglinear models although the experiments in this article are on natural language parsing the approach should be applicable to many other nlp problems which are naturally framed as ranking tasks for example speech recognition machine translation or natural language generation this article considers approaches which rerank the output of an existing probabilistic parserthe base parser produces a set of candidate parses for each input sentence with associated probabilities that define an initial ranking of these parsesa second model then attempts to improve upon this initial ranking using additional features of the tree as evidencethe strength of our approach is that it allows a tree to be represented as an arbitrary set offeatures without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into accountwe introduce a new method for the reranking task based on the boosting approach to ranking problems described in freund et al we apply the boosting method to parsing the wall street journal treebankthe method combined the loglikelihood under a baseline model with evidence from an additional 500000 features over parse trees that were not included in the original modelthe new model achieved 8975 fmeasure a 13 relative decrease in fmeasure error over the baseline models score of 882the article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing dataexperiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approachwe argue that the method is an appealing alternativein terms of both simplicity and efficiencyto work on feature selection methods within loglinear modelsalthough the experiments in this article are on natural language parsing the approach should be applicable to many other nlp problems which are naturally framed as ranking 
tasks for example speech recognition machine translation or natural language generationmachinelearning approaches to natural language parsing have recently shown some success in complex domains such as news wire textmany of these methods fall into the general category of historybased models in which a parse tree is represented as a derivation and the probability of the tree is then calculated as a product of decision probabilitieswhile these approaches have many advantages it can be awkward to encode some constraints within this frameworkin the ideal case the designer of a statistical parser would be able to easily add features to the model that are believed to be useful in discriminating among candidate trees for a sentencein practice however adding new features to a generative or historybased model can be awkward the derivation in the model must be altered to take the new features into account and this can be an intricate taskthis article considers approaches which rerank the output of an existing probabilistic parserthe base parser produces a set of candidate parses for each input sentence with associated probabilities that define an initial ranking of these parsesa second model then attempts to improve upon this initial ranking using additional features of the tree as evidencethe strength of our approach is that it allows a tree to be represented as an arbitrary set of features without concerns about how these features interact or overlap and without the need to define a derivation which takes these features into accountwe introduce a new method for the reranking task based on the boosting approach to ranking problems described in freund et al the algorithm can be viewed as a feature selection method optimizing a particular loss function that has been studied in the boosting literaturewe applied the boosting method to parsing the wall street journal treebank the method combines the loglikelihood under a baseline model with evidence from an additional 500000 features over parse trees that were not included in the original modelthe baseline model achieved 882 fmeasure on this taskthe new model achieves 8975 fmeasure a 13 relative decrease in fmeasure erroralthough the experiments in this article are on natural language parsing the approach should be applicable to many other natural language processing problems which are naturally framed as ranking tasks for example speech recognition machine translation or natural language generationsee collins for an application of the boosting approach to named entity recognition and walker rambow and rogati for the application of boosting techniques for ranking in the context of natural language generationthe article also introduces a new more efficient algorithm for the boosting approach which takes advantage of the sparse nature of the feature space in the parsing dataother nlp tasks are likely to have similar characteristics in terms of sparsityexperiments show an efficiency gain of a factor of 2600 for the new algorithm over the obvious implementation of the boosting approachefficiency issues are important because the parsing task is a fairly large problem involving around one million parse trees and over 500000 featuresthe improved algorithm can perform 100000 rounds of feature selection on our task in a few hours with current processing speedsthe 100000 rounds of feature selection require computation equivalent to around 40 passes over the entire training set the problems with historybased models and the desire to be able to specify features as 
arbitrary predicates of the entire tree have been noted beforein particular previous work has investigated the use of markov random fields or loglinear models as probabilistic models with global features for parsing and other nlp taskssimilar methods have also been proposed for machine translation and language understanding in dialogue systems previous work has drawn connections between loglinear models and boosting for classification problemsone contribution of our research is to draw similar connections between the two approaches to ranking problemswe argue that the efficient boosting algorithm introduced in this article is an attractive alternative to maximumentropy models in particular feature selection methods that have been proposed in the literature on maximumentropy modelsthe earlier methods for maximumentropy feature selection methods require several full passes over the training set for each round of feature selection suggesting that at least for the parsing data the improved boosting algorithm is several orders of magnitude more efficient1 in section 64 we discuss our approach in comparison to these earlier methods for feature selection as well as the more recent work of mccallum zhou et al and riezler and vasserman the remainder of this article is structured as followssection 2 reviews historybased models for nlp and highlights the perceived shortcomings of historybased models which motivate the reranking approaches described in the remainder of the articlesection 3 describes previous work that derives connections between boosting and maximumentropy models for the simpler case of classification problems this work forms the basis for the reranking methodssection 4 describes how these approaches can be generalized to ranking problemswe introduce loss functions for boosting and mrf approaches and discuss optimization methodswe also derive the efficient algorithm for boosting in this sectionsection 5 gives experimental results investigating the performance improvements on parsing efficiency issues and the effect of various parameters of the boosting algorithmsection 6 discusses related work in more detailfinally section 7 gives conclusionsthe reranking models in this article were originally introduced in collins in this article we give considerably more detail in terms of the algorithms involved their justification and their performance in experiments on natural language parsingbefore discussing the reranking approaches we describe historybased models they are important for a few reasonsfirst several of the bestperforming parsers on the wsj treebank are cases of historybased modelsmany systems applied to partofspeech tagging speech recognition and other language or speech tasks also fall into this class of modelsecond a particular historybased model is used as the initial model for our approachfinally it is important to describe historybased modelsand to explain their limitationsto motivate our departure from themparsing can be framed as a supervised learning task to induce a function f xy given training examples where xi z x yi z ywe define genîy to be the set of candidates for a given input xin the parsing problem x is a sentence and gen is a set of candidate trees for that sentencea particular characteristic of the problem is the complexity of gen gen can be very large and each member of gen has a rich internal structurethis contrasts with typicalclassification problems in which gen is a fixed small set for example f111 in binary classification problemsin probabilistic 
approaches a model is defined which assigns a probability p to each pair2 the most likely parse for each sentence x is then arg maxyegen pthis leaves the question of how to define pin historybased approaches a onetoone mapping is defined between each pair and a decision sequence the sequence can be thought of as the sequence of moves that build in some canonical ordergiven this mapping the probability of a tree can be written as here is the history for the ith decisionf is a function which groups histories into equivalence classes thereby making independence assumptions in the modelprobabilistic contextfree grammars are one example of a historybased modelthe decision sequence is defined as the sequence of rule expansions in a topdown leftmost derivation of the treethe history is equivalent to a partially built tree and f picks out the nonterminal being expanded making the assumption that p depends only on the nonterminal being expandedin the resulting model a tree with rule expansions our base model that of collins is also a historybased modelit can be considered to be a type of pcfg where the rules are lexicalizedan example rule would be lexicalization leads to a very large number of rules to make the number of parameters manageable the generation of the righthand side of a rule is broken down into a number of decisions as follows 2 to be more precise generative probabilistic models assign joint probabilities p to each pairsimilar arguments apply to conditional historybased models which define conditional probabilities p through a definition where d1 dn are again the decisions made in building a parse and f is a function that groups histories into equivalence classesnote that x is added to the domain of f see ratnaparkhi for one example of a method using this approachfigure 1 illustrates this processeach of the above decisions has an associated probability conditioned on the lefthand side of the rule and other information in some caseshistorybased approaches lead to models in which the logprobability of a parse tree can be written as a linear sum of parameters ak multiplied by features hkeach feature hk is the count of a different eventor fragment within the treeas an example consider a pcfg with rules for 1 ok is seen in the tree and ak log p is the parameter associated with that rule then all models considered in this article take this form although in the boosting models the score for a parse is not a logprobabilitythe features hk define an mdimensional vector of counts which represent the treethe parameters ak represent the influence of each feature on the score of a treea drawback of historybased models is that the choice of derivation has a profound influence on the parameterization of the modelwhen designing a model it would be desirable to have a framework in which features can be easily added to the modelunfortunately with historybased models adding new features often requires a modification of the underlying derivations in the modelmodifying the derivation to include a new feature type can be a laborious taskin an ideal situation we would be able to encode arbitrary features hk without having to worry about formulating a derivation that included these featuresto take a concrete example consider partofspeech tagging using a hidden markov model we might have the intuition that almost every sentence has at least one verb and therefore that sequences including at least one verb should have increased scores under the modelencoding this constraint in a compact way in an hmm takes some 
ingenuitythe obvious approachto add to each state the information about whether or not a verb has been generated in the historydoubles the sequence of decisions involved in generating the righthand side of a lexical rule the number of states in the modelin contrast it would be trivial to implement a feature hk which is 1 if y contains a verb 0 otherwisewe now turn to machinelearning methods for the ranking taskin this section we review two methods for binary classification problems logistic regression models and boostingthese methods form the basis for the reranking approaches described in later sections of the articlemaximumentropy models are a very popular method within the computational linguistics community see for example berger della pietra and della pietra for an early article which introduces the models and motivates themboosting approaches to classification have received considerable attention in the machinelearning community since the introduction of adaboost by freund and schapire boosting algorithms and in particular the relationship between boosting algorithms and maximumentropy models are perhaps not familiar topics in the nlp literaturehowever there has recently been much work drawing connections between the two methods in this section we review this workmuch of this work has focused on binary classification problems and this section is also restricted to problems of this typelater in the article we show how several of the ideas can be carried across to reranking problemsthe general setup for binary classification problems is as follows where each ak e r hence a is an mdimensional realvalued vectorwe show that both logistic regression and boosting implement a linear or hyperplane classifierthis means that given an input example x and parameter values a the output from the classifier is collins and koo discriminative reranking for nlp where hyperplane which passes through the origin4 of the space and has a as its normalpoints lying on one side of this hyperplane are classified as 1 points on the other side are classified as 1the central question in learning is how to set the parameters a given the training examples bðx1 y1þ ðx2 y2þ ðxn ynþàlogistic regression and boosting involve different algorithms and criteria for training the parameters a but recent work has shown that the methods have strong similaritiesthe next section describes parameter estimation methodsa central idea in both logistic regression and boosting is that of a loss function which drives the parameter estimation methods of the two approachesthis section describes loss functions for binary classificationlater in the article we introduce loss functions for reranking tasks which are closely related to the loss functions for classification tasksfirst consider a logistic regression modelthe parameters of the model a are used to define a conditional probability where fðx aþ is as defined in equation some form of maximumlikelihood estimation is often used for parameter estimationthe parameters are chosen to maximize the loglikelihood of the training set equivalently we talk about minimizing the negative loglikelihoodthe negative loglikelihood logloss is defined as there are many methods in the literature for minimizing logloss with respect to a for example generalized or improved iterative scaling or conjugate gradient methods in the next section we describe feature selection methods as described in berger della pietra and della pietra and della pietra della pietra and lafferty once the parameters a are estimated 
on training examples the output for an example x is the most likely label under the model where as before sign 1 if z 0 sign 1 otherwisethus we see that the logistic regression model implements a hyperplane classifierin boosting a different loss function is used namely exploss which is defined as this loss function is minimized using a feature selection method which we describe in the next sectionthere are strong similarities between logloss and exploss in making connections between the two functions it is useful to consider a third function of the parameters and training examples where gpä is one if p is true zero otherwiseerror is the number of incorrectly classified training examples under parameter values afinally it will be useful to define the margin on the ith training example given parameter values a as the three loss functions differ only in their choice of an underlying potential functionof the margins fthis function is f log f ez or f qz 1 time 2 times 3 times and so on421 ranking errors and marginsthe loss functions we consider are all related to the number of ranking errors a function f makes on the training setthe ranking error rate is the number of times a lowerscoring parse is ranked above the best parse where again gpä is one if p is true zero otherwisein the ranking problem we define the margin for each example xij such that i 1 n j 2 ni as thus mij is the difference in ranking score between the correct parse of a sentence and a competing parse xijit follows that the ranking error is zero if all margins are positivethe loss functions we discuss all turn out to be direct functions of the margins on training examples422 loglikelihoodthe first loss function is that suggested by markov random fieldsas suggested by ratnaparkhi roukos and ward and johnson et al the conditional probability of xiq being the correct parse for the ith sentence is defined as hence once the parameters are trained the ranking function is used to order candidate trees for test examplesthe loglikelihood of the training data is under maximumlikelihood estimation the parameters a would be set to maximize the loglikelihoodequivalently we again talk about minimizing the negative loglikelihoodsome manipulation shows that the negative loglikelihood is a function of the margins on training data note the similarity of equation to the logloss function for classification in equation described in schapire and singer it is a special case of the general ranking methods described in freund et al with the ranking feedbackbeing a simple binary distinction between the highestscoring parse and the other parsesagain the loss function is a function of the margins on training data note the similarity of equation to the exploss function for classification in equation it can be shown that explossðaþ errorðaþ so that minimizing explossðaþ is closely related to minimizing the number of ranking errors11 this follows from the fact that for any x ex gx pp vbd np np sbar with head vbd as an examplenote that the output of our baseline parser produces syntactic trees with headword annotations for a description of the rules used to find headwordstwolevel rulessame as rules but also including the entire rule above the ruletwolevel bigramssame as bigrams but also including the entire rule above the ruletrigramsall trigrams within the rulethe example rule would contribute the trigrams and grandparent bigramssame as bigrams but also including the nonterminal above the bigramslexical bigramssame as bigrams but with the lexical heads of the 
two nonterminals also includedhead modifiersall headmodifier pairs with the grandparent nonterminal also includedan adj flag is also included which is one if the modifier is adjacent to the head zero otherwiseas an example say the nonterminal dominating the example rule is s the example rule would contribute and ppslexical trigrams involving the heads of arguments of prepositional phrasesthe example shown at right would contribute the trigram in addition to the relation which ignores the headword of the constituent being modified by the ppthe three nonterminals identify the parent of the entire phrase the nonterminal of the head of the phrase and the nonterminal label for the ppdistance head modifiersfeatures involving the distance between headwordsfor example assume dist is the number of words between the headwords of the vbd and sbar in the headmodifier relation in the above rulethis relation would then generate features and for all 1 x distfurther lexicalizationin order to generate more features a second pass was made in which all nonterminals were augmented with their lexical heads when these headwords were closedclass wordsall features apart from head modifiers pps and distance head modifiers were then generated with these augmented nonterminalsall of these features were initially generated but only features seen on at least one parse for at least five different sentences were included in the final model the exploss method was trained with several values for the smoothing parameter e 00001 000025 00005 000075 0001 00025 0005 00075for each value of e the method was run for 100000 rounds on the training datathe implementation was such that the feature updates for all 100000 rounds for each training run were recorded in a filethis made it simple to test the model on development data for all values of n between 0 and 100000the different values of and n were compared on development data through the following criterion where score is as defined above and zi is the output of the model on the ith development set examplethe n values which maximized this quantity were used to define the final model applied to the test data the optimal values were 00025 and n 90386 at which point 11673 features had nonzero values the computation took roughly 34 hours on a machine with a 16 ghz pentium processor and around 2 gb of memorytable 1 shows results for the methodthe model of collins was the base model the exploss model gave a 15 absolute improvement over this methodthe method gives very similar accuracy to the model of charniak which also uses a rich set of initial features in addition to charniaks original modelthe logloss method was too inefficient to run on the full data setinstead we made some tests on a smaller subset of the data and 52294 features15 on an older machine the boosting method took 40 minutes for 10000 rounds on this data setthe logloss method took 20 hours to complete 3500 rounds this was in spite of various heuristics that were implemented in an attempt to speed up logloss for example selecting multiple features at each round or recalculating the statistics for only the best k features for some small k at the previous round of feature selectionin initial experiments we found exploss to give similar perhaps slightly better accuracy than loglossthis section describes further experiments investigating various aspects of the boosting algorithm the effect of the and n parameters learning curves the choice of the sij weights and efficiency issues541 the effect of the a and n 
parametersfigure 5 shows the learning curve on development data for the optimal value of the accuracy shown is the performance relative to the baseline method of using the probability from the generative model alone in ranking parses where the measure in equation is used to measure performancefor example a score of 1015 indicates a 15 increase in this scorethe learning curve is initially steep eventually flattening off but reaching its peak value after a large number of rounds of feature selectiontable 2 indicates how the peak performance varies with the smoothing parameter figure 6 shows learning curves for various values of it can be seen that values other than 00025 can lead to undertraining or overtraining of the modelresults on section 23 of the wsj treebanklris labeled recall lpis labeled precision cbsis the average number of crossing brackets per sentence 0 cbsis the percentage of sentences with 0 crossing brackets 2 cbsis the percentage of sentences with two or more crossing bracketsall the results in this table are for models trained and tested on the same data using the same evaluation metricnote that the exploss results are very slightly different from the original results published in collins we recently reimplemented the boosting code and reran the experiments and minor differences in the code and a values tested on development data led to minor improvements in the resultslearning curve on development data for the optimal value for a the yaxis is the level of accuracy and the xaxis is the number of rounds of boosting idea of weights sij representing the importance of examplesthus far in the experiments in this article we have used the definition thereby weighting examples in proportion to their difference in score from the correct parse for the sentence in questionin this section we compare this approach to a default definition of sij namely sij 14 1 ð23þ using this definition we trained the exploss method on the same training set for several values of the smoothing parameter a and evaluated the performance on development datatable 3 compares the peak performance achieved under the two definitions of sij on the development setit can be seen that the definition in equation outperforms the simpler method in equation figure 7 shows the learning curves for the optimal values of a for the two methodsit can be seen that the learning curve for the definition of sij in equation consistently dominates the curve for the simpler definition543 efficiency gainssection 45 introduced an efficient algorithm for optimizing explossin this section we explore the empirical gains in efficiency seen on the parsing data sets in this articlewe first define the quantity t as follows learning curves on development data for various values of in each case the yaxis is the level of accuracy and the xaxis is the number of rounds of boostingthe three graphs compare the curve for 00025 to 00001 00075 and 0001the top graph shows that 00001 leads to undersmoothing initially the graph is higher than that for 00025 but on later rounds the performance starts to decreasethe middle graph shows that 00075 leads to oversmoothing the graph shows consistently lower performance than that for 00025the bottom graph shows that there is little difference in performance for 0001 versus 00025this is a measure of the number of updates to the wþk and wk variables required in making a pass over the entire training setthus it is a measure of the amount of computation that the naive algorithm for exploss presented in figure 3 
requires for each round of feature selectionnext say the improved algorithm in figure 4 selects feature k on the t th round of feature selectionthen we define the following quantity we are now in a position to compare the running times of the two algorithmswe define the following quantities here work is the computation required for n rounds of feature selection where a single unit of computation corresponds to a pass over the entire training setsavings tracks the relative efficiency of the two algorithms as a function of the number of features n for example if savings 1200 this signifies that for the first 100 rounds of feature selection the improved algorithm is 1200 times as efficient as the naive algorithmfinally savings indicates the relative efficiency between rounds a and b inclusive of feature selectionfor example savings 83 signifies that between rounds 11 and 100 inclusive of the algorithm the improved algorithm was 83 times as efficientfigures 8 and 9 show graphs of work and savings versus n the savings from the improved algorithm are dramaticin 100000 rounds of feature selection the improved algorithm requires total computation that is equivalent to a mere 371 passes over the training setthis is a saving of a factor of 2692 over the naive algorithmtable 4 shows the value of savings for various values of it can be seen that the performance gains are significantly larger in later rounds of feature selection presumably because in later stages relatively infrequent features are being selectedeven so there are still savings of a factor of almost 50 in the early stages of the methodcharniak describes a parser which incorporates additional features into a previously developed parser that of charniak the method gives substantial improvements over the original parser and results which are very close to the results of the boosting method we have described in this article our features are in many ways similar to those of charniak the model in charniak is quite different howeverthe additional features are incorporated using a method inspired by maximumentropy models ratnaparkhi describes the use of maximumentropy techniques applied to parsingloglinear models are used to estimate the conditional probabilities p in a historybased parseras a result the model can take into account quite a rich set of features in the historysavings versus nboth approaches still rely on decomposing a parse tree into a sequence of decisions and we would argue that the techniques described in this article have more flexibility in terms of the features that can be included in the modelabney describes the application of loglinear models to stochastic headdriven phrase structure grammars della pietra della pietra and lafferty describe feature selection methods for loglinear models and rosenfeld describes application of these methods to language modeling for speech recognitionthese methods all emphasize models which define a joint probability over the space of all parse trees for this reason we describe these approaches as joint loglinear modelsthe probability of a tree xij is here z is the set of possible trees and the denominator cannot be calculated explicitlythis is a problem for parameter estimation in which an estimate of the denominator is required and monte carlo methods have been proposed as a technique for estimation of this valueour sense is that these methods can be computationally expensivenotice that the joint likelihood in equation is not a direct function of the margins on training examples and its 
relation to error rate is therefore not so clear as in the discriminative approaches described in this articleratnaparkhi roukos and ward johnson et al and riezler et al suggest training loglinear models for parsing problemsratnaparkhi roukos and ward use feature selection techniques for the taskjohnson et al and riezler et al do not use a feature selection technique employing instead an objective function which includes a gaussian prior on the parameter values thereby penalizing parameter values which become too large closedform updates under iterative scaling are not possible with this objective function instead optimization algorithms such as gradient descent or conjugate gradient methods are used to estimate parameter valuesin more recent work lafferty mccallum and pereira describe the use of conditional markov random fields for tagging tasks such as named entity recognition or partofspeech tagging crfs employ the objective function in equation a key insight of lafferty mccallum and pereira is that when features are of a significantly local nature the gradient of the function in equation can be calculated efficiently using dynamic programming even in cases in which the set of candidates involves all possible tagged sequences and is therefore exponential in sizesee also sha and pereira for more recent work on crfsoptimizing a loglinear model with a gaussian prior is a plausible alternative to the feature selection approaches described in the current article or to the feature selection methods previously applied to loglinear modelsthe gaussian prior has been found in practice to be very effective in combating overfitting of the parameters to the training data the function in equation can be optimized using variants of gradient descent which in practice require tens or at most hundreds of passes over the training data thus loglinear models with a gaussian prior are likely to be comparable in terms of efficiency to the feature selection approach described in this article note however that the two methods will differ considerably in terms of the sparsity of the resulting rerankerwhereas the feature selection approach leads to around 11000 of the features in our model having nonzero parameter values loglinear models with gaussian priors typically have very few nonzero parameters this may be important in some domains for example those in which there are a very large number of features and this large number leads to difficulties in terms of memory requirements or computation timea number of previous papers describe feature selection approaches for loglinear models applied to nlp problemsearlier work suggested methods that added a feature at a time to the model and updated all parameters in the current model at each step assuming that selection of a feature takes one pass over the training set and that fitting a model takes p passes over the training set these methods require f x passes over the training set where f is the number of features selectedin our experiments f z 10000it is difficult to estimate the value for p but assuming that p 2 selecting 10000 features would require 30000 passes over the training setthis is around 1000 times as much computation as that required for the efficient boosting algorithm applied to our data suggesting that the feature selection methods in berger della pietra and della pietra ratnaparkhi and della pietra della pietra and lafferty are not sufficiently efficient for the parsing taskmore recent work has considered methods for speeding up the feature 
selection methods described in berger della pietra and della pietra ratnaparkhi and della pietra della pietra and lafferty mccallum and riezler and vasserman describe approaches that add k features at each step where k is some constant greater than onethe running time for these methods is therefore o1kriezler and vasserman test a variety of values for k finding that k 100 gives optimal performancemccallum uses a value of k 1000zhou et al use a different heuristic that avoids having to recompute the gain for every feature at every iterationwe would argue that the alternative feature selection methods in the current article may be preferable on the grounds of both efficiency and simplicityeven with large values of k in the approach of mccallum and riezler and vasserman the approach we describe is likely to be at least as efficient as these alternative approachesin terms of simplicity the methods in mccallum and riezler and vasserman require selection of a number of free parameters governing the behavior of the algorithm the value for k the value for a regularizer constant and the precision with which the model is optimized at each stage of feature selection in contrast our method requires a single parameter to be chosen and makes a single approximation the latter approximation is particularly important as it leads to the efficient algorithm in figure 4 which avoids a pass over the training set at each iteration of feature selection note that there are other important differences among the approachesboth della pietra della pietra and lafferty and mccallum describe methods that induce conjunctions of basefeatures in a way similar to decision tree learnersthus a relatively small number of base features can lead to a very large number of possible conjoined featuresin future work it might be interesting to consider these kinds of approaches for the parsing problemanother difference is that both mccallum and riezler and vasserman describe approaches that use a regularizer in addition to feature selection mccallum uses a twonorm regularizer riezler and vasserman use a onenorm regularizerfinally note that other feature selection methods have been proposed within the machinelearning community for example filtermethods in which feature selection is performed as a preprocessing step before applying a learning method and backward selection methods in which initially all features are added to the model and features are then incrementally removed from the model65 boosting perceptron and support vector machine approaches for ranking problems freund et al introduced a formulation of boosting for ranking problemsthe problem we have considered is a special case of the problem in freund et al in that we have considered a binary distinction between candidates whereas freund et al consider learning full or partial orderings over candidatesthe improved algorithm that we introduced in figure 4 is however a new algorithm that could perhaps be generalized to the full problem of freund et al we leave this to future researchaltun hofmann and johnson and altun johnson and hofmann describe experiments on tagging tasks using the exploss function in contrast to the logloss function used in lafferty mccallum and pereira altun hofmann and johnson describe how dynamic programming methods can be used to calculate gradients of the exploss function even in cases in which the set of candidates again includes all possible tagged sequences a set which grows exponentially in size with the length of the sentence being taggedresults 
in altun johnson and hofmann suggest that the choice of exploss versus logloss does not have a major impact on accuracy for the tagging task in questionperceptronbased algorithms or the voted perceptron approach of freund and schapire are another alternative to boosting and logloss methodssee collins and collins and duffy for applications of the perceptron algorithmcollins gives convergence proofs for the methods collins directly compares the boosting and perceptron approaches on a named entity task and collins and duffy use a reranking approach with kernels which allow representations of parse trees or labeled sequences in veryhighdimensional spacesshen sarkar and joshi describe support vector machine approaches to ranking problems and apply support vector machines using treeadjoining grammar features to the parsing data sets we have described in this article with good empirical resultssee collins for a discussion of many of these methods including an overview of statistical bounds for the boosting perceptron and svm methods as well as a discussion of the computational issues involved in the different algorithmsthis article has introduced a new algorithm based on boosting approaches in machine learning for ranking problems in natural language processingthe approach gives a 13 relative reduction in error on parsing wall street journal datawhile in this article the experimental focus has been on parsing many other problems in natural language processing or speech recognition can also be framed as reranking problems so the methods described should be quite broadly applicablethe boosting approach to ranking has been applied to named entity segmentation and natural language generation the key characteristics of the approach are the use of global features and of a training criterion that is discriminative and closely related to the task at hand in addition the article introduced a new algorithm for the boosting approach which takes advantage of the sparse nature of the feature space in the parsing data that we useother nlp tasks are likely to have similar characteristics in terms of sparsityexperiments show an efficiency gain of a factor of over 2600 on the parsing data for the new algorithm over the obvious implementation of the boosting approachwe would argue that the improved boosting algorithm is a natural alternative to maximumentropy or loglinear modelsthe article has drawn connections between boosting and maximumentropy models in terms of the optimization problems that they involve the algorithms used their relative efficiency and their performance in empirical teststhis appendix gives a derivation of the optimal updates for explossthe derivation is very close to that in schapire and singer recall that for parameter values a we need to compute bestwtðk aþ and bestlossðk aþ for k 14 1 m where bestwtðk aþ 14 arg min explossðupdða k dþþ d and bestlossðk aþ 14 explossðupdða k bestwtðk aþþþ the first thing to note is that an update in parameters from a to updða kdþþ results in a simple additive update to the ranking function f fðxij updða k dþþ 14 fðxij aþ þ dhkðxijþ it follows that the margin on example ði jþ also has a simple update next we note that 12hkðxi1þ hkðxijþ can take on three values 1 1 or 0we split the training sample into three sets depending on this value aþk 14 fðijþ 12hkðxi1þ hkðxijþ 14 1g to find the value of d that minimizes this loss we set the derivative of with respect to d to zero giving the following solution where z exploss pi pni 2 sijemij is a constant which appears 
in the bestloss for all features and therefore does not affect their rankingappendix b an alternative method for logloss in this appendix we sketch an alternative approach for feature selection in logloss that is potentially an efficient method at the cost of introducing an approximation in the feature selection methoduntil now we have defined bestlossðk aþ to be the minimum of the loss given that the kth feature is updated an optimal amount bestlossðk aþ 14 min loglossðupdðak dþþ d in this section we sketch a different approach based on results from collins schapire and singer which leads to an algorithm very similar to that for exploss in figures 3 and 4take the following definitions and with only the definitions for wk and wk being altered note that the exploss computations can be recovered by replacing qij in equation with qij 14 emijðaþthis is the only essential difference between the new algorithm and the exploss methodresults from collins schapire and singer show that under these definitions the following guarantee holds loglossðupdðak bestwtðk aþþþ bestlossðk aþ so it can be seen that the update from a to updða k bestwtðk aþþ is guaranteed to decrease logloss by at least ffiffiffiffiffiffiffi 2from these results the algorithms in figures 3 and 4 could be altered to take the revised definitions of wþk and wk into accountselecting the feature with the minimum value of bestlossðk aþ at each iteration leads to the largest guaranteed decrease in loglossnote that this is now an approximation in that bestlossðk a is an upper bound on the loglikelihood which may or may not be tightthere are convergence guarantees for the method however in that as the number of rounds of feature selection goes to infinity the logloss approaches its minimum valuethe algorithms in figures 3 and 4 could be modified to take the alternative definitions of wþk and wk into account thereby being modified to optimize logloss instead of explossthe denominator terms in the qij definitions in equation may complicate the algorithms somewhat but it should still be possible to derive relatively efficient algorithms using the techniquefor a full derivation of the modified updates and for quite technical convergence proofs see collins schapire and singer we give a sketch of the argument herefirst we show that loglossðupdða k dþþ loglossða wþk wk þ wþk ed þ wk edþ ðbeforeþ equation can be derived from equation through the bound logð1 xþ x for all xthe second step is to minimize the righthand side of the bound in equation with respect to d it can be verified that the minimum is found at at which value the righthand side of equation is equal tothanks to rob schapire and yoram singer for useful discussions on boosting algorithms and to mark johnson for useful discussions about linear models for parse rankingsteve abney and fernando pereira gave useful feedback on earlier drafts of this workfinally thanks to the anonymous reviewers for several useful comments
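as a worked illustration of the quantities defined earlier in this excerpt (the margins m_ij and the error, exploss, and logloss functions built on them), the following minimal python sketch computes each loss directly from candidate ranking scores; it is an editorial sketch rather than the authors' implementation, and the toy scores at the end are invented.

```python
import math

def margins(scores):
    # scores[i] lists ranking scores F(x_ij) for the candidates of sentence i,
    # scores[i][0] being the best parse x_i1; m_ij = F(x_i1) - F(x_ij) for j >= 2
    return [[s[0] - s[j] for j in range(1, len(s))] for s in scores]

def ranking_error(scores):
    # number of competing parses scoring at least as high as the best parse;
    # zero exactly when all margins are positive
    return sum(1 for row in margins(scores) for m in row if m <= 0)

def exp_loss(scores):
    # ExpLoss = sum_ij exp(-m_ij); since exp(-x) >= 1 whenever x <= 0,
    # this upper-bounds the ranking error (importance weights s_ij omitted here)
    return sum(math.exp(-m) for row in margins(scores) for m in row)

def log_loss(scores):
    # negative conditional log-likelihood of the correct parses, which is
    # also a function of the margins: sum_i log(1 + sum_j exp(-m_ij))
    return sum(math.log(1.0 + sum(math.exp(-m) for m in row))
               for row in margins(scores))

scores = [[2.0, 1.0, -0.5], [0.3, 0.8, 0.1]]  # two sentences, three candidates each
print(ranking_error(scores), exp_loss(scores), log_loss(scores))
```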
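the feature templates listed above (rules, bigrams, trigrams, grandparent bigrams, head-modifier pairs with an adjacency flag) can be pictured with a small sketch like the one below; the string encodings, the boundary symbols, and the adjacency test are our own illustrative choices, not the paper's exact definitions.

```python
def rule_features(parent, children, head_index, grandparent=None):
    # instantiate a few of the templates for a context-free rule such as
    # VP -> PP VBD NP NP SBAR with head VBD and grandparent S
    feats = [("rule", parent, tuple(children))]
    padded = ["<s>"] + children + ["</s>"]      # pad with rule boundaries
    for a, b in zip(padded, padded[1:]):
        feats.append(("bigram", parent, a, b))
        if grandparent is not None:
            feats.append(("grandparent-bigram", grandparent, parent, a, b))
    for a, b, c in zip(padded, padded[1:], padded[2:]):
        feats.append(("trigram", parent, a, b, c))
    head = children[head_index]
    for i, mod in enumerate(children):          # head-modifier pairs
        if i != head_index:
            adjacent = int(abs(i - head_index) == 1)   # illustrative adjacency flag
            feats.append(("head-mod", grandparent, parent, head, mod, adjacent))
    return feats

print(rule_features("VP", ["PP", "VBD", "NP", "NP", "SBAR"],
                    head_index=1, grandparent="S"))
```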
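the comparison of the two s_ij weighting schemes discussed in section 5.4.2 above (s_ij = 1 versus s_ij equal to the difference in score between the correct parse and the competing parse) amounts to plugging different weight matrices into the same weighted exploss; a hedged sketch, with invented f-measure-style scores, follows.

```python
import math

def weighted_exp_loss(margin_rows, weight_rows):
    # ExpLoss with per-example importance weights: sum_ij s_ij * exp(-m_ij)
    return sum(s * math.exp(-m)
               for ms, ss in zip(margin_rows, weight_rows)
               for m, s in zip(ms, ss))

def uniform_weights(quality_rows):
    # the simpler default: s_ij = 1 for every competing parse
    return [[1.0] * (len(q) - 1) for q in quality_rows]

def score_difference_weights(quality_rows):
    # s_ij = score(x_i1) - score(x_ij): candidates much worse than the best
    # parse (e.g. by f-measure) weigh more heavily in the loss
    return [[q[0] - q[j] for j in range(1, len(q))] for q in quality_rows]

quality = [[0.95, 0.90, 0.60], [0.88, 0.85, 0.70]]  # invented per-candidate scores
margin_rows = [[1.2, -0.3], [0.4, 0.9]]
print(weighted_exp_loss(margin_rows, uniform_weights(quality)))
print(weighted_exp_loss(margin_rows, score_difference_weights(quality)))
```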
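the efficiency quantities of section 5.4.3 (work, measured in units of one full pass over the training set, and savings between rounds a and b) can be computed from a per-round count of W+/W- updates roughly as sketched below; the update counts in the example are invented, and the exact bookkeeping here is our reading of the definitions given in the text.

```python
def work(updates_per_round, n, T):
    # computation for the first n rounds of the improved algorithm, in units
    # of one full pass over the training set (a naive pass makes T updates)
    return sum(updates_per_round[:n]) / float(T)

def savings(updates_per_round, a, b, T):
    # relative efficiency of the improved algorithm over the naive one
    # between rounds a and b inclusive (1-indexed): the naive algorithm
    # makes one full pass, i.e. T updates, per round of feature selection
    naive = (b - a + 1) * T
    improved = sum(updates_per_round[a - 1:b])
    return naive / float(improved)

updates = [5000, 1200, 800, 300, 250, 100, 90, 60, 40, 30]  # invented per-round counts
T = 10000                                                   # updates per naive pass
print(work(updates, 10, T))        # total work, in passes over the training set
print(savings(updates, 1, 10, T))  # overall speed-up factor
print(savings(updates, 6, 10, T))  # speed-up in the later rounds
```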
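as a point of comparison with the feature selection approach, the log-linear alternative discussed above (logloss plus a gaussian prior, optimized by gradient methods) has an objective that might be sketched as below; the dictionary-based feature representation and the placement of the prior term are our assumptions, and numerical safeguards such as log-sum-exp are omitted for brevity.

```python
import math

def logloss_with_gaussian_prior(alpha, feats, var):
    # feats[i][j] is the feature vector (a dict) of the j-th candidate parse
    # for sentence i, with j = 0 the correct parse; var is the prior variance
    def score(f):
        return sum(alpha.get(k, 0.0) * v for k, v in f.items())
    loss = 0.0
    for cands in feats:
        scores = [score(f) for f in cands]
        log_z = math.log(sum(math.exp(s) for s in scores))
        loss += log_z - scores[0]            # -log p(correct parse | sentence)
    # gaussian (two-norm) prior penalizing large parameter values
    loss += sum(a * a for a in alpha.values()) / (2.0 * var)
    return loss

feats = [[{"f1": 1, "f2": 1}, {"f1": 1}, {"f3": 2}],
         [{"f2": 1}, {"f1": 1, "f3": 1}]]
alpha = {"f1": 0.5, "f2": 0.2, "f3": -0.1}
print(logloss_with_gaussian_prior(alpha, feats, var=1.0))
```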
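the closed-form coordinate update derived in appendix A can be written compactly as follows; with eps = 0 it is the unsmoothed 0.5 * log(W+/W-) of the derivation, and the eps*Z terms are one plausible reading of how the smoothing parameter used in the experiments enters (they also keep the update finite when W- happens to be zero). the tuple-based input format is our own simplification.

```python
import math

def best_update_for_feature(examples, eps=0.0):
    # each example is (m, s, d): margin m, importance weight s, and
    # d = h_k(x_i1) - h_k(x_ij), assumed to lie in {-1, 0, +1} for feature k
    w_plus = w_minus = z = 0.0
    for m, s, d in examples:
        q = s * math.exp(-m)
        if d == +1:
            w_plus += q
        elif d == -1:
            w_minus += q
        else:
            z += q                            # examples untouched by feature k
    total = w_plus + w_minus + z              # Z = ExpLoss under current parameters
    # smoothed update; with eps = 0 this is 0.5 * log(W+ / W-)
    return 0.5 * math.log((w_plus + eps * total) / (w_minus + eps * total))

examples = [(1.0, 1.0, +1), (-0.5, 1.0, -1), (0.2, 1.0, 0), (2.0, 1.0, +1)]
print(best_update_for_feature(examples, eps=0.001))
```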
J05-1003
discriminative reranking for natural language parsing. this article considers approaches which rerank the output of an existing probabilistic parser. the base parser produces a set of candidate parses for each input sentence with associated probabilities that define an initial ranking of these parses. a second model then attempts to improve upon this initial ranking using additional features of the tree as evidence. the strength of our approach is that it allows a tree to be represented as an arbitrary set of features without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. we introduce a new method for the reranking task based on the boosting approach to ranking problems described in freund et al. we apply the boosting method to parsing the wall street journal treebank. the method combined the loglikelihood under a baseline model with evidence from an additional 500,000 features over parse trees that were not included in the original model. the new model achieved 89.75% fmeasure, a 13% relative decrease in fmeasure error over the baseline models score of 88.2%. the article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. we argue that the method is an appealing alternative, in terms of both simplicity and efficiency, to work on feature selection methods within loglinear models. although the experiments in this article are on natural language parsing, the approach should be applicable to many other nlp problems which are naturally framed as ranking tasks, for example speech recognition, machine translation, or natural language generation. we show that applying reranking techniques to the nbest output of a base parser can improve parsing performance. we propose a method that only updates values of features cooccurring with a rule feature on examples at each iteration.
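the summary above describes combining the baseline model's log-likelihood with several hundred thousand additional features of the parse tree; one plausible reading of that ranking function, with invented feature names and weights, is sketched here as a minimal python example.

```python
def ranking_score(log_prob, features, alpha0, alpha):
    # F(x) = alpha0 * L(x) + sum_k alpha_k * h_k(x): the log-probability L(x)
    # assigned by the baseline parser plus a weighted sum of the additional
    # (typically binary) features of the candidate parse tree
    return alpha0 * log_prob + sum(alpha.get(k, 0.0) * v
                                   for k, v in features.items())

def rerank(candidates, alpha0, alpha):
    # candidates: list of (log_prob, feature_dict) pairs for one sentence;
    # return them ordered by the reranking score, best first
    return sorted(candidates,
                  key=lambda c: ranking_score(c[0], c[1], alpha0, alpha),
                  reverse=True)

candidates = [(-20.1, {"head-mod:S,VP,VBD,SBAR": 1}),     # invented features
              (-19.8, {"rule:VP->PP_VBD_NP_NP_SBAR": 1}),
              (-21.5, {})]
print(rerank(candidates, alpha0=1.0, alpha={"head-mod:S,VP,VBD,SBAR": 0.7}))
```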
sentence fusion for multidocument news summarization a system that can produce informative summaries highlighting common information found in many online documents will help web users to pinpoint information that they need without extensive reading in this article we introduce sentence fusion a novel texttotext generation technique for synthesizing common information across documents sentence fusion involves bottomup local multisequence alignment to identify phrases conveying similar information and statistical generation to combine common phrases into a sentence sentence fusion moves the summarization field from the use of purely extractive methods to the generation of abstracts that contain sentences not found in any of the input documents and can synthesize information across sources a system that can produce informative summaries highlighting common information found in many online documents will help web users to pinpoint information that they need without extensive readingin this article we introduce sentence fusion a novel texttotext generation technique for synthesizing common information across documentssentence fusion involves bottomup local multisequence alignment to identify phrases conveying similar information and statistical generation to combine common phrases into a sentencesentence fusion moves the summarization field from the use of purely extractive methods to the generation of abstracts that contain sentences not found in any of the input documents and can synthesize information across sourcesredundancy in large text collections such as the web creates both problems and opportunities for natural language systemson the one hand the presence of numerous sources conveying the same information causes difficulties for end users of search engines and news providers they must read the same information over and over againon the other hand redundancy can be exploited to identify important and accurate information for applications such as summarization and question answering clearly it would be highly desirable to have a mechanism that could identify common information among multiple related documents and fuse it into a coherent textin this article we present a method for sentence fusion that exploits redundancy to achieve this task in the context of multidocument summarizationa straightforward approach for approximating sentence fusion can be found in the use of sentence extraction for multidocument summarization once a system finds a set of sentences that convey similar information one of these sentences is selected to represent the setthis is a robust approach that is always guaranteed to output a grammatical sentencehowever extraction is only a coarse approximation of fusionan extracted sentence may include not only common information but additional information specific to the article from which it came leading to source bias and aggravating fluency problems in the extracted summaryattempting to solve this problem by including more sentences to restore the original context might lead to a verbose and repetitive summaryinstead we want a finegrained approach that can identify only those pieces of sentences that are commonlanguage generation offers an appealing approach to the problem but the use of generation in this context raises significant research challengesin particular generation for sentence fusion must be able to operate in a domainindependent fashion scalable to handle a large variety of input documents with various degrees of overlapin the past generation systems 
were developed for limited domains and required a rich semantic representation as inputin contrast for this task we require texttotext generation the ability to produce a new text given a set of related texts as inputif language generation can be scaled to take fully formed text as input without semantic interpretation selecting content and producing wellformed english sentences as output then generation has a large potential payoffin this article we present the concept of sentence fusion a novel texttotext generation technique which given a set of similar sentences produces a new sentence containing the information common to most sentences in the setthe research challenges in developing such an algorithm lie in two areas identification of the fragments conveying common information and combination of the fragments into a sentenceto identify common information we have developed a method for aligning syntactic trees of input sentences incorporating paraphrasing informationour alignment problem poses unique challenges we only want to match a subset of the subtrees in each sentence and are given few constraints on permissible alignments our algorithm meets these challenges through bottomup local multisequence alignment using words and paraphrases as anchorscombination of fragments is addressed through construction of a fusion lattice encompassing the resulting alignment and linearization of the lattice into a sentence using a language modelour approach to sentence fusion thus features the integration of robust statistical techniques such as local multisequence alignment and language modeling with linguistic representations automatically derived from input documentssentence fusion is a significant first step toward the generation of abstracts as opposed to extracts for multidocument summarizationunlike extraction methods sentence fusion allows for the true synthesis of information from a set of input documentsit has been shown that combining information from several sources is a natural strategy for multidocument summarizationanalysis of humanwritten summaries reveals that most sentences combine information drawn from multiple documents sentence fusion achieves this goal automaticallyour evaluation shows that our approach is promising with sentence fusion outperforming sentence extraction for the task of content selectionthis article focuses on the implementation and evaluation of the sentence fusion method within the multidocument summarization system multigen which daily summarizes multiple news articles on the same event as part1 of columbias news browsing system newsblaster in the next section we provide an overview of multigen focusing on components that produce input or operate over output of sentence fusionin section 3 we provide an overview of our fusion algorithm and detail on its main steps identification of common information fusion lattice computation and lattice linearization evaluation results and their analysis are presented in section 4analysis of the systems output reveals the capabilities and the weaknesses of our texttotext generation method and identifies interesting challenges that will require new insightsan overview of related work and a discussion of future directions conclude the articlesentence fusion is the central technique used within the multigen summarization systemmultigen takes as input a cluster of news stories on the same event and produces a summary which synthesizes common information across input storiesan example of a multigen summary is shown in figure 
1the input clusters are automatically produced from a large quantity of news articles that are retrieved by newsblaster from 30 news sites each dayin order to understand the role of sentence fusion within summarization we overview the multigen architecture providing details on the processes that precede sentence fusion and thus the input that the fusion component requiresfusion itself is discussed in the subsequent sections of the articlemultigen follows a pipeline architecture shown in figure 2the analysis component of the system simfinder clusters sentences of input documents into themes groups of sentences that convey similar information once themes are constructed the system selects a subset of the groups to be included in the summary depending on the desired compression an example of multigen summary as shown in the columbia newsblaster interfacesummary phrases are followed by parenthetical numbers indicating their source articlesthe last sentence is extracted because it was repeated verbatim in several input articlesmultigen architecture length the selected groups are passed to the ordering component which selects a complete order among themes the analysis component of multigen simfinder identifies themes groups of sentences from different documents that each say roughly the same thingeach theme will ultimately correspond to at most one sentence in the output summary generated by the fusion component and there may be many themes for a set of articlesan example of a theme is shown in table 1as the set of sentences in the table illustrates sentences within a theme are not exact repetitions of each other they usually include phrases expressing information that is not common to all sentences in the themeinformation that is common across sentences is shown in the table in boldface other portions of the sentence are specific to individual articlesif one of these sentences were used as is to represent the theme the summary would contain extraneous informationalso errors in clustering might result in the inclusion of some unrelated sentencesevaluation involving human judges revealed that simfinder identifies similar sentences with 493 precision at 529 recall we will discuss later how this error rate influences sentence fusionto identify themes simfinder extracts linguistically motivated features for each sentence including wordnet synsets and syntactic dependencies such as subjectverb and verbobject relationsa loglinear regression model is used to combine the evidence from the various features into a single similarity valuethe model was trained on a large set of sentences which were manually marked for similaritythe output of the model is a listing of realvalued similarity values on sentence pairsthese similarity values are fed into a clustering algorithm that partitions the sentences into closely related groupstheme with corresponding fusion sentenceto generate a summary of predetermined length we induce a ranking on the themes and select the n highest2 this ranking is based on three features of the theme size measured as the number of sentences similarity of sentences in a theme and salience scorethe first two of these scores are produced by simfinder and the salience score is computed using lexical chains as described belowcombining different rankings further filters common information in terms of saliencesince each of these scores has a different range of values we perform ranking based on each score separately then induce total ranking by summing ranks from individual categories rank 
rank rank rank lexical chainssequences of semantically related wordsare tightly connected to the lexical cohesive structure of the text and have been shown to be useful for determining which sentences are important for singledocument summarization in the multidocument scenario lexical chains can be adapted for theme ranking based on the salience of theme sentences within their original documentsspecifically a theme that has many sentences ranked high by lexical chains as important for a singledocument summary is in turn given a higher salience score for the multidocument summaryin our implementation a salience score for a theme is computed as the sum of lexical chain scores of each sentence in a themeonce we filter out the themes that have a low rank the next task is to order the selected themes into coherent textour ordering strategy aims to capture chronological order of the main events and ensure coherenceto implement this strategy in multigen we select for each theme the sentence which has the earliest publication time to increase the coherence of the output text we identify blocks of topically related themes and then apply chronological ordering on blocks of themes using theme time stamps these stages produce a sorted set of themes which are passed as input to the sentence fusion component described in the next sectiongiven a group of similar sentencesa themethe problem is to create a concise and fluent fusion of information reflecting facts common to all sentencesto achieve this goal we need to identify phrases common to most theme sentences then combine them into a new sentenceat one extreme we might consider a shallow approach to the fusion problem adapting the bag of words approachhowever sentence intersection in a settheoretic sense produces poor resultsfor example the intersection of the first two sentences from the theme shown in table 1 is besides its being ungrammatical it is impossible to understand what event this intersection describesthe inadequacy of the bagofwords method to the fusion task demonstrates the need for a more linguistically motivated approachat the other extreme previous approaches have demonstrated that this task is feasible when a detailed semantic representation of the input sentences is availablehowever these approaches operate in a limited domain where information extraction systems can be used to interpret the source textthe task of mapping input text into a semantic representation in a domainindependent setting extends well beyond the ability of current analysis methodsthese considerations suggest that we need a new method for the sentence fusion taskideally such a method would not require a full semantic representationrather it would rely on input texts and shallow linguistic knowledge that can be automatically derived from a corpus to generate a fusion sentencein our approach sentence fusion is modeled after the typical generation pipeline content selection and surface realization in contrast to that involved in traditional generation systems in which a content selection component chooses content from semantic units our task is complicated by the lack of semantics in the textual inputat the same time we can benefit from the textual information given in the input sentences for the tasks of syntactic realization phrasing and ordering in many cases constraints on text realization are already present in the inputthe algorithm operates in three phases content selection occurs primarily in the first phase in which our algorithm uses local alignment 
across pairs of parsed sentences from which we select fragments to be included in the fusion sentenceinstead of examining all possible ways to combine these fragments we select a sentence in the input which contains most of the fragments and transform its parsed tree into the fusion lattice by eliminating nonessential information and augmenting it with information from other input sentencesthis construction of the fusion lattice targets content selection but in the process alternative verbalizations are selected and thus some aspects of realization are also carried out in this phasefinally we generate a sentence from this representation based on a language model derived from a large body of textsour task is to identify information shared between sentenceswe do this by aligning constituents in the syntactic parse trees for the input sentencesour alignment process differs considerably from alignment for other nl tasks such as machine translation because we cannot expect a complete alignmentrather a subset of the subtrees in one sentence will match different subsets of the subtrees in the othersfurthermore order across trees is not preserved there is no natural starting point for alignment and there are no constraints on crossesfor these reasons we have developed a bottomup local multisequence alignment algorithm that uses words and phrases as anchors for matchingthis algorithm operates on the dependency trees for pairs of input sentenceswe use a dependencybased representation because it abstracts over features irrelevant for comparison such as constituent orderingin the subsections that follow we describe first how this representation is computed then how dependency subtrees are aligned and finally how we choose between constituents conveying overlapping informationin this section we first describe an algorithm which given a pair of sentences determines which sentence constituents convey information appearing in both sentencesthis algorithm will be applied to pairwise combinations of sentences in the input set of related sentencesthe intuition behind the algorithm is to compare all constituents of one sentence to those of another and select the most similar onesof course how this comparison is performed depends on the particular sentence representation useda good sentence representation will emphasize sentence features that are relevant for comparison such as dependencies between sentence constituents while ignoring irrelevant features such as constituent orderinga representation which fits these requirements is a dependencybased representation we first detail how this representation is computed then describe a method for aligning dependency subtrees311 sentence representationour sentence representation is based on a dependency tree which describes the sentence structure in terms of dependencies between wordsthe similarity of the dependency tree to a predicateargument structure makes it a natural representation for our comparison3 this representation can be constructed from the output of a traditional parserin fact we have developed a rulebased component that transforms the phrase structure output of collinss parser into a representation in which a node has a direct link to its dependentswe also mark verb subject and verbnode dependencies in the treethe process of comparing trees can be further facilitated if the dependency tree is abstracted to a canonical form which eliminates features irrelevant to the comparisonwe hypothesize that the difference in grammatical features such as 
auxiliaries number and tense has a secondary effect when the meaning of sentences is being comparedtherefore we represent in the dependency tree only nonauxiliary words with their associated grammatical featuresfor nouns we record their number articles and class for verbs we record tense mood voice polarity aspect and taxis the eliminated auxiliary words can be recreated using these recorded featureswe also transform all passivevoice sentences to the active voice changing the order of affected childrenwhile the alignment algorithm described in section 312 produces onetoone mappings in practice some paraphrases are not decomposable to words forming onetomany or manytomany paraphrasesour manual analysis of paraphrased sentences revealed that such alignments most frequently occur in pairs of noun phrases and pairs including verbs with particles to correctly align such phrases we flatten subtrees containing noun phrases and verbs with particles into one nodewe subsequently determine matches between flattened sentences using statistical metricsdependency tree of the sentence the idf spokeswoman did not confirm this but said the palestinians fired an antitank missile at a bulldozer on the sitethe features of the node confirm are explicitly markedan example of a sentence and its dependency tree with associated features is shown in figure 3312 alignmentour alignment of dependency trees is driven by two sources of information the similarity between the structure of the dependency trees and the similarity between lexical itemsin determining the structural similarity between two trees we take into account the types of edges an edge is labeled by the syntactic function of the two nodes it connects it is unlikely that an edge connecting a subject and verb in one sentence for example corresponds to an edge connecting a verb and an adjective in another sentencethe word similarity measures take into account more than word identity they also identify pairs of paraphrases using wordnet and a paraphrasing dictionarywe automatically constructed the paraphrasing dictionary from a large comparable news corpus using the cotraining method described in barzilay and mckeown the dictionary contains pairs of wordlevel paraphrases as well as phraselevel paraphrases4 several examples of automatically extracted paraphrases are given in table 2during alignment each pair of nonidentical words that do not comprise a synset in wordnet is looked up in the paraphrasing dictionary in the case of a match the pair is considered to be a paraphrasewe now give an intuitive explanation of how our tree similarity function denoted by sim is computedif the optimal alignment of two trees is known then the value of the similarity function is the sum of the similarity scores of aligned nodes and aligned edgessince the best alignment of given trees is not known a priori we select the maximal score among plausible alignments of the treesinstead of exhaustively traversing the space of all possible alignments we recursively construct the best alignment for trees of given depths assuming that we know how to find an optimal alignment for trees of shorter depthsmore specifically at each point of the traversal we consider two cases shown in figure 4in the first case two top nodes are aligned with each other and their children are aligned in an optimal way by applying the algorithm to shorter treesin the second case one tree is aligned with one of the children of the top node of the other tree again we can apply our algorithm for this computation 
since we decrease the height of one of the treesbefore giving the precise definition of sim we introduce some notationwhen t is a tree with root node v we let c denote the set containing all children of v for a tree t containing a node s the subtree of t which has s as its root node is denoted by tstree alignment computationin the first case two tops are aligned while in the second case the top of one tree is aligned with a child of another treegiven two trees t and t with root nodes v and v respectively the similarity sim between the trees is defined to be the maximum of the three expressions nodecompare maxsc sim and maxsc simthe upper part of figure 4 depicts the computation of nodecompare in which two top nodes are aligned with each otherthe remaining expressions maxsc sim and maxsc sim capture mappings in which the top of one tree is aligned with one of the children of the top node of the other tree the maximization in the nodecompare formula searches for the best possible alignment for the child nodes of the given pair of nodes and is defined by where m is the set of all possible matchings between a and a and a matching is a subset m of a x a such that for any two distinct elements e m both a b and a bin the base case when one of the trees has depth one nodecompare is defined to be nodesimilaritythe similarity score nodesimilarity of atomic nodes depends on whether the corresponding words are identical paraphrases or unrelatedthe similarity scores for pairs of identical words pairs of synonyms pairs of paraphrases and edges are manually derived using a small development corpuswhile learning of the similarity scores automatically is an appealing alternative its application in the fusion context is challenging because of the absence of a large training corpus and the lack of an automatic evaluation function5 the similarity of nodes containing flattened subtrees6 such as noun phrases is computed as the score of their intersection normalized by the length of the longest phrasefor instance the similarity score of the noun phrases antitank missile and machine gun and antitank missile is computed as a ratio between the score of their intersection antitank missile divided by the length of the latter phrase the similarity function sim is computed using bottomup dynamic programming in which the shortest subtrees are processed firstthe alignment algorithm returns the similarity score of the trees as well as the optimal mapping between the subtrees of input treesthe pseudocode of this function is presented in the appendixin the resulting tree mapping the pairs of nodes whose nodesimilarity positively contributed to the alignment are considered parallelfigure 5 shows two dependency trees and their alignmentas is evident from the sim definition we are considering only onetoone node matchings every node in one tree is mapped to at most one node in another treethis restriction is necessary because the problem of optimizing manytomany alignments is nphard7 the subtree flattening performed during the preprocessing stage aims to minimize the negative effect of the restriction on alignment granularityanother important property of our algorithm is that it produces a local alignmentlocal alignment maps local regions with high similarity to each other rather than creating an overall optimal global alignment of the entire treethis strategy is more meaningful when only partial meaning overlap is expected between input sentences as in typical sentence fusion inputonly these highsimilarity regions which we 
call intersection subtrees are included in the fusion sentencefusion lattice computation is concerned with combining intersection subtreesduring this process the system will remove phrases from a selected sentence add phrases from other sentences and replace words with the paraphrases that annotate each nodeamong the many possible combinations of subtrees we are interested only in those combinations which yield semantically sound sentences and do not distort the information presented in the input sentenceswe cannot explore every possible combination since the lack of semantic information in the trees prohibits us from assessing the quality of the resulting sentencesin fact our early experimentation with generation from constituent phrases demonstrated that it was difficult to ensure that semantically anomalous or ungrammatical sentences would not be generatedinstead we select a combination already present in the input sentences as a basis and transform it into a fusion sentence by removing extraneous information and augmenting the fusion sentence with information from other sentencesthe advantage of this strategy is that when the initial sentence is semantically correct and the applied transformations aim to preserve semantic correctness the resulting sentence is a semantically correct oneour generation strategy is reminiscent of robin and mckeowns earlier work on revision for summarization although robin and mckeown used a threetiered representation of each sentence including its semantics and its deep and surface syntax all of which were used as triggers for revisionthe three steps of the fusion lattice computation are as follows selection of the basis tree augmentation of the tree with alternative verbalizations and pruning of 7 the complexity of our algorithm is polynomial in the number of nodeslet n1 denote the number of nodes in the first tree and n2 denote the number of nodes in the second treewe assume that the branching factor of a parse tree is bounded above by a constantthe function nodecompare is evaluated only once on each node pairtherefore it is evaluated n1 n2 times totallyeach evaluation is computed in constant time assuming that values of the function for node children are knownsince we use memoization the total time of the procedure is otwo dependency trees and their alignment treesolid lines represent aligned edgesdotted and dashed lines represent unaligned edges of the theme sentences the extraneous subtreesalignment is essential for all the stepsthe selection of the basis tree is guided by the number of intersection subtrees it includes in the best case it contains all such subtreesthe basis tree is the centroid of the input sentences the sentence which is the most similar to the other sentences in the inputusing the alignmentbased similarity score described in section 312 we identify the centroid by computing for each sentence the average similarity score between the sentence and the rest of the input sentences then selecting the sentence with the highest scorenext we augment the basis tree with information present in the other input sentencesmore specifically we add alternative verbalizations for the nodes in the basis tree and the intersection subtrees which are not part of the basis treethe alternative verbalizations are readily available from the pairwise alignments of the basis tree with other trees in the input computed in the previous sectionfor each node of the basis tree we record all verbalizations from the nodes of the other input trees aligned with a 
given nodea verbalization can be a single word or it can be a phrase if a node represents a noun compound or a verb with a particlean example of a fusion lattice augmented a basis lattice before and after augmentationsolid lines represent aligned edges of the basis treedashed lines represent unaligned edges of the basis tree and dotted lines represent insertions from other theme sentencesadded subtrees correspond to sentences from table 1 with alternative verbalizations is given in figure 6even after this augmentation the fusion lattice may not include all of the intersection subtreesthe main difficulty in subtree insertion is finding an acceptable placement this is often determined by syntactic semantic and idiosyncratic knowledgetherefore we follow a conservative insertion policyamong all the possible aligned sentences we insert only subtrees whose top node aligns with one of the nodes in a basis tree8 we further constrain the insertion procedure by inserting only trees that appear in at least half of the sentences of a themethese two constituentlevel restrictions prevent the algorithm from generating overly long unreadable sentences9 finally subtrees which are not part of the intersection are pruned off the basis treehowever removing all such subtrees may result in an ungrammatical or semantically flawed sentence for example we might create a sentence without a subjectthis overpruning may happen if either the input to the fusion algorithm is noisy or the alignment has failed to recognize similar subtreestherefore we perform a more conservative pruning deleting only the selfcontained components which can be removed without leaving ungrammatical sentencesas previously observed in the literature such components include a clause in the clause conjunction relative clauses and some elements within a clause for example this procedure transforms the lattice in figure 6 into the pruned basis lattice shown in figure 7 by deleting the clause the clash erupted and the verb phrase to better protect israeli forcesthese phrases are eliminated because they do not appear in the other sentences of the theme and at the same time their removal does not interfere with the wellformedness of the fusion sentenceonce these subtrees are removed the fusion lattice construction is completedthe final stage in sentence fusion is linearization of the fusion latticesentence generation includes selection of a tree traversal order lexical choice among available alternatives and placement of auxiliaries such as determinersour generation method utilizes information given in the input sentences to restrict the search space and then chooses among remaining alternatives using a language model derived from a large text collectionwe first motivate the need for reordering and rephrasing then discuss our implementationfor the wordordering task we do not have to consider all the possible traversals since the number of valid traversals is limited by ordering constraints encoded in the fusion latticehowever the basis lattice does not uniquely determine the ordering the placement of trees inserted in the basis lattice from other theme sentences is not restricted by the original basis treewhile the ordering of many sentence constituents is determined by their syntactic roles some constituents such as time location and manner circumstantials are free to move therefore the algorithm still has to select an appropriate order from among different orders of the inserted treesthe process so far produces a sentence that can be quite 
different from the extracted sentence although the basis sentences provides guidance for the generation process constituents may be removed added in or reorderedwording can also be modified during this processalthough the selection of words and phrases which appear in the basis tree is a safe choice enriching the fusion sentence with alternative verbalizations has several benefitsin applications such as summarization in which the length of the produced sentence is a factor a shorter alternative is desirablethis goal can be achieved by selecting the shortest paraphrase among available alternativesalternate verbalizations can also be used to replace anaphoric expressions for instance a pruned basis lattice when the basis tree contains a noun phrase with anaphoric expressions and one of the other verbalizations is anaphorafreesubstitution of the latter for the anaphoric expression may increase the clarity of the produced sentence since frequently the antecedent of the anaphoric expression is not present in a summarymoreover in some cases substitution is mandatoryas a result of subtree insertions and deletions the words used in the basis tree may not be a good choice after the transformations and the best verbalization might be achieved by using a paraphrase of them from another theme sentenceas an example consider the case of two paraphrasing verbs with different subcategorization frames such as tell and sayif the phrase our correspondent is removed from the sentence sharon told our correspondent that the elections were delayed a replacement of the verb told with said yields a more readable sentencethe task of auxiliary placement is alleviated by the presence of features stored in the input nodesin most cases aligned words stored in the same node have the same feature values which uniquely determine an auxiliary selection and conjugationhowever in some cases aligned words have different grammatical features in which case the linearization algorithm needs to select among available alternativeslinearization of the fusion sentence involves the selection of the best phrasing and placement of auxiliaries as well as the determination of optimal orderingsince we do not have sufficient semantic information to perform such selection our algorithm is driven by corpusderived knowledgewe generate all possible sentences10 from the valid traversals of the fusion lattice and score their likelihood according to statistics derived from a corpusthis approach originally proposed by knight and hatzivassiloglou and langkilde and knight is a standard method used in statistical generationwe trained a trigram model with goodturing smoothing over 60 megabytes of news articles collected by newsblaster using the second version cmucambridge statistical language modeling toolkit the sentence with the lowest lengthnormalized entropy is selected as the verbalization of the fusion latticetable 4 shows several verbalizations produced by our algorithm from the central tree in figure 7here we can see that the lowestscoring sentence is both grammatical and concisetable 4 also illustrates that entropybased scoring does not always correlate with the quality of the generated sentencefor example the fifth sentence in table 4 palestinians fired antitank missile at a bulldozer to build a new embankment in the areais not a wellformed sentence however our language model gave it a better score than its wellformed alternatives the second and the third sentences despite these shortcomings we preferred entropybased scoring to symbolic 
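The linearization step described here scores every sentence read off the lattice with the trigram model and keeps the one with the lowest length-normalized entropy. A small sketch of that selection, assuming a function trigram_logprob(w, u, v) that returns the smoothed log-probability log P(w | u, v) from the trained model (the toolkit interface is abstracted away):

```python
def length_normalized_entropy(words, trigram_logprob):
    """Average negative log-probability per token under a trigram model;
    `trigram_logprob(w, u, v)` is assumed to return log P(w | u, v)."""
    padded = ["<s>", "<s>"] + list(words) + ["</s>"]
    total = 0.0
    for i in range(2, len(padded)):
        total += trigram_logprob(padded[i], padded[i - 2], padded[i - 1])
    return -total / (len(padded) - 2)

def best_verbalization(candidate_sentences, trigram_logprob):
    """Pick the lattice traversal with the lowest length-normalized entropy."""
    return min(candidate_sentences,
               key=lambda words: length_normalized_entropy(words, trigram_logprob))
```

As the surrounding discussion of Table 4 notes, this score does not always track sentence quality, which is part of the motivation for the syntax-based language models mentioned later.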
linearizationin the next section we motivate our choice331 statistical versus symbolic linearizationin the previous version of the system we performed linearization of a fusion dependency structure using the language generator fufsurge as a largescale linearizer used in many traditional semantictotext generation systems fufsurge could be an appealing solution to the task of surface realizationbecause the input structure and the requirements on the linearizer are quite different in texttotext generation we had to design rules for mapping between dependency structures produced by the fusion component and fufsurge inputfor instance fufsurge requires that the input contain a semantic role for prepositional phrases such as manner purpose or location which is not present in our dependency representation thus we had to augment the dependency representation with this informationin the case of inaccurate prediction or the lack of relevant semantic information the linearizer scrambles the order of sentence constituents selects wrong prepositions or even fails to generate an outputanother feature of the fufsurge system that negatively influences system performance is its limited ability to reuse phrases readily available in the input instead of generating every phrase from scratchthis makes the generation process more complex and thus prone to errorwhile the initial experiments conducted on a set of manually constructed themes seemed promising the system performance deteriorated significantly when it was applied to automatically constructed themesour experience led us to believe that transformation of an arbitrary sentence into a fufsurge input representation is similar in its complexity to semantic parsing a challenging problem in its own rightrather than refining the mapping mechanism we modified multigen to use a statistical linearization component which handles uncertainty and noise in the input in a more robust wayin our previous work we evaluated the overall summarization strategy of multigen in multiple experiments including comparisons with humanwritten summaries in the document understanding conference 11 evaluation and quality assessment in the context of a particular information access task in the newsblaster framework in this article we aim to evaluate the sentence fusion algorithm in isolation from other system components we analyze the algorithm performance in terms of content selection and the grammaticality of the produced sentenceswe first present our evaluation methodology then we describe our data the results and our analysis of them 411 construction of a reference sentencewe evaluated content selection by comparing an automatically generated sentence with a reference sentencethe reference sentence was produced by a human who was instructed to generate a sentence conveying information common to many sentences in a themethe rfa was not familiar with the fusion algorithmthe rfa was provided with the list of theme sentences the original documents were not includedthe instructions given to the rfa included several examples of themes with fusion sentences generated by the authorseven though the rfa was not instructed to use phrases from input sentences the sentences presented as examples reused many phrases from the input sentenceswe believe that phrase reuse elucidates the connection between input sentences and a resulting fusion sentencetwo examples of themes reference sentences and system outputs are shown in table 5examples from the test seteach example contains a theme a reference 
sentence generated by the RFA, and a sentence generated by the system. Subscripts in the system-generated sentence represent the theme sentence from which a word was extracted.

4.1.2 Additional Filters. First, we excluded themes that contained identical or nearly identical sentences: when processing such sentences, our algorithm reduces to sentence extraction, which does not allow us to evaluate the generation abilities of our algorithm. Second, themes for which the RFA was unable to create a reference sentence were also removed from the test set. As mentioned above, SimFinder does not always produce accurate themes, and therefore the RFA could choose not to generate a reference sentence if the theme sentences had too little in common. An example of a theme for which no sentence was generated is shown in Table 6. As a result of this filtering, 34 of the sentences were removed.

4.1.3 Baselines. In addition to the system-generated sentence, we also included in the evaluation a fusion sentence generated by another human and three baselines. The first baseline is the shortest sentence among the theme sentences, which is obviously grammatical, and it also has a good chance of being representative of common topics conveyed in the input. The second baseline is produced by a simplification of our algorithm in which paraphrase information is omitted during the alignment process. This baseline is included to capture the contribution of paraphrase information to the performance of the fusion algorithm. The third baseline consists of the basis sentence. The comparison with this baseline reveals the contribution of the insertion and deletion stages in the fusion algorithm. The comparison against an RFA2 sentence provides an upper bound on the performance of the system and baselines; in addition, this comparison sheds light on the human agreement on this task.

Table 6. An example of noisy SimFinder output.
The shares have fallen 60 this year.
They said Qwest was forcing them to exchange their bonds at a fraction of face value, between 525 and 825 depending on the bond, or else fall lower in the pecking order for repayment in case Qwest went broke.
Qwest had offered to exchange up to 129 billion of the old bonds, which carried interest rates between 5875 and 79.
The new debt carries rates between 13 and 14.
Their yield fell to about 1522 from 1598.

…tence, along with the corresponding reference sentence. The judge also had access to the original theme from which these sentences were generated. The order of the presentation was randomized across themes and peer systems. Reference and peer sentences were divided into clauses by the authors. The judges assessed overlap at the clause level between reference and peer sentences; the wording of the instructions was inspired by the DUC instructions for clause comparison. For each clause in the reference sentence, the judge decided whether the meaning of a corresponding clause was conveyed in the peer sentence. In addition to a score of 0 for no overlap and 1 for full overlap, this framework allows for partial overlap, with a score of 0.5. From the overlap data we computed weighted recall and precision based on fractional counts: recall is the ratio of the weighted clause overlap between a peer and a reference sentence to the number of clauses in the reference sentence; precision is the ratio of the weighted clause overlap between a peer and a reference sentence to the number of clauses in the peer sentence (a computational sketch is given below).

4.1.5 Grammaticality Assessment. Grammaticality was rated in three categories: grammatical, partially grammatical, and not grammatical. The judge was instructed to rate a sentence in the
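The weighted recall and precision defined above can be computed directly from the per-clause overlap judgments; the F-measure reported later is assumed to be the usual harmonic mean. A minimal sketch:

```python
def clause_overlap_scores(overlaps, num_peer_clauses):
    """`overlaps` holds one judgment per clause of the reference sentence:
    0 (no overlap with the peer sentence), 0.5 (partial), or 1 (full)."""
    weighted = float(sum(overlaps))
    recall = weighted / len(overlaps)          # clauses in the reference sentence
    precision = weighted / num_peer_clauses    # clauses in the peer sentence
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```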
grammatical category if it contained no grammatical mistakespartially grammatical included sentences that contained at most one mistake in agreement articles and tense realizationthe not grammatical category included sentences that were corrupted by multiple mistakes of the former type by erroneous component order or by the omission of important components punctuation is one issue in assessing grammaticalityimproper placement of punctuation is a limitation of our implementation of the sentence fusion algorithm that we are well aware of13 therefore in our grammaticality evaluation the judge was asked to ignore punctuationto evaluate our sentence fusion algorithm we selected 100 themes following the procedure described in the previous sectioneach set varied from three to seven sentences with 422 sentences on averagethe generated fusion sentences consisted of 191 clauses on averagenone of the sentences in the test set were fully extracted on average each sentence fused fragments from 214 theme sentencesout of 100 sentence 57 sentences produced by the algorithm combined phrases from several sentences while the rest of the sentences comprised subsequences of one of the theme sentenceswe included these sentences in the evaluation because they reflect both content selection and realization capacities of the algorithmtable 5 shows two sentences from the test corpus along with input sentencesthe examples are chosen so as to reflect good and badperformance casesnote that the first example results in inclusion of the essential information and leaves out details the problematic example incorrectly selects the number of people killed as six even though this number is not repeated and different numbers are referred to in the textthis mistake is caused by a noisy entry in our paraphrasing dictionary which erroneously identifies five and six as paraphrases of each othertable 7 shows the length ratio precision recall fmeasure and grammaticality score for each algorithmthe length ratio of a sentence was computed as the ratio of its output length to the average length of the theme input sentencesthe results in table 7 demonstrate that sentences manually generated by the second human participant not only are the shortest but are also closest to the reference sentence in terms of selected informationthe tight connection14 between sentences generated by the rfas establishes a high upper bound for the fusion taskwhile neither our system nor the baselines were able to reach this level of performance the fusion algorithm clearly outperforms all the baselines in terms of content selection at a reasonable level of compressionthe performance of baseline 1 and baseline 2 demonstrates that neither the shortest sentence nor the basis sentence is an adequate substitution for fusion in terms of content selectionthe gap in recall between our system and baseline 3 confirms our hypothesis about the importance of paraphrasing information for the fusion processomission of paraphrases causes an 8 drop in recall due to the inability to match equivalent phrases with different wordingtable 7 also reveals a downside of the fusion algorithm automatically generated sentences contain grammatical errors unlike fully extracted humanwritten sentencesgiven the high sensitivity of humans to processing ungrammatical sentences one has to consider the benefits of flexible information selection against the decrease in readability of the generated sentencessentence fusion may not be a worthy direction to pursue if low grammaticality is 
intrinsic to the algorithm and its correction requires 14 we cannot apply kappa statistics for measuring agreement in the content selection task since the event space is not welldefinedthis prevents us from computing the probability of random agreementevaluation results for a humancrafted fusion sentence our system output the shortest sentence in the theme the basis sentence and a simplified version of our algorithm without paraphrasing information knowledge which cannot be automatically acquiredin the remainder of the section we show that this is not the caseour manual analysis of generated sentences revealed that most of the grammatical mistakes are caused by the linearization component or more specifically by suboptimal scoring of the language modellanguage modeling is an active area of research and we believe that advances in this direction will be able to dramatically boost the linearization capacity of our algorithm441 error analysisin this section we discuss the results of our manual analysis of mistakes in content selection and surface realizationnote that in some cases multiple errors are entwined in one sentence which makes it hard to distinguish between a sequence of independent mistakes and a becauseandeffect chaintherefore the presented counts should be viewed as approximations rather than precise numberswe start with the analysis of the test set and continue with the description of some interesting mistakes that we encountered during system developmentmistakes in content selectionmost of the mistakes in content selection can be attributed to problems with alignmentin most cases erroneous alignments missed relevant word mappings as a result of the lack of a corresponding entry in our paraphrasing resourcesat the same time mapping of unrelated words was quite rare this performance level is quite predictable given the accuracy of an automatically constructed dictionary and limited coverage of wordneteven in the presence of accurate lexical information the algorithm occasionally produced suboptimal alignments because of the simplicity of our weighting scheme which supports limited forms of mapping typology and also uses manually assigned weightsanother source of errors was the algorithms inability to handle manytomany alignmentsnamely two trees conveying the same meaning may not be decomposable into the nodelevel mappings which our algorithm aims to computefor example the mapping between the sentences in table 8 expressed by the rule x denied claims by y x said that ys claim was untrue cannot be decomposed into smaller matching unitsat least two mistakes resulted from noisy preprocessing in addition to alignment overcutting during lattice pruning caused the omission of three clauses that were present in the corresponding reference sentencesthe sentence conservatives were cheering language is an example of an incomplete sentence derived from the following input sentence conservatives were cheering language in the final version syria denied claims by israeli prime minister ariel sharon the syrian spokesman said that sharons claim was untrue that ensures that onethird of all funds for prevention programs be used to promote abstinencethe omission of a relative clause was possible because some sentences in the input theme contained the noun language without any relative clausesmistakes in surface realizationgrammatical mistakes included incorrect selection of determiners erroneous word ordering omission of essential sentence constituents and incorrect realization of negation 
constructions and tensethese mistakes originated during linearization of the lattice and were caused either by incompleteness of the linearizer or by suboptimal scoring of the language modelmistakes of the first type are caused by missing rules for generating auxiliaries given node featuresan example of this phenomenon is the sentence the coalition to have play a central role which verbalizes the verb construction will have to play incorrectlyour linearizer lacks the completeness of existing applicationindependent linearizers such as the unificationbased fufsurge and the probabilistic fergus unfortunately we were unable to reuse any of the existing largescale linearizers because of significant structural differences between input expected by these linearizers and the format of a fusion latticewe are currently working on adapting fergus for the sentence fusion taskmistakes related to suboptimal scoring were the most common in these cases a language model selected illformed sentences assigning a worse score to a better sentencethe sentence the diplomats were given to leave the country in 10 days illustrates a suboptimal linearization of the fusion latticethe correct linearizationsthe diplomats were given 10 days to leave the country and the diplomats were ordered to leave the country in 10 dayswere present in the fusion lattice but the language model picked the incorrect verbalizationwe found that in 27 cases the optimal verbalizations were ranked below the top10 sentences ranked by the language modelwe believe that more powerful language models that incorporate linguistic knowledge can improve the quality of generated sentences pagewe have noted a number of interesting errors that crop up from time to time that seem to require information about the full syntactic parse semantics or even discourseconsider for example the last sentence from a summary entitled estrogenprogestin supplements now linked to dementia which is shown in table 9this sentence was created by sentence fusion and clearly there is a problemcertainly there was a study finding the risk of dementia in women who took one type of combined hormone pill but it was not the government study which was abruptly halted last summerin looking at the two sentences from which this summary sentence was drawn we can see that there is a good amount of overlap between the two but the component does not have enough information about the referents of the different terms to know that two different an example of wrong reference selectionsubscripts in the generated sentence indicate the theme sentence from which the words were extracted1 last summer a government study was abruptly halted after finding an increased risk of breast cancer heart attacks and strokes in women who took one type of combined hormone pill2 the most common form of hormone replacement therapy already linked to breast cancer stroke and heart disease does not improve mental functioning as some earlier studies suggested and may increase the risk of dementia researchers said on tuesdaysystem last1 summer1 a1 government1 study1 abruptly1 was1 halted1 after1 finding1 the2 risk2 of2 dementia2 in1 women1 who1 took1 one1 type1 of1 combined1 hormone1 pill1 studies are involved and that fusion should not take placeone topic of our future work is the problem of reference and summarizationanother example is shown in table 10here again the problem is referencethe first error is in the references to the segmentsthe two uses of segments in the first source document sentence do not refer to 
the same entity and thus when the modifier is dropped we get an anomalythe second more unusual problem is in the equation of clintondole doleclinton and clinton and doleunlike traditional concepttotext generation approaches texttotext generation methods take text as input and transform it into a new text satisfying some constraints in addition to sentence fusion compression algorithms and methods for expansion of a multiparallel corpus are other instances of such methodscompression methods have been developed for singledocument summarization and they aim to reduce a sentence by eliminating constituents which are not crucial for understanding the sentence and not salient enough to include in the summarythese approaches are based on the observation that the importance of a sentence constituent can often be determined based on shallow features such as its syntactic role and the words it containsfor example in many cases a relative clause that is stopped airing in 1979 but will instead be called clintondole one week and doleclinton the next week peripheral to the central point of the document can be removed from a sentence without significantly distorting its meaningwhile earlier approaches for text compression were based on symbolic reduction rules more recent approaches use an aligned corpus of documents and their human written summaries to determine which constituents can be reduced the summary sentences which have been manually compressed are aligned with the original sentences from which they were drawnknight and marcu treat reduction as a translation process using a noisychannel model in this model a short string is treated as a source and additions to this string are considered to be noisethe probability of a source string s is computed by combining a standard probabilistic contextfree grammar score which is derived from the grammar rules that yielded tree s and a wordbigram score computed over the leaves of the treethe stochastic channel model creates a large tree t from a smaller tree s by choosing an extension template for each node based on the labels of the node and its childrenin the decoding stage the system searches for the short string s that maximizes p which is equivalent to maximizing p pwhile this approach exploits only syntactic and lexical information jing and mckeown also rely on cohesion information derived from word distribution in a text phrases that are linked to a local context are retained while phrases that have no such links are droppedanother difference between these two methods is the extensive use of knowledge resources in the latterfor example a lexicon is used to identify which components of the sentence are obligatory to keep it grammatically correctthe corpus in this approach is used to estimate the degree to which a fragment is extraneous and can be omitted from a summarya phrase is removed only if it is not grammatically obligatory is not linked to a local context and has a reasonable probability of being removed by humansin addition to reducing the original sentences jing and mckeown use a number of manually compiled rules to aggregate reduced sentences for example reduced clauses might be conjoined with andsentence fusion exhibits similarities with compression algorithms in the ways in which it copes with the lack of semantic data in the generation process relying on shallow analysis of the input and statistics derived from a corpusclearly the difference in the nature of both tasks and in the type of input they expect dictates the use of different 
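The decoding objective in Knight and Marcu's noisy-channel model, whose notation is garbled in the passage above, is presumably the standard Bayes decomposition; a hedged reconstruction, with s the short string and t the observed long sentence:

```latex
\hat{s} \;=\; \arg\max_{s} P(s \mid t) \;=\; \arg\max_{s} P(s)\, P(t \mid s)
```

Here P(s) is the source model (the PCFG score combined with the word-bigram score over the leaves of the tree) and P(t | s) is the channel model that expands the small tree into the larger one.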
methodshaving multiple sentences in the input poses new challengessuch as a need for sentence comparisonbut at the same time it opens up new possibilities for generationwhile the output of existing compression algorithms is always a substring of the original sentence sentence fusion may generate a new sentence which is not a substring of any of the input sentencesthis is achieved by arranging fragments of several input sentences into one sentencethe only other texttotext generation approach able to produce new utterances is that of pang knight and marcu their method operates over multiple english translations of the same foreign sentence and is intended to generate novel paraphrases of the input sentenceslike sentence fusion their method aligns parse trees of the input sentences and then uses a language model to linearize the derived latticethe main difference between the two methods is in the type of the alignment our algorithm performs local alignment while the algorithm of pang knight and marcu performs global alignmentthe differences in alignment are caused by differences in input pang knight and marcus method expects semantically equivalent sentences while our algorithm operates over sentences with only partial meaning overlapthe presence of deletions and insertions in input sentences makes alignment of comparable trees a new and particularly significant challengethe alignment method described in section 3 falls into a class of tree comparison algorithms extensively studied in theoretical computer science and widely applied in many areas of computer science primarily computational biology these algorithms aim to find an overlap subtree that captures structural commonality across a set of related treesa typical tree similarity measure considers the proximity at both the node and the edge levels between input treesin addition some algorithms constrain the topology of the resulting alignment based on the domainspecific knowledgethese constraints not only narrow the search space but also increase the robustness of the algorithm in the presence of a weak similarity functionin the nlp context this class of algorithms has been used previously in examplebased machine translation in which the goal is to find an optimal alignment between the source and the target sentences the algorithm operates over pairs of parallel sentences where each sentence is represented by a structuresharing forest of plausible syntactic treesthe similarity function is driven by lexical mapping between tree nodes and is derived from a bilingual dictionarythe search procedure is greedy and is subject to a number of constraints needed for alignment of parallel sentencesthis algorithm has several features in common with our method it operates over syntactic dependency representations and employs recursive computation to find an optimal solutionhowever our method is different in two key aspectsfirst our algorithm looks for local regions with high similarity in nonparallel data rather than for full alignment expected in the case of parallel treesthe change in optimization criteria introduces differences in the similarity measurespecifically the relaxation of certain constraintsand the search procedure which in our work uses dynamic programmingsecond our method is an instance of a multisequence alignment15 in contrast to the pairwise alignment described in meyers yangarber and grishman combining evidence from multiple trees is an essential step of our algorithmpairwise comparison of nonparallel trees may not provide enough 
information regarding their underlying correspondencesin fact previous applications of multisequence alignment have been shown to increase the accuracy of the comparison in other nlp tasks unlike our work these approaches operate on strings not trees and with the exception of they apply alignment to parallel data not comparable textsin this article we have presented sentence fusion a novel method for texttotext generation which given a set of similar sentences produces a new sentence containing the information common to most sentencesunlike traditional generation methods sentence fusion does not require an elaborate semantic representation of the input but instead relies on the shallow linguistic representation automatically derived from the input documents and knowledge acquired from a large text corpusgeneration is performed by reusing and altering phrases from input sentencesas the evaluation described in section 4 shows our method accurately identifies common information and in most cases generates a wellformed fusion sentenceour algorithm outperforms the shortestsentence baseline in terms of content selection without a significant drop in grammaticalitywe also show that augmenting the fusion process with paraphrasing knowledge improves the output by both measureshowever there is still a gap between the performance of our system and human performancean important goal for future work on sentence fusion is to increase the flexibility of content selection and realizationwe believe that the process of aligning theme sentences can be greatly improved by having the system learn the similarity function instead of using manually assigned weightsan interesting question is how such a similarity function can be induced in an unsupervised fashionin addition we can improve the flexibility of the fusion algorithm by using a more powerful language modelrecent research has show that syntaxbased language models are more suitable for language generation tasks the study of such models is a promising direction to explorean important feature of the sentence fusion algorithm is its ability to generate multiple verbalizations of a given fusion latticein our implementation this property is utilized only to produce grammatical texts in the changed syntactic context but it can also be used to increase coherence of the text at the discourse level by taking context into accountin our current system each sentence is generated in isolation independently from what is said before and what will be said afterclear evidence of the limitation of this approach is found in the selection of referring expressionsfor example all summary sentences may contain the full description of a named entity while the use of shorter descriptions such as bollinger or anaphoric expressions in some summary sentences would increase the summarys readability these constraints can be incorporated into the sentence fusion algorithm since our alignmentbased representation of themes often contains several alternative descriptions of the same objectbeyond the problem of referringexpression generation we found that by selecting appropriate paraphrases of each summary sentence we can significantly improve the coherence of an output summaryan important research direction for future work is to develop a probabilistic text model that can capture properties of wellformed texts just as a language model captures properties of sentence grammaticalityideally such a model would be able to discriminate between cohesive fluent texts and illformed texts guiding 
the selection of sentence paraphrases to achieve an optimal sentence sequence.

Function EdgeSim: returns the similarity score of two input edges based on their type.
begin
    if the type of both edges is subject-verb then return the subject-verb score
    if either edge is a phrase then ...
    ...
end

All the comparison functions employ memoization, implemented by hash-table wrappers.

Function MapChildren (memoized): given two dependency trees, MapChildren finds the optimal alignment of the trees' children. The function returns the score of the alignment and the mapping itself.
begin
    generate all legitimate mappings between the children of tree1 and tree2:
        AllMaps = GenerateAllPermutations(...)
    Best = compute the score of each mapping and select the one with the highest score (Sim)
end

Function NodeCompare (memoized): given two dependency trees, NodeCompare finds their optimal alignment, one that maps the two top nodes of the trees to one another. The function returns the score of the alignment and the mapping itself.
begin
    NodeSim = node similarity of the two top nodes
    if one of the trees is of height one, return the NodeSim score between the two tops:
        if either tree is a leaf then return ...
    else, find an optimal alignment of the children nodes:
        Res = MapChildren(...)
    the alignment score is computed as the sum of the similarity of the top nodes and the
    score of the optimal alignment of the children; the tree alignment is assembled by
    adding the pair of top nodes to the optimal alignment of their children:
        return ... ∪ Res.Map
end

(A runnable sketch of this recursion is given below.)

We are grateful to Eli Barzilay, Michael Collins, Noemie Elhadad, Julia Hirschberg, Mirella Lapata, Lillian Lee, Smaranda Muresan, and the anonymous reviewers for helpful comments and conversations. Portions of this work were completed while the first author was a graduate student at Columbia University. This article is based upon work supported in part by the National Science Foundation under grant IIS0448168, DARPA grant N660010018919, and a Louis Morin scholarship. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the National Science Foundation.
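A runnable sketch of the NodeCompare / MapChildren recursion outlined in the pseudocode above, under simplifying assumptions: trees are nested (label, children) tuples, a single node_sim function stands in for the node- and edge-level similarities (EdgeSim is folded into it), and memoization plays the role of the hash-table wrappers. This illustrates the recursion rather than reproducing the authors' implementation.

```python
from functools import lru_cache
from itertools import permutations

def align_trees(tree1, tree2, node_sim):
    """Bottom-up alignment of two dependency trees.

    Trees are assumed to be tuples (label, children), with `children` a tuple
    of subtrees; `node_sim(a, b)` scores two node labels.  Returns the
    alignment score and the tuple of aligned label pairs.
    """

    @lru_cache(maxsize=None)
    def node_compare(t1, t2):
        # Similarity of the two top nodes.
        top_score = node_sim(t1[0], t2[0])
        if not t1[1] or not t2[1]:
            # One of the trees is of height one: return just the node score.
            return top_score, ((t1[0], t2[0]),)
        child_score, child_map = map_children(t1[1], t2[1])
        # Score = top-node similarity + best alignment of the children; the
        # mapping adds the pair of top nodes to the children's mapping.
        return top_score + child_score, ((t1[0], t2[0]),) + child_map

    @lru_cache(maxsize=None)
    def map_children(kids1, kids2):
        # Enumerate one-to-one pairings of the children (the branching factor
        # is assumed to be small) and keep the highest-scoring one.
        if len(kids1) <= len(kids2):
            pairings = [tuple(zip(kids1, p)) for p in permutations(kids2, len(kids1))]
        else:
            pairings = [tuple(zip(p, kids2)) for p in permutations(kids1, len(kids2))]
        best_score, best_map = float("-inf"), ()
        for pairing in pairings:
            score, mapping = 0.0, ()
            for c1, c2 in pairing:
                s, m = node_compare(c1, c2)
                score += s
                mapping += m
            if score > best_score:
                best_score, best_map = score, mapping
        return best_score, best_map

    return node_compare(tree1, tree2)
```

Because node_compare is memoized, each node pair is evaluated once, which is the source of the polynomial bound discussed in footnote 7 when the branching factor is bounded.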
J05-3002
Sentence Fusion for Multidocument News Summarization. A system that can produce informative summaries, highlighting common information found in many online documents, will help web users to pinpoint information that they need without extensive reading. In this article we introduce sentence fusion, a novel text-to-text generation technique for synthesizing common information across documents. Sentence fusion involves bottom-up local multisequence alignment to identify phrases conveying similar information and statistical generation to combine common phrases into a sentence. Sentence fusion moves the summarization field from the use of purely extractive methods to the generation of abstracts that contain sentences not found in any of the input documents and can synthesize information across sources. We represent the inputs by dependency trees, align some words to merge the input trees into a lattice, and then extract a single connected dependency tree as the output. We introduce the problem of converting multiple sentences into a single summary sentence.
improving machine translation performance by exploiting nonparallel corpora we present a novel method for discovering parallel sentences in comparable nonparallel corpora we train a maximum entropy classifier that given a pair of sentences can reliably determine whether or not they are translations of each other using this approach we extract parallel data from large chinese arabic and english nonparallel newspaper corpora we evaluate the quality of the extracted data by showing that it improves the performance of a stateoftheart statistical machine translation system we also show that a goodquality mt system can be built from scratch by starting with a very small parallel corpus and exploiting a large nonparallel corpus thus our method can be applied with great benefit to language pairs for which only scarce resources are available we present a novel method for discovering parallel sentences in comparable nonparallel corporawe train a maximum entropy classifier that given a pair of sentences can reliably determine whether or not they are translations of each otherusing this approach we extract parallel data from large chinese arabic and english nonparallel newspaper corporawe evaluate the quality of the extracted data by showing that it improves the performance of a stateoftheart statistical machine translation systemwe also show that a goodquality mt system can be built from scratch by starting with a very small parallel corpus and exploiting a large nonparallel corpusthus our method can be applied with great benefit to language pairs for which only scarce resources are availableparallel textstexts that are translations of each otherare an important resource in many nlp applicationsthey provide indispensable training data for statistical machine translation and have been found useful in research on automatic lexical acquisition crosslanguage information retrieval and annotation projection unfortunately parallel texts are also scarce resources limited in size language coverage and language registerthere are relatively few language pairs for which parallel corpora of reasonable sizes are available and even for those pairs the corpora come mostly from one domain that of political discourse this is especially problematic for the field of statistical machine translation because translation systems trained on data from a particular domain will perform poorly when translating texts from a different domain one way to alleviate this lack of parallel data is to exploit a much more available and diverse resource comparable nonparallel corporacomparable corpora are texts that while not parallel in the strict sense are somewhat related and convey overlapping informationgood examples are the multilingual news feeds produced by news agencies such as agence france presse xinhua news reuters cnn bbc etcsuch texts are widely available on the web for many language pairs and domainsthey often contain many sentence pairs that are fairly good translations of each otherthe ability to reliably identify these pairs would enable the automatic creation of large and diverse parallel corporahowever identifying good translations in comparable corpora is hardeven texts that convey the same information will exhibit great differences at the sentence levelconsider the two newspaper articles in figure 1they have been published by the english and french editors of agence france presse and report on the same event an epidemic of cholera in pyongyangthe lines in the figure connect sentence pairs that are approximate 
translations of each otherdiscovering these links automatically is clearly nontrivialtraditional sentence alignment algorithms are designed to align sentences in parallel corpora and operate on the assumption that there are no reorderings and only limited insertions and deletions between the two renderings of a parallel documentthus they perform poorly on comparable nonparallel textswhat we need are methods able to judge sentence pairs in isolation independent of the contextthis article describes a method for identifying parallel sentences in comparable corpora and builds on our earlier work on parallel sentence extraction we describe how to build a maximum entropybased classifier that can reliably judge whether two sentences are translations of each other without making use of any contextusing this classifier we extract parallel sentences from very large comparable corpora of newspaper articleswe demonstrate the quality of our a pair of comparable texts extracted sentences by showing that adding them to the training data of an smt system improves the systems performancewe also show that language pairs for which very little parallel data is available are likely to benefit the most from our method by running our extraction system on a large comparable corpus in a bootstrapping manner we can obtain performance improvements of more than 50 over a baseline mt system trained only on existing parallel dataour main experimental framework is designed to address the commonly encountered situation that exists when the mt training and test data come from different domainsin such a situation the test data is indomain and the training data is outofdomainthe problem is that in such conditions translation performance is quite poor the outofdomain data does not really help the system to produce good translationswhat is needed is additional indomain training dataour goal is to get such data from a large indomain comparable corpus and use it to improve the performance of an outofdomain mt systemwe work in the context of arabicenglish and chineseenglish statistical machine translation systemsour outofdomain data comes from translated united nations proceedings and our indomain data consists of news articlesin this experimental framework we have access to a variety of resources all of which are available from the linguistic data consortium1 in summary we call indomain the domain of the test data that we wish to translate in this article that indomain data consists of news articlesoutofdomain data is data that belongs to any other domain in this article the outofdomain data is drawn from united nations parliamentary proceedingswe are interested in the situation that exists when we need to translate news data but only have un data available for trainingthe solution we propose is to get comparable news data automatically extract parallel sentences from it and use these sentences as additional training data we will show that doing this improves translation performance on a news test setthe arabicenglish and chineseenglish resources described in the previous paragraph enable us to simulate our conditions of interest and perform detailed measurements of the impact of our proposed solutionwe can train baseline systems on un parallel data extract additional news data from the large comparable corpora accurately measure translation performance on news data against four reference translations and compare the impact of the automatically extracted news data with that of similar amounts of humantranslated news data in the 
next section we give a highlevel overview of our parallel sentence extraction systemin section 3 we describe in detail the core of the system the parallel sentence classifierin section 4 we discuss several data extraction experimentsin section 5 we evaluate the extracted data by showing that adding it to outofdomain parallel data improves the indomain performance of an outofdomain mt system and in section 6 we show that in certain cases even larger improvements can be obtained by using bootstrappingin section 7 we present examples of sentence pairs extracted by our method and discuss some of its weaknessesbefore concluding we discuss related workthe general architecture of our extraction system is presented in figure 2starting with two large monolingual corpora divided into documents we begin by selecting pairs of similar documents from each such pair we generate all possible sentence pairs and pass them through a simple wordoverlapbased filter thus obtaining candidate sentence pairsthe candidates are presented to a maximum entropy classifier that decides whether the sentences in each pair are mutual translations of each otherthe resources required by the system are minimal a bilingual dictionary and a small amount of parallel data the dictionaries used in our experiments are learned automatically from parallel corpora2 thus the only resource used by our system consists of parallel sentences2 if such a resource is unavailable other dictionaries can be usedour comparable corpus consists of two large nonparallel news corpora one in english and the other in the foreign language of interest the parallel sentence extraction process begins by selecting for each foreign article english articles that are likely to contain sentences that are parallel to those in the foreign onethis step of the process emphasizes recall rather than precisionfor each foreign document we do not attempt to find the bestmatching english document but rather a set of similar english documentsthe subsequent components of the system are robust enough to filter out the extra noise introduced by the selection of additional english documentswe perform document selection using the lemur ir toolkit3 we first index all the english documents into a databasefor each foreign document we take the top five translations of each of its words and create an english language querythe translation probabilities are only used to choose the word translations they do not appear in the querywe use the query to run tfidf retrieval against the database take the top 20 english documents returned by lemur and pair each of them with the foreign query documentthis document matching procedure is both slow and imprecise we attempt to fix these problems by using the following heuristic we consider it likely that articles with similar content have publication dates that are close to each otherthus each query is actually run only against english documents published within a window of five days around the publication date of the foreign query document we retrieve the best 20 of these documentseach query is thus run against fewer documents so it becomes faster and has a better chance of getting the right documents at the topour experiments have shown that the final performance of the system does not depend too much on the size of the window however having no window at all leads to a decrease in the overall performance of the systemfrom each foreign document and set of associated english documents we take all possible sentence pairs and pass them through a 
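The document selection step just described — an English bag-of-words query built from the top five translations of each foreign word, retrieval restricted to articles published within five days, and the 20 best TF-IDF matches kept — can be sketched as follows. The document attributes (words, date) and the retrieve callable standing in for the Lemur engine are assumptions for illustration:

```python
from datetime import timedelta

def english_query(foreign_words, dictionary, translations_per_word=5):
    """Merge the top translations of each foreign word into a bag-of-words query;
    `dictionary[w]` is assumed to be a probability-sorted list of translations."""
    query = set()
    for word in foreign_words:
        query.update(dictionary.get(word, [])[:translations_per_word])
    return query

def select_candidate_documents(foreign_doc, english_docs, dictionary, retrieve,
                               window_days=5, top_k=20):
    """Pair `foreign_doc` with the top-k English documents published within
    +/- window_days of it; `retrieve(query, docs, k)` stands in for TF-IDF
    retrieval against the indexed collection."""
    window = timedelta(days=window_days)
    dated = [doc for doc in english_docs
             if abs(doc.date - foreign_doc.date) <= window]
    query = english_query(foreign_doc.words, dictionary)
    return [(foreign_doc, doc) for doc in retrieve(query, dated, top_k)]
```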
wordoverlap filterthe filter verifies that the ratio of the lengths of the two sentences is no greater than twoit then checks that at least half the words in each sentence have a translation in the other sentence according to the dictionarypairs that do not fulfill these two conditions are discardedthe others are passed on to the parallel sentence selection stagethis step removes most of the noise introduced by our recalloriented document selection procedureit also removes good pairs that fail to pass the filter because the dictionary does not contain the necessary entries but those pairs could not have been handled reliably anyway so the overall effect of the filter is to improve the precision and robustness of the systemhowever the filter also accepts many wrong pairs because the wordoverlap condition is weak for instance stopwords almost always have a translation on the other side so if a few of the content for each candidate sentence pair we need a reliable way of deciding whether the two sentences in the pair are mutual translationsthis is achieved by a maximum entropy classifier which is the core component of our systemthose pairs that are classified as being translations of each other constitute the output of the systemin the maximum entropy statistical modeling framework we impose constraints on the model of our data by defining a set of feature functionsthese feature functions emphasize properties of the data that we believe to be useful for the modeling taskfor example for a sentence pair sp the word overlap might be a useful indicator of whether the sentences are parallelwe therefore define a feature function f whose value is the word overlap of the sentences in spaccording to the me principle the optimal parametric form of the model of our data taking into account the constraints imposed by the feature functions is a log linear combination of these functionsthus for our classification problem we have where ci is the class z is a normalization factor and fij are the feature functions the resulting model has free parameters λj the feature weightsthe parameter values that maximize the likelihood of a given training corpus can be computed using various optimization algorithms for our particular classification problem we need to find feature functions that distinguish between parallel and nonparallel sentence pairsfor this purpose we compute and exploit wordlevel alignments between the sentences in each paira word alignment between two sentences in different languages specifies which words in one sentence are translations of which words in the otherword alignments were first introduced in the context of statistical mt where they are used to estimate the parameters of a translation model since then they were found useful in many other nlp applications figures 3 and 4 give examples of word alignments between two englisharabic sentence pairs from our comparable corpuseach figure contains two alignmentsthe one on the left is a correct alignment produced by a human while the one on the right alignments between two parallel sentences was computed automaticallyas can be seen from the gloss next to the arabic words the sentences in figure 3 are parallel while the sentences in figure 4 are notin a correct alignment between two nonparallel sentences most words would have no translation equivalents in contrast in an alignment between parallel sentences most words would be alignedautomatically computed alignments however may have incorrect connections for example on the right side of figure 3 
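The candidate filter described in this passage imposes two tests: a length-ratio bound of two and the requirement that at least half the words on each side have a dictionary translation on the other side. A hedged sketch, assuming the dictionary maps a source-language word to a set of possible target-language translations:

```python
def passes_overlap_filter(src_words, tgt_words, dictionary,
                          max_length_ratio=2.0, min_coverage=0.5):
    """Return True if the pair satisfies the length-ratio and word-overlap tests."""
    if not src_words or not tgt_words:
        return False
    shorter, longer = sorted((len(src_words), len(tgt_words)))
    if longer / shorter > max_length_ratio:
        return False
    tgt_set = set(tgt_words)
    src_covered = sum(1 for w in src_words if dictionary.get(w, set()) & tgt_set)
    tgt_covered = sum(1 for w in tgt_words
                      if any(w in dictionary.get(s, set()) for s in src_words))
    return (src_covered / len(src_words) >= min_coverage
            and tgt_covered / len(tgt_words) >= min_coverage)
```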
the arabic word issue is connected to the comma and in figure 4 the arabic word at is connected to the english phrase its case to thesuch errors are due to noisy dictionary entries and to alignments between two nonparallel sentences shortcomings of the model used to generate the alignmentsthus merely looking at the number of unconnected words while helpful is not discriminative enoughstill automatically produced alignments have certain additional characteristics that can be exploitedwe follow brown et al in defining the fertility of a word in an alignment as the number of words it is connected tothe presence in an automatically computed alignment between a pair of sentences of words of high fertility is indicative of nonparallelismmost likely these connections were produced because of a lack of better alternativesanother aspect of interest is the presence of long contiguous connected spans which we define as pairs of bilingual substrings in which the words in one substring are connected only to words in the other substringsuch a span may contain a few words without any connection but no word with a connection outside the spanexamples of such spans can be seen in figure 3 the english strings after saudi mediation failed or to the international court ofjustice together with their arabic counterpartslong contiguous connected spans are indicative of parallelism since they suggest that the two sentences have long phrases in commonand in contrast long substrings whose words are all unconnected are indicative of nonparallelismto summarize our classifier uses the following features defined over two sentences and an automatically computed alignment between themgeneral features in order to compute word alignments we need a simple and efficient modelwe want to align a large number of sentences with many outofvocabulary words in reasonable timewe also want a model with as few parameters as possiblepreferably only wordforword translation probabilitiesone such model is the ibm model 1 according to this model given foreign sentence english sentence and translation probabilities t the best alignment f e is obtained by linking each foreign word fj to its most likely english translation argmaxeitthus each foreign word is aligned to exactly one english word due to its simplicity this model has several shortcomings some more structural than others thus we use a version that is augmented with two simple heuristics that attempt to alleviate some of these shortcomingsone possible improvement concerns english words that appear more than once in a sentenceaccording to the model a foreign word that prefers to be aligned with such an english word could be equally well aligned with any instance of that wordin such situations instead of arbitrarily choosing the first instance or a random instance we attempt to make a smarter decisionfirst we create links only for those english words that appear exactly once next for words that appear more than once we choose which instance to link with so that we minimize the number of crossings with already existing linksthe second heuristic attempts to improve the choice of the most likely english translation of a foreign wordour translation probabilities are automatically learned from parallel data and we learn values for both t and twe can therefore decide that the most likely english translation of fj is argmaxeittusing both sets of probabilities is likely to help us make a betterinformed decisionusing this alignment strategy we follow and compute one alignment for each 
translation direction and then combine themoch and ney present three combination methods intersection union and refined thus for each sentence pair we compute five alignments and then extract one set of general features and five sets of alignment features we create training instances for our classifier from a small parallel corpusthe simplest way to obtain classifier training data from a parallel corpus is to generate all possible sentence pairs from the corpus this generates 50002 training instances out of which 5000 are positive and the rest are negativeone drawback of this approach is that the resulting training set is very imbalanced ie it has many more negative examples than positive onesclassifiers trained on such data do not achieve good performance they generally tend to predict the majority class ie classify most sentences as nonparallel our solution to this is to downsample ie eliminate a number of negative instancesanother problem is that the large majority of sentence pairs in the cartesian product have low word overlap as explained in section 2 when extracting data from a comparable corpus we only apply the classifier on the output of the wordoverlap filterthus lowoverlap sentence pairs which would be discarded by the filter are unlikely to be useful as training exampleswe therefore use for training only those pairs from the cartesian product that are accepted by the wordoverlap filterthis has the additional advantage that since all these pairs have many words in common the classifier learns to make distinctions that cannot be made based on word overlap aloneto summarize we prepare our classifier training set in the following manner starting from a parallel corpus of about 5000 sentence pairs we generate all the sentence pairs in the cartesian product we discard the pairs that do not fulfill the conditions of the wordoverlap filter if the resulting set is imbalanced ie the ratio of nonparallel to parallel pairs is greater than five we balance it by removing randomly chosen nonparallel pairswe then compute word alignments and extract feature valuesusing the training set we compute values for the classifier feature weights using the yasmet4 implementation of the gis algorithm since we are dealing with few parameters and have sufficiently many training instances using more advanced training algorithms is unlikely to bring significant improvementswe test the performance of the classifier by generating test instances from a different parallel corpus and checking how many of these instances are correctly classifiedwe prepare the test set by creating the cartesian product of the sentences in the test parallel corpus and applying the wordoverlap filter although we apply the filter we still conceptually classify all pairs from the cartesian product in a twostage classification process all pairs discarded by the filter are classified as nonparallel and for the rest we obtain predictions from the classifiersince this is how we apply the system on truly unseen data this is the process in whose performance we are interestedwe measure the performance of the classification process by computing precision and recallprecision is the ratio of sentence pairs correctly judged as parallel to the total number of pairs judged as parallel by the classifierrecall is the ratio of sentence pairs correctly identified as parallel by the classifier to the total number of truly parallel pairsie the number of pairs in the parallel corpus used to generate the test instancesboth numbers are expressed as 
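Training-set construction as described here — the Cartesian product of a small parallel corpus, the word-overlap filter, and downsampling of negatives so that the non-parallel to parallel ratio is at most five — in sketch form. The overlap_filter callable and the representation of the true pairs are assumptions:

```python
import random
from itertools import product

def build_training_instances(src_sentences, tgt_sentences, parallel_pairs,
                             overlap_filter, max_neg_ratio=5, seed=0):
    """Return (positives, negatives) drawn from the Cartesian product of the
    two sides of a small parallel corpus; `parallel_pairs` is assumed to be
    the set of true (src, tgt) translation pairs."""
    positives, negatives = [], []
    for src, tgt in product(src_sentences, tgt_sentences):
        if not overlap_filter(src, tgt):
            continue
        (positives if (src, tgt) in parallel_pairs else negatives).append((src, tgt))
    # Balance the set: keep at most max_neg_ratio negatives per positive.
    limit = max_neg_ratio * len(positives)
    if len(negatives) > limit:
        negatives = random.Random(seed).sample(negatives, limit)
    return positives, negatives
```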
percentages. More formally, let ClassifiedParallel be the total number of sentence pairs from our test set that the classifier judged as parallel, ClassifiedWell be the number of pairs that the classifier correctly judged as parallel, and TrueParallel be the total number of parallel pairs in the test set. Then

    Precision = ClassifiedWell / ClassifiedParallel and Recall = ClassifiedWell / TrueParallel.

There are two factors that influence a classifier's performance: dictionary coverage and similarity between the domains of the training and test instances. We performed evaluation experiments to account for both these factors. All our dictionaries are automatically learned from parallel data; thus we can create dictionaries of various coverage by learning them from parallel corpora of different sizes. We use five dictionaries, learned from five initial out-of-domain parallel corpora whose sizes are 100k, 1M, 10M, 50M, and 95M tokens, as measured on the English side. Since we want to use the classifier to extract sentence pairs from our in-domain comparable corpus, we test it on instances generated from an in-domain parallel corpus. In order to measure the effect of the domain difference, we use two training sets: one generated from an in-domain parallel corpus and another one from an out-of-domain parallel corpus. In summary, for each language pair we use the following corpora.

(Figure caption: Precision and recall of the Arabic-English classifiers.)

From each initial out-of-domain corpus we learn a dictionary. We then take the classifier training and test corpora and, using the method described in the previous section, create two sets of training instances and one set of test instances. We train two classifiers and evaluate both of them on the test set. The parallel corpora used for generating training and test instances have around 5k sentence pairs each and generate around 10k training instances and 8k test instances.

(Figure caption: Precision and recall of the Chinese-English classifiers.)

Figures 5 and 6 show the recall and precision of our classifiers for both Arabic-English and Chinese-English. The results show that the precision of our classification process is robust with respect to dictionary coverage and training domain: even when starting from a very small initial parallel corpus, we can build a high-precision classifier. Having a good dictionary and training data from the right domain does help, though, mainly with respect to recall. The classifiers achieve high precision because their positive training examples are clean parallel sentence pairs with high word overlap; thus the classification decision frontier is pushed towards good-looking alignments. The low recall results are partly due to the word-overlap filter, which discards many parallel pairs. If we do not apply the filter before the classifier, the recall results increase by about 20; however, the filter plays a very important role in keeping the extraction pipeline robust and efficient, so this loss of recall is a price worth paying. Classifier evaluations using different subsets of features show that most of the classifier performance comes from the general features together with the alignment features concerning the percentage and number of words that have no connection. However, we expect that in real data the differences between parallel and nonparallel pairs are less clear than in our test data and can no longer be accounted for only by counting the linked words; thus the other features should become more important.

(Figure caption: The amounts of data processed by our system during extraction from the Chinese-English comparable corpus.)

The comparable corpora that we use for parallel sentence extraction are
collections of news stories published by the agence france presse and xinhua news agenciesthey are parts of the arabic english and chinese gigaword corpora which are available from the linguistic data consortiumfrom these collections for each language pair we create an indomain comparable corpus by putting together articles coming from the same agency and the same time periodtable 1 presents in detail the sources and sizes of the resulting comparable corporathe remainder of the section presents the various data sets that we extracted automatically from these corpora under various experimental conditionsin the experiments described in section 34 we started out with five outofdomain initial parallel corpora of various sizes and obtained five dictionaries and five outofdomain trained classifiers we now plug in each of these classifiers in our extraction system and apply it to our comparable corporawe thus obtain five arabicenglish and five chineseenglish extracted corporanote that in each of these experiments the only resource used by our system is the initial outofdomain parallel corpusthus the experiments fit in the framework of interest described in section 1 which assumes the availability of outofdomain training data and indomain comparable datatable 2 shows the sizes of the extracted corpora for each initial corpus size for both chineseenglish and arabicenglishas can be seen when the initial parallel corpus is very small the amount of extracted data is also quite smallthis is due to the low coverage of the dictionary learned from that corpusour candidate pair selection step discards pairs with too many unknown words according to the dictionary thus only few sentences fulfill the wordoverlap condition of our filteras mentioned in section 1 our goal is to use the extracted data as additional mt training data and obtain better translation performance on a given indomain mt test seta simple way of estimating the usefulness of the data for this purpose is to measure its coverage of the test set ie the percentage of running ngrams from the test corpus that are also in our corpustables 3 and 4 present the coverage of our extracted corporafor each initial corpus size the first column shows the coverage of that initial corpus and the second column shows the coverage of the initial corpus plus the extracted corpuseach cell contains four numbers that represent the coverage with respect to unigrams bigrams trigrams and 4gramsthe numbers show that unigram coverage depends only on the size of the corpus but for longer ngrams our indomain extracted data brings significant improvements in coveragethe extraction experiments from the previous section are controlled experiments in which we only use limited amounts of parallel data for our extraction systemin this section we describe experiments in which the goal is to assess the applicability of our method to data that we mined from the webwe obtained comparable corpora from the web by going to bilingual news websites and downloading news articles in each language independentlyin order to get as many articles as possible we used the web sites search engine to get lists of articles and their urls and then crawled those listswe used the agentbuilder tool for crawlingthe tool can be programmed to automatically initiate searches with different parameters and to identify and extract the desired article urls from the result pagestable 5 shows the sources time periods and size of the datasets that we downloadedfor the extraction experiments we used dictionaries 
of high coverage learned from all our available parallel training datathe sizes of these training corpora measured in number of english tokens are as follows we applied our extraction method on both the ldcreleased gigaword corpora and the webdownloaded comparable corporafor each language pair we used the highest precision classifier from those presented in section 34in order to obtain data of higher quality we did not use all the sentences classified as parallel but only those for which the probability computed by our classifier was higher than 070table 6 shows the amounts of extracted data measured in number of english tokensfor arabicenglish we were able to extract from the gigaword corpora much more data than in our previous experiments clearly due to the better dictionaryfor chineseenglish there was no increase in the size of extracted data in the previous section we measured for our training corpora their coverage of the test set we repeated the measurements for the training data from table 6 and obtained very similar results using the additional extracted data improves coverage especially for longer ngramsto give the reader an idea of the amount of data that is funneled through our system we show in figure 7 the sizes of the data processed by each of the systems components during extraction from the gigaword and webbased chineseenglish comparable corporawe use a dictionary learned from a parallel corpus on 190m english tokens and a classifier trained on instances generated from a parallel corpus of 220k english tokenswe start with a comparable corpus consisting of 500k chinese articles and 600k english articlesthe article selection step outputs 75m similar article pairs from each article pair we generate all possible sentence pairs and obtain 2400m pairsof these less than 1 pass the candidate selection stage and are presented to the me classifierthe system outputs 430k sentence pairs that have been classified as parallel the figure also presents in the lower part the parameters that control the filtering at each stage the particular sentence pair to be parallel the higher the value the higher the classifiers confidencethus in order to obtain higher precision we can choose to define as parallel only those pairs for which the classifier probability is above a certain thresholdin the experiments from section 41 we use the threshold of 05 while in section 42 we use 07our main goal is to extract from an indomain comparable corpus parallel training data that improves the performance of an outofdomaintrained smt systemthus we evaluate our extracted corpora by showing that adding them to the outofdomain training data of a baseline mt system improves its performancewe first evaluate the extracted corpora presented in section 41the extraction system used to obtain each of those corpora made use of a certain initial outofdomain parallel corpuswe train a baseline mt system on that initial corpuswe then train another mt system on the initial corpus plus the extracted corpusin order to compare the quality of our extracted data with that of humantranslated data from the same domain we also train an upperbound mt system using the initial corpus plus a corpus of indomain humantranslated datafor each initial corpus we use the same amount of humantranslated data as there is extracted data thus for each language pair and each initial parallel corpus we compare 3 mt systems baseline plusextracted and upperboundall our mt systems were trained using a variant of the alignment template model described in 
each system used two language models a very large one trained on 800 million english tokens which is the same for all the systems and a smaller one trained only on the english side of the parallel training data for that particular systemthis ensured that any differences in performance are caused only by differences in the training datathe systems were tested on the news test corpus used for the nist 2003 mt evaluation5 translation performance was measured using the automatic bleu evaluation metric on four reference translationsfigures 8 and 9 show the bleu scores obtained by our mt systemsthe 95 confidence intervals of the scores computed by bootstrap resampling are marked on the graphs the delta value is around 12 for arabicenglish and 1 for chineseenglishas the results show the automatically extracted additional training data yields significant improvements in performance over most initial training corpora for both language pairsat least for chineseenglish the improvements are quite comparable to those produced by the humantranslated dataand as can be expected the impact of the extracted data decreases as the size of the initial corpus increasesin order to check that the classifier really does something important we performed a few experiments without itafter the article selection step we simply paired each foreign document with the bestmatching english one assumed they are parallel sentencealigned them with a generic sentence alignment method and added the resulting data to the training corpusthe resulting bleu scores were practically the same as the baseline thus our classifier does indeed help to discover higherquality parallel datawe also measured the mt performance impact of the extracted corpora described in section 42we trained a baseline mt system on all our available parallel data and a plusextracted system on the parallel data plus the extracted indomain dataclearly we have access to no upperbound system in this casethe results are presented in the first two rows of table 7adding the extracted corpus lowers the score for the arabicenglish system and improves the score for the chineseenglish one however none of the differences are statistically significantsince the baseline systems are trained on such large amounts of data it is not surprising that our extracted corpora have no significant impactin an attempt to give a better indication of the value of these corpora we used them alone as mt training datathe bleu scores obtained by the systems we trained on them are presented in the third row of table 7for comparison purposes the last line of the table shows the scores of systems trained on 10m english tokens of outofdomain dataas can be seen our automatically extracted corpora obtain better mt performance than outofdomain parallel corpora of similar sizeit is true that this is not a fair comparison since the extracted corpora were obtained using all our available parallel datathe numbers do show however that the extracted data although it was obtained automatically is of good value for machine translationas can be seen from table 2 the amount of data we can extract from our comparable corpora is adversely affected by poor dictionary coveragethus if we start with very little parallel data we do not make good use of the comparable corporaone simple way to alleviate this problem is to bootstrap after we have extracted some indomain data we can use it to learn a new dictionary and go back and extract againbootstrapping was also successfully applied to this problem by fung and cheung 
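the bootstrapping procedure just outlined (use the extracted in-domain data to learn a better dictionary, extract again, retrain, and stop once the development score no longer improves) can be summarized in the sketch below. this is an illustrative outline rather than the authors' implementation: learn_dictionary, extract_parallel_data, train_and_score_mt and the max_iterations cap are hypothetical placeholders for components described elsewhere in the article, and whether the new dictionary is learned from the extracted data alone or together with the initial corpus is an assumption made here.

```python
def bootstrap_extraction(initial_parallel, comparable_corpus,
                         learn_dictionary, extract_parallel_data,
                         train_and_score_mt, max_iterations=10):
    # Iteration 0 uses the dictionary learned from the initial parallel corpus.
    # Each later iteration re-learns the dictionary from the newly extracted
    # data (here together with the initial corpus) and re-extracts from
    # scratch, since earlier extractions are mostly subsets of the new one.
    best_score = train_and_score_mt(initial_parallel)   # baseline dev-set score
    dictionary_data = initial_parallel
    best_extracted = []
    for _ in range(max_iterations):
        dictionary = learn_dictionary(dictionary_data)
        extracted = extract_parallel_data(comparable_corpus, dictionary)
        score = train_and_score_mt(initial_parallel + extracted)
        if score <= best_score:
            break            # no further improvement on the development data
        best_score, best_extracted = score, extracted
        dictionary_data = initial_parallel + extracted
    return best_extracted, best_score
```

each call to train_and_score_mt stands for training an mt system on the given data and scoring it on held-out development data, which is what makes the stopping condition match the one used in the experiments that follow.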
we performed bootstrapping iterations starting from two very small corpora 100k english tokens and 1m english tokens respectivelyafter each iteration we trained mt performance improvements for chineseenglish an mt system on the initial data plus the data extracted in that iterationwe did not use any of the data extracted in previous iterations since it is mostly a subset of that extracted in the current iterationwe iterated until there were no further improvements in mt performance on our development datafigures 10 and 11 show the sizes of the data extracted at each iteration for both initial corpus sizesiteration 0 is the one that uses the dictionary learned from the initial corpusstarting with 100k words of parallel data we eventually collect 20m words of indomain arabicenglish data and 90m words of indomain chineseenglish datafigures 12 and 13 show the bleu scores of these mt systemsfor comparison purposes we also plotted on each graph the performance of our best mt system for that language pair trained on all our available parallel data as we can see bootstrapping allows us to extract significantly larger amounts of data which leads to significantly higher bleu scoresstarting with as little as 100k english tokens of parallel data we obtain mt systems that come within 710 bleu points of systems trained on parallel corpora of more than 100m english tokensthis shows that using our method a goodquality mt system can be built from very little parallel data and a large amount of comparable nonparallel datawe conclude the description of our method by presenting a few sentence pairs extracted by our systemwe chose the examples by looking for cases when a given foreign sentence was judged parallel to several different english sentencesfigures 14 and 15 show the foreign sentence in arabic and chinese respectively followed by a humanproduced translation in bold italic font followed by the automatically extracted matching english sentences in normal fontthe sentences are picked from the data sets presented in section 42the examples reveal the two main types of errors that our system makesthe first type concerns cases when the system classifies as parallel sentence pairs that although they share many content words express slightly different meanings as in figure 15 example 7the second concerns pairs in which the two sentences convey different amounts of informationin such pairs one of the sentences contains a transsizes of the chineseenglish corpora extracted using bootstrapping in millions of english tokensbleu scores of the arabicenglish mt systems using bootstrapping lation of the other plus additional phrases these errors are caused by the noise present in the automatically learned dictionaries and by the use of a weak word alignment model for extracting the classifier bleu scores of the chineseenglish mt systems using bootstrapping featuresin an automatically learned dictionary many words will have a lot of spurious translationsthe ibm1 alignment model takes no account of word order and allows a source word to be connected to arbitrarily many target wordsalignments computed using this model and a noisy automatically learned dictionary will contain many incorrect linksthus if two sentences share several content words these incorrect links together with the correct links between the common content words will yield an alignment good enough to make the classifier judge the sentence pair as parallelthe effect of the noise in the dictionary is even more clear for sentence pairs with few words such 
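the errors just described, and the discussion that follows, both trace back to the word alignments from which the classifier features are computed, so it may help to make explicit how simple these model-1-style alignments are: the model ignores word order, and its best alignment just links each word to the candidate with the highest lexical translation probability. the sketch below is an illustration under assumptions, not the authors' implementation; t_prob is a hypothetical table mapping (foreign_word, english_word) to a probability.

```python
def model1_style_alignment(foreign_sent, english_sent, t_prob, floor=0.0):
    # IBM Model 1 ignores word order: each foreign word is linked independently
    # to the English word with the highest lexical probability t(f | e).
    # Words whose best probability does not exceed `floor` stay unconnected,
    # which is what the "words with no connection" features count.
    links, unconnected = [], []
    for j, f in enumerate(foreign_sent):
        best_i, best_p = None, floor
        for i, e in enumerate(english_sent):
            p = t_prob.get((f, e), 0.0)
            if p > best_p:
                best_i, best_p = i, p
        if best_i is None:
            unconnected.append(j)
        else:
            links.append((j, best_i))
    return links, unconnected
```

because nothing constrains the links, many words on one side can attach to the same word on the other, which together with spurious dictionary entries produces the incorrect links discussed next.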
as figure 14 example 6the sentences in that example are tables of soccer team statisticsthey are judged parallel because corresponding digits align to each other and according to our dictionary the arabic word for mexico can be translated as any of the country names listed in the examplethese examples also show that the problem of finding only true translation pairs is hardtwo sentences may share many content words and yet express different meanings however our task of getting useful mt training data does not require a perfect solution as we have seen even such noisy training pairs can help improve a translation systems performancewhile there is a large body of work on bilingual comparable corpora most of it is focused on learning word translations we are aware of only three previous efforts aimed at discovering parallel sentenceszhao and vogel describe a generative model for discovering parallel sentences in the xinhua news chineseenglish corpusutiyama et al use crosslanguage information retrieval techniques and dynamic programming to extract sentences from an englishjapanese comparable corpusfung and cheung present an extraction method similar to ours but focus on verynonparallel corpora aggregations of chinese and english news stories from different sources and time periodsthe first two systems extend algorithms designed to perform sentence alignment of parallel textsthey start by attempting to identify similar article pairs from the two corporathen they treat each of those pairs as parallel texts and align their sentences by defining a sentence pair similarity score and use dynamic programming to find the leastcost alignment over the whole document pairin the article pair selection stage the researchers try to identify for an article in one language the best matching article in the other languagezhao and vogel measure article similarity by defining a generative model in which an english story generates a chinese story with a given probabilityutiyama et al use the bm25 similarity measurethe two works also differ in the way they define the sentence similarity scorezhao and vogel combine a sentence length model with an ibm model 1type translation modelutiyama et al define a score based on word overlap which also includes the similarity score of the article pair from which the sentence pair originatesthe performance of these approaches depends heavily on the ability to reliably find similar document pairsmoreover comparable article pairs even those similar in content may exhibit great differences at the sentence level therefore they pose hard problems for the dynamic programming alignment approachin contrast our method is more robustthe document pair selection part plays a minor role it only acts as a filterwe do not attempt to find the bestmatching english document for each foreign one but rather a set of similar documentsand most importantly we are able to reliably judge each sentence pair in isolation without need for contexton the other hand the dynamic programming approach enables discovery of manytoone sentence alignments whereas our method is limited to finding onetoone alignmentsthe approach of fung and cheung is a simpler version of oursthey match each foreign document with a set of english documents using a threshold on their cosine similaritythen from each document pair they generate all possible sentence pairs compute their cosine similarity and apply another threshold in order to select the ones that are parallelusing the set of extracted sentences they learn a new dictionary 
try to extend their set of matching document pairs and iteratethe evaluation methodologies of these previous approaches are less direct than oursutiyama et al evaluate their sentence pairs manually they estimate that about 90 of the sentence pairs in their final corpus are parallelfung and cheung also perform a manual evaluation of the extracted sentences and estimate their precision to be 657 after bootstrappingin addition they also estimate the quality of a lexicon automatically learned from those sentenceszhao and vogel go one step further and show that the sentences extracted with their method improve the accuracy of automatically computed word alignments to an fscore of 5256 over a baseline of 4646in a subsequent publication vogel evaluates these sentences in the context of an mt system and shows that they bring improvement under special circumstances designed to reduce the noise introduced by the automatically extracted corpuswe go even further and demonstrate that our method can extract data that improves endtoend mt performance without any special processingmoreover we show that our approach works even when only a limited amount of initial parallel data is availablethe problem of aligning sentences in comparable corpora was also addressed for monolingual textsbarzilay and elhadad present a method of aligning sentences in two comparable english corpora for the purpose of building a training set of texttotext rewriting examplesmonolingual parallel sentence detection presents a particular challenge there are many sentence pairs that have low lexical overlap but are nevertheless paralleltherefore pairs cannot be judged in isolation and context becomes an important factorbarzilay and elhadad make use of contextual information by detecting the topical structure of the articles in the two corpora and aligning them at paragraph level based on the topic assigned to each paragraphafterwards they proceed and align sentences within paragraph pairs using dynamic programmingtheir results show that both the induced topical structure and the paragraph alignment improve the precision of their extraction methoda line of research that is both complementary and related to ours is that of resnik and smith their strand webmining system has a purpose that is similar to ours to identify translational pairshowever strand focuses on extracting pairs of parallel web pages rather than sentencesresnik and smith show that their approach is able to find large numbers of similar document pairstheir system is potentially a good way of acquiring comparable corpora from the web that could then be mined for parallel sentences using our methodthe most important feature of our parallel sentence selection approach is its robustnesscomparable corpora are inherently noisy environments where even similar content may be expressed in very different waysmoreover outofdomain corpora introduce additional difficulties related to limited dictionary coveragetherefore the ability to reliably judge sentence pairs in isolation is crucialcomparable corpora of interest are usually of large size thus processing them requires efficient algorithmsthe computational processes involved in our system are quite modestall the operations necessary for the classification of a sentence pair can be implemented efficiently and scaled up to very large amounts of datathe task can be easily parallelized for increased speedfor example extracting data from 600k english documents and 500k chinese documents required only about 7 days of processing time on 
10 processorsthe data that we extract is usefulits impact on mt performance is comparable to that of humantranslated data of similar size and domainthus although we have focused our experiments on the particular scenario where there is little indomain training data available we believe that our method can be useful for increasing the amount of training data regardless of the domain of interestas we have shown this could be particularly effective for language pairs for which only very small amounts of parallel data are availableby acquiring a large comparable corpus and performing a few bootstrapping iterations we can obtain a training corpus that yields a competitive mt systemwe suspect our approach can be used on comparable corpora coming from any domainthe only domaindependent element of the system is the date window parameter of the article selection stage for other domains this can be replaced with a more appropriate indication of where the parallel sentences are likely to be foundfor example if the domain were that of technical manuals one would cluster printer manuals and aircraft manuals separatelyit is important to note that our work assumes that the comparable corpus does contain parallel sentences whether this is true for comparable corpora from other domains is an empirical question outside the scope of this article however both our results and those of resnik and smith strongly indicate that good data is available on the weblack of parallel corpora is a major bottleneck in the development of smt systems for most language pairsthe method presented in this paper is a step towards the important goal of automatic acquisition of such corporacomparable texts are available on the web in large quantities for many language pairs and domainsin this article we have shown how they can be efficiently mined for parallel sentencesthis work was supported by darpaito grant nn660010019814 and nsf grant iis0326276the experiments were run on university of southern californias highperformance computer cluster hpc we would like to thank hal daume iii alexander fraser radu soricut as well as the anonymous reviewers for their helpful commentsany remaining errors are of course our own
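the formulas that the classifier-evaluation subsection above introduces with "more formally ... then" can be reconstructed from its prose definitions of precision and recall; presumably they are:

```latex
\mathrm{precision} = \frac{\mathit{classified\_well}}{\mathit{classified\_parallel}}
\qquad\qquad
\mathrm{recall} = \frac{\mathit{classified\_well}}{\mathit{true\_parallel}}
```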
J05-4003
improving machine translation performance by exploiting nonparallel corporawe present a novel method for discovering parallel sentences in comparable nonparallel corporawe train a maximum entropy classifier that given a pair of sentences can reliably determine whether or not they are translations of each otherusing this approach we extract parallel data from large chinese arabic and english nonparallel newspaper corporawe evaluate the quality of the extracted data by showing that it improves the performance of a stateoftheart statistical machine translation systemwe also show that a goodquality mt system can be built from scratch by starting with a very small parallel corpus and exploiting a large nonparallel corpusthus our method can be applied with great benefit to language pairs for which only scarce resources are availablewe use publication date and vectorbased similarity to identify similar news articleswe filter out negative examples with high length difference or low word overlap we define features primarily based on ibm model 1 alignments
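the summary above singles out two concrete components, the word-overlap filter used to discard weak candidate pairs and the features computed from ibm model 1 alignments. the sketch below illustrates both under stated assumptions: the two thresholds and the exact overlap definition are placeholders (the article defers the filter's conditions to its section 2), and dictionary is assumed to map a foreign word to a set of candidate english translations.

```python
def word_overlap_filter(foreign_sent, english_sent, dictionary,
                        max_length_ratio=2.0, min_overlap=0.5):
    # Discard pairs whose lengths differ too much or whose word overlap
    # (fraction of foreign words with a dictionary translation appearing in
    # the English sentence) is too low.  Threshold values are illustrative.
    longer = max(len(foreign_sent), len(english_sent))
    shorter = max(1, min(len(foreign_sent), len(english_sent)))
    if longer / shorter > max_length_ratio:
        return False
    english_words = set(english_sent)
    covered = sum(1 for f in foreign_sent
                  if dictionary.get(f, set()) & english_words)
    return covered / max(1, len(foreign_sent)) >= min_overlap

def no_connection_features(unconnected_foreign, unconnected_english,
                           foreign_len, english_len):
    # Alignment features mentioned in the text: the number and percentage of
    # words, on each side, that are left without any connection.
    return {
        "unconnected_f": len(unconnected_foreign),
        "unconnected_e": len(unconnected_english),
        "pct_unconnected_f": len(unconnected_foreign) / max(1, foreign_len),
        "pct_unconnected_e": len(unconnected_english) / max(1, english_len),
    }
```

in the full system such values would be computed from each of the five alignments mentioned earlier and combined with the general, length-based features.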
similarity of semantic relations are at least two kinds of similarity similarity correspondence between rein contrast with which is correspondence between attributes two words have a high degree of attributional similarity we call them when two pairs of words have a high degree of relational similarity we say that their relations are for example the word pair masonstone is analogous to the pair carpenterwood this article introduces latent relational analysis a method for measuring relational similarity lra has potential applications in many areas including information extraction word sense disambiguation and information retrieval recently the vector space model of information retrieval has been adapted to measuring relational similarity achieving a score of 47 on a collection of 374 collegelevel multiplechoice word analogy questions in the vsm approach the relation between a pair of words is characterized by a vector offrequencies of predefined patterns in a large corpus lra extends the vsm approach in three ways the patterns are derived automatically from the corpus the singular value decomposition is used to smooth the frequency data and automatically generated synonyms are used to explore variations of the word pairs lra achieves 56 on the 374 analogy questions statistically equivalent to the average human score of 57 on the related problem of classifying semantic relations lra achieves similar gains over the vsm there are at least two kinds of similarityrelational similarity is correspondence between relations in contrast with attributional similarity which is correspondence between attributeswhen two words have a high degree of attributional similarity we call them synonymswhen two pairs of words have a high degree of relational similarity we say that their relations are analogousfor example the word pair masonstone is analogous to the pair carpenterwoodthis article introduces latent relational analysis a method for measuring relational similaritylra has potential applications in many areas including information extraction word sense disambiguation and information retrievalrecently the vector space model of information retrieval has been adapted to measuring relational similarity achieving a score of 47 on a collection of 374 collegelevel multiplechoice word analogy questionsin the vsm approach the relation between a pair of words is characterized by a vector offrequencies of predefined patterns in a large corpuslra extends the vsm approach in three ways the patterns are derived automatically from the corpus the singular value decomposition is used to smooth the frequency data and automatically generated synonyms are used to explore variations of the word pairslra achieves 56 on the 374 analogy questions statistically equivalent to the average human score of 57on the related problem of classifying semantic relations lra achieves similar gains over the vsmthere are at least two kinds of similarityattributional similarity is correspondence between attributes and relational similarity is correspondence between relations when two words have a high degree of attributional similarity we call them synonymswhen two word pairs have a high degree of relational similarity we say they are analogousverbal analogies are often written in the form abcd meaning a is to b as c is to d for example trafficstreetwaterriverbedtraffic flows over a street water flows over a riverbeda street carries traffic a riverbed carries waterthere is a high degree of relational similarity between the word pair 
trafficstreet and the word pair waterriverbedin fact this analogy is the basis of several mathematical theories of traffic flow in section 2 we look more closely at the connections between attributional and relational similarityin analogies such as masonstonecarpenterwood it seems that relational similarity can be reduced to attributional similarity since mason and carpenter are attributionally similar as are stone and woodin general this reduction failsconsider the analogy trafficstreetwaterriverbedtraffic and water are not attributionally similarstreet and riverbed are only moderately attributionally similarmany algorithms have been proposed for measuring the attributional similarity between two words measures of attributional similarity have been studied extensively due to their applications in problems such as recognizing synonyms information retrieval determining semantic orientation grading student essays measuring textual cohesion and word sense disambiguation on the other hand since measures of relational similarity are not as well developed as measures of attributional similarity the potential applications of relational similarity are not as well knownmany problems that involve semantic relations would benefit from an algorithm for measuring relational similaritywe discuss related problems in natural language processing information retrieval and information extraction in more detail in section 3this article builds on the vector space model of information retrievalgiven a query a search engine produces a ranked list of documentsthe documents are ranked in order of decreasing attributional similarity between the query and each documentalmost all modern search engines measure attributional similarity using the vsm turney and littman adapt the vsm approach to measuring relational similaritythey used a vector of frequencies of patterns in a corpus to represent the relation between a pair of wordssection 4 presents the vsm approach to measuring similarityin section 5 we present an algorithm for measuring relational similarity which we call latent relational analysis the algorithm learns from a large corpus of unlabeled unstructured text without supervisionlra extends the vsm approach of turney and littman in three ways the connecting patterns are derived automatically from the corpus instead of using a fixed set of patterns singular value decomposition is used to smooth the frequency data given a word pair such as trafficstreet lra considers transformations of the word pair generated by replacing one of the words by synonyms such as trafficroad or traffichighwaysection 6 presents our experimental evaluation of lra with a collection of 374 multiplechoice word analogy questions from the sat college entrance exam1 an example of a typical sat question appears in table 1in the educational testing literature the first pair is called the stem of the analogythe correct choice is called the solution and the incorrect choices are distractorswe evaluate lra by testing its ability to select the solution and avoid the distractorsthe average performance of collegebound senior high school students on verbal sat questions corresponds to an accuracy of about 57lra achieves an accuracy of about 56on these same questions the vsm attained 47one application for relational similarity is classifying semantic relations in nounmodifier pairs in section 7 we evaluate the performance of lra with a set of 600 nounmodifier pairs from nastase and szpakowicz the problem is to classify a nounmodifier pair such as 
laser printer according to the semantic relation between the head noun and the modifier the 600 pairs have been manually labeled with 30 classes of semantic relationsfor example laser printer is classified as instrument the printer uses the laser as an instrument for printingwe approach the task of classifying semantic relations in nounmodifier pairs as a supervised learning problemthe 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbor in the training setlra is used to measure distance lra achieves an accuracy of 398 on the 30class problem and 580 on the 5class problemon the same 600 nounmodifier pairs the vsm had accuracies of 278 and 457 we discuss the experimental results limitations of lra and future work in section 8 and we conclude in section 9in this section we explore connections between attributional and relational similaritymedin goldstone and gentner distinguish attributes and relations as follows attributes are predicates taking one argument whereas relations are predicates taking two or more arguments attributes are used to state properties of objects relations express relations between objects or propositionsgentner notes that what counts as an attribute or a relation can depend on the contextfor example large can be viewed as an attribute of x large or a relation between x and some standard y larger thanthe amount of attributional similarity between two words a and b depends on the degree of correspondence between the properties of a and ba measure of attributional similarity is a function that maps two words a and b to a real number sima e r the more correspondence there is between the properties of a and b the greater their attributional similarityfor example dog and wolf have a relatively high degree of attributional similaritythe amount of relational similarity between two pairs of words ab and cd depends on the degree of correspondence between the relations between a and b and the relations between c and d a measure of relational similarity is a function that maps two pairs ab and cd to a real number simr e r the more correspondence there is between the relations of ab and cd the greater their relational similarityfor example dogbark and catmeow have a relatively high degree of relational similaritycognitive scientists distinguish words that are semantically associated from words that are semantically similar although they recognize that some words are both associated and similar both of these are types of attributional similarity since they are based on correspondence between attributes budanitsky and hirst describe semantic relatedness as follows recent research on the topic in computational linguistics has emphasized the perspective of semantic relatedness of two lexemes in a lexical resource or its inverse semantic distanceit is important to note that semantic relatedness is a more general concept than similarity similar entities are usually assumed to be related by virtue of their likeness but dissimilar entities may also be semantically related by lexical relationships such as meronymy and antonymy or just by any kind of functional relationship or frequent association as these examples show semantic relatedness is the same as attributional similarity here we prefer to use the term attributional similarity because it emphasizes the contrast with relational similaritythe term semantic relatedness may lead to confusion when the term relational similarity is also under 
discussionresnik describes semantic similarity as follows semantic similarity represents a special case of semantic relatedness for example cars and gasoline would seem to be more closely related than say cars and bicycles but the latter pair are certainly more similarrada et al suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic links to the exclusion of other link types that view will also be taken here although admittedly it excludes some potentially useful informationthus semantic similarity is a specific type of attributional similaritythe term semantic similarity is misleading because it refers to a type of attributional similarity yet relational similarity is not any less semantic than attributional similarityto avoid confusion we will use the terms attributional similarity and relational similarity following medin goldstone and gentner instead of semantic similarity or semantically similar we prefer the term taxonomical similarity which we take to be a specific type of attributional similaritywe interpret synonymy as a high degree of attributional similarityanalogy is a high degree of relational similarityalgorithms for measuring attributional similarity can be lexiconbased corpusbased or a hybrid of the two intuitively we might expect that lexiconbased algorithms would be better at capturing synonymy than corpusbased algorithms since lexicons such as wordnet explicitly provide synonymy information that is only implicit in a corpushowever experiments do not support this intuitionseveral algorithms have been evaluated using 80 multiplechoice synonym questions taken from the test of english as a foreign language an example of one of the 80 toefl questions appears in table 2table 3 shows the best performance on the toefl questions for each type of attributional similarity algorithmthe results support the claim that lexiconbased algorithms have no advantage over corpusbased algorithms for recognizing synonymywe may distinguish near analogies from far analogies in an analogy abcd where there is a high degree of relational similarity between ab and cd if there is also a high degree of attributional similarity between a and c and between b and d then abcd is a near analogy otherwise it is a far analogyit seems possible that sat analogy questions might consist largely of near analogies in which case they can be solved using attributional similarity measureswe could score each candidate analogy by the average of the attributional similarity sima between a and c and between b and d this kind of approach was used in two of the thirteen modules in turney et al an example of a typical toefl question from the collection of 80 questionsstem levied to evaluate this approach we applied several measures of attributional similarity to our collection of 374 sat questionsthe performance of the algorithms was measured by precision recall and f defined as follows note that recall is the same as percent correct table 4 shows the experimental results for our set of 374 analogy questionsfor example using the algorithm of hirst and stonge 120 questions were answered correctly 224 incorrectly and 30 questions were skippedwhen the algorithm assigned the same similarity to all of the choices for a given question that question was skippedthe precision was 120 and the recall was 120the first five algorithms in table 4 are implemented in pedersens wordnetsimilarity package2 the sixth algorithm used the waterloo multitext system as described in terra 
and clarke the difference between the lowest performance and random guessing is statistically significant with 95 confidence according to the fisher exact test however the difference between the highest performance and the vsm approach is also statistically significant with 95 confidencewe conclude that there are enough near analogies in the 374 sat questions for attributional similarity to perform better than random guessing but not enough near analogies for attributional similarity to perform as well as relational similaritythis section is a brief survey of the many problems that involve semantic relations and could potentially make use of an algorithm for measuring relational similaritythe problem of recognizing word analogies is given a stem word pair and a finite list of choice word pairs selecting the choice that is most analogous to the stemthis problem was first attempted by a system called argus using a small handbuilt semantic networkargus could only solve the limited set of analogy questions that its programmer had anticipatedargus was based on a spreading activation model and did not explicitly attempt to measure relational similarityturney et al combined 13 independent modules to answer sat questionsthe final output of the system was based on a weighted combination of the outputs of each individual modulethe best of the 13 modules was the vsm which is described in detail in turney and littman the vsm was evaluated on a set of 374 sat questions achieving a score of 47in contrast with the corpusbased approach of turney and littman veale applied a lexiconbased approach to the same 374 sat questions attaining a score of 43veale evaluated the quality of a candidate analogy abcd by looking for paths in wordnet joining a to b and c to d the quality measure was based on the similarity between the ab paths and the cd pathsturney introduced latent relational analysis an enhanced version of the vsm approach which reached 56 on the 374 sat questionshere we go beyond turney by describing lra in more detail performing more extensive experiments and analyzing the algorithm and related work in more depthfrench cites structure mapping theory and its implementation in the structure mapping engine as the most influential work on modeling of analogy makingthe goal of computational modeling of analogy making is to understand how people form complex structured analogiessme takes representations of a source domain and a target domain and produces an analogical mapping between the source and targetthe domains are given structured propositional representations using predicate logicthese descriptions include attributes relations and higherorder relations the analogical mapping connects source domain relations to target domain relationsfor example there is an analogy between the solar system and rutherfords model of the atom the solar system is the source domain and rutherfords model of the atom is the target domainthe basic objects in the source model are the planets and the sunthe basic objects in the target model are the electrons and the nucleusthe planets and the sun have various attributes such as mass and mass and various relations such as revolve and attractslikewise the nucleus and the electrons have attributes such as charge and charge and relations such as revolve and attractssme maps revolve to revolve and attracts to attractseach individual connection to revolve in an analogical mapping implies that the connected relations are similar thus smt requires a measure of relational similarity in 
order to form mapsearly versions of sme only mapped identical relations but later versions of sme allowed similar nonidentical relations to match however the focus of research in analogy making has been on the mapping process as a whole rather than measuring the similarity between any two particular relations hence the similarity measures used in sme at the level of individual connections are somewhat rudimentarywe believe that a more sophisticated measure of relational similarity such as lra may enhance the performance of smelikewise the focus of our work here is on the similarity between particular relations and we ignore systematic mapping between sets of relations so lra may also be enhanced by integration with smemetaphorical language is very common in our daily life so common that we are usually unaware of it gentner et al argue that novel metaphors are understood using analogy but conventional metaphors are simply recalled from memorya conventional metaphor is a metaphor that has become entrenched in our language dolan describes an algorithm that can recognize conventional metaphors but is not suited to novel metaphorsthis suggests that it may be fruitful to combine dolans algorithm for handling conventional metaphorical language with lra and sme for handling novel metaphorslakoff and johnson give many examples of sentences in support of their claim that metaphorical language is ubiquitousthe metaphors in their sample sentences can be expressed using satstyle verbal analogies of the form abcd the first column in table 5 is a list of sentences from lakoff and johnson and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogythe task of classifying semantic relations is to identify the relation between a pair of wordsoften the pairs are restricted to nounmodifier pairs but there are many interesting relations such as antonymy that do not occur in nounmodifier pairshowever nounmodifier pairs are interesting due to their high frequency in englishfor instance wordnet 20 contains more than 26000 nounmodifier pairs although many common nounmodifiers are not in wordnet especially technical termsrosario and hearst and rosario hearst and fillmore classify nounmodifier relations in the medical domain using medical subject headings and unified medical language system as lexical resources for representing each nounmodifier pair with a feature vectorthey trained a neural network to distinguish 13 classes of semantic relationsnastase and szpakowicz explore a similar approach to classifying general nounmodifier pairs using wordnet and rogets thesaurus as lexical resourcesvanderwende used handbuilt rules together with a lexical knowledge base to classify nounmodifier pairsnone of these approaches explicitly involved measuring relational similarity but any classification of semantic relations necessarily employs some implicit notion of relational similarity since members of the same class must be relationally similar to some extentbarker and szpakowicz tried a corpusbased approach that explicitly used a measure of relational similarity but their measure was based on literal matching which limited its ability to generalizemoldovan et al also used a measure of relational similarity based on mapping each noun and modifier into semantic classes in wordnetthe nounmodifier pairs were taken from a corpus and the surrounding context in the corpus was used in a word sense disambiguation algorithm to improve the mapping of the noun and modifier into 
wordnetturney and littman used the vsm to measure relational similaritywe take the same approach here substituting lra for the vsm in section 7lauer used a corpusbased approach to paraphrase noun modifier pairs by inserting the prepositions of for in at on from with and aboutfor example reptile haven was paraphrased as haven for reptileslapata and keller achieved improved results on this task by using the database of altavistas search engine as a corpuswe believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding textif we can identify the semantic relations between the given word and its context then we can disambiguate the given wordyarowskys observation that collocations are almost always monosemous is evidence for this viewfederici montemagni and pirrelli present an analogybased approach to word sense disambiguationfor example consider the word plantout of context plant could refer to an industrial plant or a living organismsuppose plant appears in some text near fooda typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism in this case the decision may not be clear since industrial plants often produce food and living organisms often serve as foodit would be very helpful to know the relation between food and plant in this examplein the phrase food for the plant the relation between food and plant strongly suggests that the plant is a living organism since industrial plants do not need foodin the text food at the plant the relation strongly suggests that the plant is an industrial plant since living organisms are not usually considered as locationsthus an algorithm for classifying semantic relations should be helpful for word sense disambiguationthe problem of relation extraction is given an input document and a specific relation r to extract all pairs of entities that have the relation r in the documentthe problem was introduced as part of the message understanding conferences in 1998zelenko aone and richardella present a kernel method for extracting the relations personaffiliation and organizationlocationfor example in the sentence john smith is the chief scientist of the hardcom corporation there is a personaffiliation relation between john smith and hardcom corporation this is similar to the problem of classifying semantic relations except that information extraction focuses on the relation between a specific pair of entities in a specific document rather than a general pair of words in general texttherefore an algorithm for classifying semantic relations should be useful for information extractionin the vsm approach to classifying semantic relations we would have a training set of labeled examples of the relation personaffiliation for instanceeach example would be represented by a vector of pattern frequenciesgiven a specific document discussing john smith and hardcom corporation we could construct a vector representing the relation between these two entities and then measure the relational similarity between this unlabeled vector and each of our labeled training vectorsit would seem that there is a problem here because the training vectors would be relatively dense since they would presumably be derived from a large corpus but the new unlabeled vector for john smith and hardcom corporation would be very sparse since these entities might be mentioned only once in the given documenthowever this 
is not a new problem for the vsm it is the standard situation when the vsm is used for information retrievala query to a search engine is represented by a very sparse vector whereas a document is represented by a relatively dense vectorthere are wellknown techniques in information retrieval for coping with this disparity such as weighting schemes for query vectors that are different from the weighting schemes for document vectors in their article on classifying semantic relations moldovan et al suggest that an important application of their work is question answering as defined in the text retrieval conference qa track the task is to answer simple questions such as where have nuclear incidents occurred by retrieving a relevant document from a large corpus and then extracting a short string from the document such as the three mile island nuclear incident caused a doe policy crisismoldovan et al propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged textthey argue that the desired semantic relation can easily be inferred from the surface form of the questiona question of the form where is likely to be looking for entities with a location relation and a question of the form what did make is likely to be looking for entities with a product relationin section 7 we show how lra can recognize relations such as location and product hearst presents an algorithm for learning hyponym relations from a corpus and berland and charniak describe how to learn meronym relations from a corpusthese algorithms could be used to automatically generate a thesaurus or dictionary but we would like to handle more relations than hyponymy and meronymywordnet distinguishes more than a dozen semantic relations between words and nastase and szpakowicz list 30 semantic relations for nounmodifier pairshearst and berland and charniak use manually generated rules to mine text for semantic relationsturney and littman also use a manually generated set of 64 patternslra does not use a predefined set of patterns it learns patterns from a large corpusinstead of manually generating new rules or patterns for each new semantic relation it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relationsa nearest neighbor algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations given the appropriate labeled training datagirju badulescu and moldovan present an algorithm for learning meronym relations from a corpuslike hearst and berland and charniak they use manually generated rules to mine text for their desired relationhowever they supplement their manual rules with automatically learned constraints to increase the precision of the rulesveale has developed an algorithm for recognizing certain types of word analogies based on information in wordnethe proposes to use the algorithm for analogical information retrievalfor example the query muslim church should return mosque and the query hindu bible should return the vedasthe algorithm was designed with a focus on analogies of the form adjectivenounadjectivenoun such as christianchurchmuslimmosquea measure of relational similarity is applicable to this taskgiven a pair of words a and b the task is to return another pair of words x and y such that there is high relational similarity between the pair ax and the pair ybfor example given a muslim and b church return x mosque and y christianmarx et al 
developed an unsupervised algorithm for discovering analogies by clustering words from two different corporaeach cluster of words in one corpus is coupled onetoone with a cluster in the other corpusfor example one experiment used a corpus of buddhist documents and a corpus of christian documentsa cluster of words such as hindu mahayana zen from the buddhist corpus was coupled with a cluster of words such as catholic protestant from the christian corpusthus the algorithm appears to have discovered an analogical mapping between buddhist schools and traditions and christian schools and traditionsthis is interesting work but it is not directly applicable to sat analogies because it discovers analogies between clusters of words rather than individual wordsa semantic frame for an event such as judgement contains semantic roles such as judge evaluee and reason whereas an event such as statement contains roles such as speaker addressee and message the task of identifying semantic roles is to label the parts of a sentence according to their semantic roleswe believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations thus a measure of relational similarity should help us to identify semantic rolesmoldovan et al argue that semantic roles are merely a special case of semantic relations since semantic roles always involve verbs or predicates but semantic relations can involve words of any part of speechthis section examines past work on measuring attributional and relational similarity using the vsmthe vsm was first developed for information retrieval and it is at the core of most modern search engines in the vsm approach to information retrieval queries and documents are represented by vectorselements in these vectors are based on the frequencies of words in the corresponding queries and documentsthe frequencies are usually transformed by various formulas and weights tailored to improve the effectiveness of the search engine the attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectorsfor a given query the search engine sorts the matching documents in order of decreasing cosinethe vsm approach has also been used to measure the attributional similarity of words pantel and lin clustered words according to their attributional similarity as measured by a vsmtheir algorithm is able to discover the different senses of polysemous words using unsupervised learninglatent semantic analysis enhances the vsm approach to information retrieval by using the singular value decomposition to smooth the vectors which helps to handle noise and sparseness in the data svd improves both documentquery attributional similarity measures and wordword attributional similarity measures lra also uses svd to smooth vectors as we discuss in section 5let r1 be the semantic relation between a pair of words a and b and let r2 be the semantic relation between another pair c and d we wish to measure the relational similarity between r1 and r2the relations r1 and r2 are not given to us our task is to infer these hidden relations and then compare themin the vsm approach to relational similarity we create vectors r1 and r2 that represent features of r1 and r2 and then measure the similarity of r1 and r2 by the cosine of the angle 0 between r1 and r2 we create a vector r to characterize the relationship between two words x and y by counting the frequencies of various short phrases containing x and y turney and littman 
use a list of 64 joining terms such as of for and to to form 128 phrases that contain x and y such as x of y y of x x for y y for x x to y and y to xthese phrases are then used as queries for a search engine and the number of hits is recorded for each querythis process yields a vector of 128 numbersif the number of hits for a query is x then the corresponding element in the vector r is logseveral authors report that the logarithmic transformation of frequencies improves cosinebased similarity measures turney and littman evaluated the vsm approach by its performance on 374 sat analogy questions achieving a score of 47since there are five choices for each question the expected score for random guessing is 20to answer a multiplechoice analogy question vectors are created for the stem pair and each choice pair and then cosines are calculated for the angles between the stem pair and each choice pairthe best guess is the choice pair with the highest cosinewe use the same set of analogy questions to evaluate lra in secti on 6computational linguistics volume 32 number 3 the vsm was also evaluated by its performance as a distance measure in a supervised nearest neighbor classifier for nounmodifier semantic relations the evaluation used 600 handlabeled nounmodifier pairs from nastase and szpakowicz a testing pair is classified by searching for its single nearest neighbor in the labeled training datathe best guess is the label for the training pair with the highest cosinelra is evaluated with the same set of nounmodifier pairs in section 7turney and littman used the altavista search engine to obtain the frequency information required to build vectors for the vsmthus their corpus was the set of all web pages indexed by altavistaat the time the english subset of this corpus consisted of about 5 1011 wordsaround april 2004 altavista made substantial changes to their search engine removing their advanced search operatorstheir search engine no longer supports the asterisk operator which was used by turney and littman for stemming and wildcard searchingaltavista also changed their policy toward automated searching which is now forbidden3 turney and littman used altavistas hit count which is the number of documents matching a given query but lra uses the number of passages matching a queryin our experiments with lra we use a local copy of the waterloo multitext system running on a 16 cpu beowulf cluster with a corpus of about 5 1010 english wordsthe wmts is a distributed search engine designed primarily for passage retrieval the text and index require approximately one terabyte of disk spacealthough altavista only gives a rough estimate of the number of matching documents the wmts gives exact counts of the number of matching passagesturney et al combine 13 independent modules to answer sat questionsthe performance of lra significantly surpasses this combined system but there is no real contest between these approaches because we can simply add lra to the combination as a fourteenth modulesince the vsm module had the best performance of the 13 modules the following experiments focus on comparing vsm and lralra takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairslra relies on three resources a search engine with a very large corpus of text a broadcoverage thesaurus of synonyms and an efficient implementation of svdwe first present a short description of the core algorithmlater in the following subsections we will give a detailed 
description of the algorithm as it is applied in the experiments in sections 6 and 7 intended to form near analogies with the corresponding original pairs the motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpusthe hope is that we can find near analogies for the original pairs such that the near analogies cooccur more frequently in the corpusthe danger is that the alternates may have different relations from the originalsthe filtering steps above aim to reduce this riskin our experiments the input set contains from 600 to 2244 word pairsthe output similarity measure is based on cosines so the degree of similarity can range from 1 to 1 before applying svd the vectors are completely nonnegative which implies that the cosine can only range from 0 to 1 but svd introduces negative values so it is possible for the cosine to be negative although we have never observed this in our experimentsin the following experiments we use a local copy of the wmts 4 the corpus consists of about 5 x 1010 english words gathered by a web crawler mainly from us academic web sitesthe web pages cover a very wide range of topics styles genres quality and writing skillthe wmts is well suited to lra because the wmts scales well to large corpora it gives exact frequency counts it is designed for passage retrieval and it has a powerful query syntaxas a source of synonyms we use lins automatically generated thesaurusthis thesaurus is available through an online interactive demonstration or it can be downloaded5 we used the online demonstration since the downloadable version seems to contain fewer wordsfor each word in the input set of word pairs we automatically query the online demonstration and fetch the resulting list of synonymsas a courtesy to other users of lins online system we insert a 20second delay between each two querieslins thesaurus was generated by parsing a corpus of about 5 x 107 english words consisting of text from the wall street journal san jose mercury and ap newswire the parser was used to extract pairs of words and their grammatical relationswords were then clustered into synonym sets based on the similarity of their grammatical relationstwo words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of wordsgiven a word and its part of speech lins thesaurus provides a list of words sorted in order of decreasing attributional similaritythis sorting is convenient for lra since it makes it possible to focus on words with higher attributional similarity and ignore the restwordnet in contrast given a word and its part of speech provides a list of words grouped by the possible senses of the given word with groups sorted by the frequencies of the senseswordnets sorting does not directly correspond to sorting by degree of attributional similarity although various algorithms have been proposed for deriving attributional similarity from wordnet we use rohdes svdlibc implementation of the svd which is based on svdpackc 6 in lra svd is used to reduce noise and compensate for sparsenesswe will go through each step of lra using an example to illustrate the stepsassume that the input to lra is the 374 multiplechoice sat word analogy questions of turney and littman since there are six word pairs per question the input consists of 2244 word pairslet us suppose that we wish to calculate the relational similarity between the pair quartvolume and the pair miledistance taken from the sat question in 
Table 6. The LRA algorithm consists of the following 12 steps. Filter the alternates as follows: for each alternate pair, send a query to the WMTS to find the frequency of phrases that begin with one member of the pair and end with the other. The phrases cannot have more than max_phrase words. Sort the alternate pairs by the frequency of their phrases. Select the top num_filter most frequent alternates and discard the remainder; this step tends to eliminate alternates that have no clear semantic relation. The third column in Table 7 shows the frequency with which each pair co-occurs in a window of max_phrase words. The last column in Table 7 shows the pairs that are selected. (Table 7: alternate forms of the original pair quart:volume. The first column shows the original pair and the alternate pairs. The second column shows Lin's similarity score for the alternate word compared to the original word; for example, the similarity between quart and pint is 0.210. The third column shows the frequency of the pair in the WMTS corpus. The fourth column shows the pairs that pass the filtering step.) Find phrases: for a given pair, the phrases cannot have more than max_phrase words, and there must be at least one word between the two members of the word pair. These phrases give us information about the semantic relations between the words in each pair; a phrase with no words between the two members of the word pair would give us very little information about the semantic relations. Table 8 gives some examples of phrases in the corpus that match the pair quart:volume. 4. Find patterns: for each phrase found in the previous step, build patterns from the intervening words. A pattern is constructed by replacing any, all, or none of the intervening words with wild cards. If a phrase is n words long, there are n - 2 intervening words between the members of the given word pair; thus a phrase with n words generates 2^(n-2) patterns. For each pattern, count the number of pairs with phrases that match the pattern. Keep the top num_patterns most frequent patterns and discard the rest; typically there will be millions of patterns, so it is not feasible to keep them all. (A small illustrative sketch of this pattern-generation step and of the weighting described next follows this passage.) The entropy transformation gives more weight to columns with frequencies that vary substantially from one row to the next and less weight to columns that are uniform. Therefore we weight the cell x_ij by w_j = 1 - H_j / log(m), where H_j is the entropy of the column's distribution p_ij and m is the number of rows; this weight varies from 0 when p_ij is uniform to 1 when the entropy is minimal. We also apply the log transformation to frequencies, log(x_ij + 1). For all i and all j, replace the original value x_ij in X by the new value w_j log(x_ij + 1). This is an instance of the term frequency-inverse document frequency (tf-idf) family of transformations, which is familiar in information retrieval: log(x_ij + 1) is the tf term and w_j is the idf term. The singular value decomposition factors X as UΣV^T, and truncating to the top k singular values approximates the original matrix X in the sense that it minimizes the approximation errors; that is, X_hat = U_kΣ_kV_k^T minimizes ||X_hat - X||_F over all matrices X_hat of rank k, where ||.||_F denotes the Frobenius norm. We may think of this matrix U_kΣ_kV_k^T as a smoothed or compressed version of the original matrix. In the subsequent steps we will be calculating cosines for row vectors. For this purpose we can simplify calculations by dropping V: the cosine of two vectors is their dot product after they have been normalized to unit length, and the matrix XX^T contains the dot products of all of the row vectors. We can find the dot product of the ith and jth row vectors by looking at the cell in row i, column j of the matrix XX^T. Since V^T V = I, we have XX^T = (UΣV^T)(UΣV^T)^T = UΣV^T V Σ^T U^T = UΣ(UΣ)^T, which means that we can calculate cosines with the smaller matrix UΣ instead of using X = UΣV^T. 10. Projection: calculate U_kΣ_k. This matrix has the same number of rows as X but only
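The following sketch (illustrative only, with invented helper names) shows the two mechanical pieces referred to above: building the 2^(n-2) wildcard patterns from a phrase's intervening words, and the log/entropy cell weighting w_j * log(x_ij + 1).

```python
import math
from itertools import product

def patterns_from_phrase(phrase: list[str]) -> set[tuple[str, ...]]:
    """Replace any subset of the intervening words with '*' wildcards.
    A phrase of n words has n - 2 intervening words, hence 2**(n-2) patterns."""
    first, *inner, last = phrase
    pats = set()
    for mask in product([False, True], repeat=len(inner)):
        body = tuple("*" if m else w for m, w in zip(mask, inner))
        pats.add((first,) + body + (last,))
    return pats

# e.g. ["quart", "of", "milk", "volume"] -> 4 patterns:
#   quart of milk volume, quart * milk volume, quart of * volume, quart * * volume

def log_entropy_weight(X: list[list[float]]) -> list[list[float]]:
    """Replace each cell x_ij by w_j * log(x_ij + 1), where w_j = 1 - H_j/log(m)
    down-weights columns whose frequencies are close to uniform over the m rows."""
    m, n = len(X), len(X[0])
    W = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col_sum = sum(X[i][j] for i in range(m))
        H = 0.0
        for i in range(m):
            p = X[i][j] / col_sum if col_sum else 0.0
            if p > 0:
                H -= p * math.log(p)
        w = 1.0 - H / math.log(m) if m > 1 else 1.0
        for i in range(m):
            W[i][j] = w * math.log(X[i][j] + 1)
    return W
```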
k columns we can compare two word pairs by calculating the cosine of the corresponding row vectors in ukekthe row vector for each word pair has been projected from the original 8000 dimensional space into a new 300 dimensional spacethe value k 300 is recommended by landauer and dumais for measuring the attributional similarity between wordswe investigate other values in section 64therefore we have 2 ways to compare a version of ab with a version of cd look for the row vectors in ukek that correspond to the versions of ab and the versions of cd and calculate the 2 cosines for example suppose ab is quartvolume and cd is miledistancetable 10 gives the cosines for the sixteen combinations12calculate relational similarity the relational similarity between ab and cd is the average of the cosines among the 2 cosines from step 11 that are greater than or equal to the cosine of the original pairs ab and cd the requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies which may be introduced in step 1 and may have slipped through the filtering in step 2averaging the cosines as opposed to taking their maximum is intended to provide some resistance to noisefor quartvolume and miledistance the third column in table 10 shows which alternates are used to calculate the averagefor these two pairs the average of the selected cosines is 0677in table 7 we see that pumpingvolume has slipped through the filtering in step 2 although it is not a good alternate for quartvolumehowever table 10 shows that all four analogies that involve pumpingvolume are dropped here in step 12steps 11 and 12 can be repeated for each two input pairs that are to be comparedthis completes the description of lratable 11 gives the cosines for the sample sat questionthe choice pair with the highest average cosine choice is the solution for this question lra answers the question correctlyfor comparison column 2 gives the cosines for the original pairs and column 3 gives the highest cosinefor this particular sat question there is one choice that has the highest cosine for all three columns choice although this is not true in generalnote that the gap between the first choice and the second choice is largest for the average cosines this suggests that the average of the cosines is better at discriminating the correct choice than either the original cosine or the highest cosine this section presents various experiments with 374 multiplechoice sat word analogy questionstable 12 shows the performance of the baseline lra system on the 374 sat questions using the parameter settings and configuration described in section 5lra correctly answered 210 of the 374 questions 160 questions were answered incorrectly and 4 questions were skipped because the stem pair and its alternates were represented by zero vectorsthe performance of lra is significantly better than the lexiconbased approach of veale and the best performance using attributional similarity with 95 confidence according to the fisher exact test as another point of reference consider the simple strategy of always guessing the choice with the highest cooccurrence frequencythe idea here is that the words in the solution pair may occur together frequently because there is presumably a clear and meaningful relation between the solution words whereas the distractors may only occur together rarely because they have no meaningful relationthis strategy is signifcantly worse than random guessingthe opposite strategy always guessing the 
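A minimal sketch of step 12 as described: among the cosines for the combinations of alternates, average only those at least as large as the cosine of the original pairs. The function name and inputs are illustrative.

```python
def relational_similarity(cosines: list[float], cos_original: float) -> float:
    """Step 12: average the cosines (from the combinations of step 11) that are
    greater than or equal to the cosine of the original pair combination.
    Assuming cos_original is itself one of `cosines`, the selection is never empty."""
    selected = [c for c in cosines if c >= cos_original]
    return sum(selected) / len(selected)

# Illustrative numbers only (not the article's Table 10 values):
# relational_similarity([0.71, 0.64, 0.58, 0.52, 0.49], cos_original=0.58) -> 0.643...
```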
choice pair with the lowest cooccurrence frequency is also worse than random guessing it appears that the designers of the sat questions deliberately chose distractors that would thwart these two strategieswith 374 questions and six word pairs per question there are 2244 pairs in the input setin step 2 introducing alternate pairs multiplies the number of pairs by four resulting in 8976 pairsin step 5 for each pair ab we add ba yielding 17952 pairshowever some pairs are dropped because they correspond to zero vectors also a few words do not appear in lins thesaurus and some word pairs appear twice in the sat questions the sparse matrix has 17232 rows and 8000 columns with a density of 58 table 13 gives the time required for each step of lra a total of almost 9 daysall of the steps used a single cpu on a desktop computer except step 3 finding the phrases for each word pair which used a 16 cpu beowulf clustermost of the other steps are parallelizable with a bit of programming effort they could also be executed on the beowulf clusterall cpus were 24 ghz intel xeonsthe desktop computer had 2 gb of ram and the cluster had a total of 16 gb of ram from turney and littman as mentioned in section 42 we estimate this corpus contained about 5 1011 english words at the time the vsmav experiments took placevsmwmts refers to the vsm using the wmts which contains about 5 1010 english wordswe generated the vsmwmts results by adapting the vsm to the wmtsthe algorithm is slightly different from turney and littmans because we used passage frequencies instead of document frequenciesall three pairwise differences in recall in table 14 are statistically significant with 95 confidence using the fisher exact test the pairwise differences in precision between lra and the two vsm variations are also significant but the difference in precision between the two vsm variations is not significantalthough vsmav has a corpus 10 times larger than lras lra still performs better than vsmavcomparing vsmav to vsmwmts the smaller corpus has reduced the score of the vsm but much of the drop is due to the larger number of questions that were skipped with the smaller corpus many more of the input word pairs simply do not appear together in short phrases in the corpuslra is able to answer as many questions as vsmav although it uses the same corpus as vsmwmts because lins thesaurus allows lra to substitute synonyms for words that are not in the corpusvsmav required 17 days to process the 374 analogy questions compared to 9 days for lraas a courtesy to altavista turney and littman inserted a 5second delay between each two queriessince the wmts is running locally there is no need for delaysvsmwmts processed the questions in only one daythe average performance of collegebound senior high school students on verbal sat questions corresponds to a recall of about 57 the sat i test consists of 78 verbal questions and 60 math questions analogy questions are only a subset of the 78 verbal sat questionsif we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal sat i questions then we can estimate that the average collegebound senior would correctly answer about 57 of the 374 analogy questionsof our 374 sat questions 190 are from a collection of ten official sat tests on this subset of the questions lra has a recall of 611 compared to a recall of 511 on the other 184 questionsthe 184 questions that are not from claman seem to be more difficultthis indicates that we may be underestimating how 
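As a sketch of the sparse-matrix smoothing and projection described in the preceding steps, here is one way to compute U_kΣ_k with SciPy; the article itself uses SVDLIBC/SVDPACKC rather than SciPy, so this is only an approximation of the setup, with the shapes in the comments taken from the SAT experiment.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def project_rows(X: csr_matrix, k: int = 300) -> np.ndarray:
    """Truncated SVD X ~ U_k S_k V_k^T; return U_k S_k, whose row cosines
    equal the cosines of the smoothed row vectors of X."""
    u, s, vt = svds(X, k=k)          # SciPy returns singular values in ascending order
    order = np.argsort(-s)           # reorder largest-first (cosines are unaffected)
    return u[:, order] * s[order]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

# Illustrative shapes: the SAT experiment's matrix is 17,232 x 8,000 with
# density around 5.8%, projected into a 300-dimensional space.
# rows = project_rows(X, k=300); sim = cosine(rows[i], rows[j])
```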
well lra performs relative to collegebound senior high school studentsclaman suggests that the analogy questions may be somewhat harder than other verbal sat questions so we may be slightly overestimating the mean human score on the analogy questionstable 15 gives the 95 confidence intervals for lra vsmav and vsmwmts calculated by the binomial exact test there is no significant difference between lra and human performance but vsmav and vsmwmts are significantly below humanlevel performancethere are several parameters in the lra algorithm the parameter values were determined by trying a small number of possible values on a small set of questions that were set asidesince lra is intended to be an unsupervised learning algorithm we did not attempt to tune the parameter values to maximize the precision and recall on the 374 sat questionswe hypothesized that lra is relatively insensitive to the values of the parameterstable 16 shows the variation in the performance of lra as the parameter values are adjustedwe take the baseline parameter settings and vary each parameter one at a time while holding the remaining parameters fixed at their baseline valuesnone of the precision and recall values are significantly different from the baseline according to the fisher exact test at the 95 confidence levelthis supports the hypothesis that the algorithm is not sensitive to the parameter valuesalthough a full run of lra on the 374 sat questions takes 9 days for some of the parameters it is possible to reuse cached data from previous runswe limited the experiments with num sim and max phrase because caching was not as helpful for these parameters so experimenting with them required several weeksas mentioned in the introduction lra extends the vsm approach of turney and littman by exploring variations on the analogies by replacing words with synonyms automatically generating connecting patterns and smoothing the data with svd in this subsection we ablate each of these three components to assess their contribution to the performance of lratable 17 shows the resultswithout svd performance drops but the drop is not statistically significant with 95 confidence according to the fisher exact test however we hypothesize that the drop in performance would be significant with a larger set of word pairsmore word pairs would increase the sample size which would decrease the 95 confidence interval which would likely show that svd is making a significant contributionfurthermore more word pairs would increase the matrix size which would give svd more leveragefor example landauer and dumais apply svd to a matrix of 30473 columns by 60768 rows but our matrix here is 8000 columns by 17232 rowswe are currently gathering more sat questions to test this hypothesiswithout synonyms recall drops significantly but the drop in precision is not significantwhen the synonym component is dropped the number of skipped questions rises from 4 to 22 which demonstrates the value of the synonym component of lra for compensating for sparse datawhen both svd and synonyms are dropped the decrease in recall is significant but the decrease in precision is not significantagain we believe that a larger sample size would show that the drop in precision is significantif we eliminate both synonyms and svd from lra all that distinguishes lra from vsmwmts is the patterns the vsm approach uses a fixed list of 64 patterns to generate 128 dimensional vectors whereas lra uses a dynamically generated set of 4000 patterns resulting in 8000 dimensional vectorswe 
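The significance claims in these comparisons rest on the Fisher exact test over counts of correct and not-correct answers; a small SciPy sketch follows, using LRA's reported 210 correct out of 374 and a placeholder count for the second system.

```python
from scipy.stats import fisher_exact

def compare_recall(correct_a: int, total_a: int, correct_b: int, total_b: int,
                   alpha: float = 0.05) -> bool:
    """Two-sided Fisher exact test on a 2x2 table of correct vs. not-correct
    answers; returns True if the difference is significant at level alpha."""
    table = [[correct_a, total_a - correct_a],
             [correct_b, total_b - correct_b]]
    _, p = fisher_exact(table, alternative="two-sided")
    return p < alpha

# LRA answered 210 of 374 SAT questions correctly; the second system's count
# below is a placeholder, not a figure from the article:
# compare_recall(210, 374, 176, 374)
```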
can see the value of the automatically generated patterns by comparing lra without synonyms and svd to vsmwmts the difference in both precision and recall is statistically significant with 95 confidence according to the fisher exact test the ablation experiments support the value of the patterns and synonyms in lra but the contribution of svd has not been proven although we believe more data will support its effectivenessnonetheless the three components together result in a 16 increase in f we know a priori that if abcd then badc for example mason is to stone as carpenter is to wood implies stone is to mason as wood is to carpentertherefore a good measure of relational similarity simr should obey the following equation in steps 5 and 6 of the lra algorithm we ensure that the matrix x is symmetrical so that equation is necessarily true for lrathe matrix is designed so that the row vector for ab is different from the row vector for ba only by a permutation of the elementsthe same permutation distinguishes the row vectors for cd and dc therefore the cosine of the angle between ab and cd must be identical to the cosine of the angle between ba and dc to discover the consequences of this design decision we altered steps 5 and 6 so that symmetry is no longer preservedin step 5 for each word pair ab that appears in the input set we only have one rowthere is no row for ba unless ba also appears in the input setthus the number of rows in the matrix dropped from 17232 to 8616in step 6 we no longer have two columns for each pattern p one for word1 p word2 and another for word2 p word1 however to be fair we kept the total number of columns at 8000in step 4 we selected the top 8000 patterns distinguishing the pattern word1 p word2 from the pattern word2 p word1 thus a pattern p with a high frequency is likely to appear in two columns in both possible orders but a lower frequency pattern might appear in only one column in only one possible orderthese changes resulted in a slight decrease in performancerecall dropped from 561 to 553 and precision dropped from 568 to 559the decrease is not statistically significanthowever the modified algorithm no longer obeys equation although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the sat questions we prefer to retain symmetry to ensure that equation is satisfiednote that if abcd it does not follow that bacd for example it is false that stone is to mason as carpenter is to wood in general we have the following inequality therefore we do not want ab and ba to be represented by identical row vectors although it would ensure that equation is satisfiedin step 12 of lra the relational similarity between ab and cd is the average of the cosines among the 2 cosines from step 11 that are greater than or equal to the cosine of the original pairs ab and cd that is the average includes only those alternates that are better than the originalstaking all alternates instead of the better alternates recall drops from 561 to 404 and precision drops from 568 to 408both decreases are statistically significant with 95 confidence according to the fisher exact test suppose a word pair ab corresponds to a vector r in the matrix xit would be convenient if inspection of r gave us a simple explanation or description of the relation between a and bfor example suppose the word pair ostrichbird maps to the row vector r it would be pleasing to look in r and find that the largest element corresponds to the pattern is the largest unfortunately 
inspection of r reveals no such convenient patternswe hypothesize that the semantic content of a vector is distributed over the whole vector it is not concentrated in a few elementsto test this hypothesis we modified step 10 of lrainstead of projecting the 8000 dimensional vectors into the 300 dimensional space ukek we use the matrix ukekvtk this matrix yields the same cosines as ukek but preserves the original 8000 dimensions making it easier to interpret the row vectorsfor each row vector in ukekvtk we select the n largest values and set all other values to zerothe idea here is that we will only pay attention to the n most important patterns in r the remaining patterns will be ignoredthis reduces the length of the row vectors but the cosine is the dot product of normalized vectors so the change to the vector lengths has no impact only the angle of the vectors is importantif most of the semantic content is in the n largest elements of r then setting the remaining elements to zero should have relatively little impacttable 18 shows the performance as n varies from 1 to 3000the precision and recall are significantly below the baseline lra until n 300 in other words for a typical sat analogy question we need to examine the top 300 patterns to explain why lra selected one choice instead of anotherwe are currently working on an extension of lra that will explain with a single pattern why one choice is better than anotherwe have had some promising results but this work is not yet maturehowever we can confidently claim that interpreting the vectors is not trivialturney and littman used 64 manually generated patterns whereas lra uses 4000 automatically generated patternswe know from section 65 that the automatically generated patterns are significantly better than the manually generated patternsit may be interesting to see how many of the manually generated patterns appear within the automatically generated patternsif we require an exact match 50 of the 64 manual patterns can be found in the automatic patternsif we are lenient about wildcards and count the pattern not the as matching not the then 60 of the 64 manual patterns appear within the automatic patternsthis suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns rather than a qualitative difference in the patternsturney and littman point out that some of their 64 patterns have been used by other researchersfor example hearst used the pattern such as to discover hyponyms and berland and charniak used the pattern of the to discover meronymsboth of these patterns are included in the 4000 patterns automatically generated by lrathe novelty in turney and littman is that their patterns are not used to mine text for instances of word pairs that fit the patterns instead they are used to gather frequency data for building vectors that represent the relation between a given pair of wordsthe results in section 68 show that a vector contains more information than any single pattern or small set of patterns a vector is a distributed representationlra is distinct from hearst and berland and charniak in its focus on distributed representations which it shares with turney and littman but lra goes beyond turney and littman by finding patterns automaticallyriloff and jones and yangarber also find patterns automatically but their goal is to mine text for instances of word pairs the same goal as hearst and berland and charniak because lra uses patterns to build distributed vector representations it 
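A small sketch of the interpretability probe described above: keep only the n largest entries of a smoothed row vector and zero the rest; the change in vector length is irrelevant because only the angle enters the cosine. Whether "largest" means largest signed value or largest magnitude is not fully clear from the text, so both readings are noted in the comment.

```python
import numpy as np

def keep_top_n(r: np.ndarray, n: int) -> np.ndarray:
    """Zero all but the n largest entries of r (largest signed values, as the
    text says; use np.argsort(-np.abs(r)) for a largest-magnitude reading)."""
    out = np.zeros_like(r)
    idx = np.argsort(-r)[:n]
    out[idx] = r[idx]
    return out

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

# Compare cosine(keep_top_n(r1, n), keep_top_n(r2, n)) with cosine(r1, r2);
# the article reports that n must reach roughly 300 before performance
# matches the full vectors.
```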
can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of hearst berland and charniak riloff and jones and yangarber therefore lra can simply select the highest frequency patterns it does not need the more sophisticated selection algorithms of riloff and jones and yangarber this section describes experiments with 600 nounmodifier pairs handlabeled with 30 classes of semantic relations in the following experiments lra is used with the baseline parameter values exactly as described in section 55no adjustments were made to tune lra to the nounmodifier pairslra is used as a distance measure in a single nearest neighbor supervised learning algorithmthe following experiments use the 600 labeled nounmodifier pairs of nastase and szpakowicz this data set includes information about the part of speech and wordnet synset of each word but our algorithm does not use this informationtable 19 lists the 30 classes of semantic relationsthe table is based on appendix a of nastase and szpakowicz with some simplificationsthe original table listed several semantic relations for which there were no instances in the data setthese were relations that are typically expressed with longer phrases rather than nounmodifier word pairsfor clarity we decided not to include these relations in table 19in this table h represents the head noun and m represents the modifierfor example in flu virus the head noun is virus and the modifier is flu in english the modifier usually precedes the head nounin the description of purpose v represents an arbitrary verbin concert hall the hall is for presenting concerts or holding concerts nastase and szpakowicz organized the relations into groupsthe five capitalized terms in the relation column of table 19 are the names of five groups of semantic relationswe make use of this grouping in the following experimentsthe following experiments use single nearest neighbor classification with leaveoneout crossvalidationfor leaveoneout crossvalidation the testing set consists of a single nounmodifier pair and the training set consists of the 599 remaining nounmodifiersthe data set is split 600 times so that each nounmodifier gets a turn as the testing word pairthe predicted class of the testing pair is the class of the single nearest neighbor in the training setas the measure of nearness we use lra to calculate the relational similarity between the testing pair and the training pairsthe single nearest neighbor algorithm is a supervised learning algorithm but we are using lra to measure the distance between a pair and its potential neighbors and lra is itself determined in an unsupervised fashion each sat question has five choices so answering 374 sat questions required calculating 374 x 5 x 16 29920 cosinesthe factor of 16 comes from the alternate pairs step 11 in lrawith the nounmodifier pairs using leaveoneout crossvalidation each test pair has 599 choices so an exhaustive application of lra would require calculating 600 x 599 x 16 5750400 cosinesto reduce the amount of computation required we first find the 30 nearest neighbors for each pair ignoring the alternate pairs and then apply the full lra including the alternates to just those 30 neighbors which requires calculating only 359400 288 000 647400 cosinesthere are 600 word pairs in the input set for lrain step 2 introducing alternate pairs multiplies the number of pairs by four resulting in 2400 pairsin step 5 for each pair ab we add ba yielding 4800 pairshowever 
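A schematic of the leave-one-out single-nearest-neighbour evaluation used for the noun-modifier data, assuming a precomputed matrix sim[i][j] of LRA relational similarities (hypothetical names); the article's shortcut of short-listing 30 neighbours before running the full LRA measure is noted but not implemented here.

```python
def loo_1nn_accuracy(sim, labels):
    """Leave-one-out 1-NN: each pair is labeled with the class of its single most
    similar other pair; returns the fraction of pairs classified correctly.
    (The article first finds 30 candidate neighbours without alternate pairs and
    only applies the full LRA measure to those; that optimization is omitted.)"""
    n = len(labels)
    correct = 0
    for i in range(n):
        best_j = max((j for j in range(n) if j != i), key=lambda j: sim[i][j])
        correct += labels[best_j] == labels[i]
    return correct / n
```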
some pairs are dropped because they correspond to zero vectors and a few words do not appear in lins thesaurusthe sparse matrix has 4748 rows and 8000 columns with a density of 84following turney and littman we evaluate the performance by accuracy and also by the macroaveraged f measure macroaveraging calculates the precision recall and f for each class separately and then calculates the average across all classesmicroaveraging combines the true positive false positive and false negative counts for all of the classes and then calculates precision recall and f from the combined countsmacroaveraging gives equal weight to all classes but microaveraging gives more weight to larger classeswe use macroaveraging because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpusclassification with 30 distinct classes is a hard problemto make the task easier we can collapse the 30 classes to 5 classes using the grouping that is given in table 19for example agent and beneficiary both collapse to participanton the 30 class problem lra with the single nearest neighbor algorithm achieves an accuracy of 398 and a macroaveraged f of 366always guessing the majority class would result in an accuracy of 82 on the 5 class problem the accuracy is 580 and the macroaveraged f is 546always guessing the majority class would give an accuracy of 433 for both the 30 class and 5 class problems lras accuracy is significantly higher than guessing the majority class with 95 confidence according to the fisher exact test table 20 shows the performance of lra and vsm on the 30 class problemvsmav is vsm with the altavista corpus and vsmwmts is vsm with the wmts corpusthe results for vsmav are taken from turney and littman all three pairwise differences in the three f measures are statistically significant at the 95 level according to the paired ttest the accuracy of lra is significantly higher than the accuracies of vsmav and vsmwmts according to the fisher exact test but the difference between the two vsm accuracies is not significanttable 21 compares the performance of lra and vsm on the 5 class problemthe accuracy and f measure of lra are significantly higher than the accuracies andthe experimental results in sections 6 and 7 demonstrate that lra performs significantly better than the vsm but it is also clear that there is room for improvementthe accuracy might not yet be adequate for practical applications although past work has shown that it is possible to adjust the tradeoff of precision versus recall for some of the applications such as information extraction lra might be suitable if it is adjusted for high precision at the expense of low recallanother limitation is speed it took almost 9 days for lra to answer 374 analogy questionshowever with progress in computer hardware speed will gradually become less of a concernalso the software has not been optimized for speed there are several places where the efficiency could be increased and many operations are parallelizableit may also be possible to precompute much of the information for lra although this would require substantial changes to the algorithmthe difference in performance between vsmav and vsmwmts shows that vsm is sensitive to the size of the corpusalthough lra is able to surpass vsmav when the wmts corpus is only about one tenth the size of the av corpus it seems likely that lra would perform better with a larger corpusthe wmts corpus requires one terabyte of hard disk space but progress 
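A small sketch of the macro-averaged F computation described here: precision, recall, and F are computed per class and then averaged with equal weight, in contrast to micro-averaging, which pools the counts across classes.

```python
from collections import Counter

def macro_f1(gold, pred):
    """Macro-averaged F: average of per-class F scores, each class weighted
    equally regardless of class size."""
    classes = set(gold) | set(pred)
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    f_scores = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f_scores) / len(f_scores)
```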
in hardware will likely make 10 or even 100 terabytes affordable in the relatively near futurefor nounmodifier classification more labeled data should yield performance improvementswith 600 nounmodifier pairs and 30 classes the average class has only 20 exampleswe expect that the accuracy would improve substantially with 5 or 10 times more examplesunfortunately it is time consuming and expensive to acquire handlabeled dataanother issue with nounmodifier classification is the choice of classification scheme for the semantic relationsthe 30 classes of nastase and szpakowicz might not be the best schemeother researchers have proposed different schemes it seems likely that some schemes are easier for machine learning than othersfor some applications 30 classes may not be necessary the 5 class scheme may be sufficientlra like vsm is a corpusbased approach to measuring relational similaritypast work suggests that a hybrid approach combining multiple modules some corpusbased some lexiconbased will surpass any purebred approach in future work it would be natural to combine the corpusbased approach of lra with the lexiconbased approach of veale perhaps using the combination method of turney et al svd is only one of many methods for handling sparse noisy datawe have also experimented with nonnegative matrix factorization probabilistic latent semantic analysis kernel principal components analysis and iterative scaling we had some interesting results with small matrices but none of these methods seemed substantially better than svd and none of them scaled up to the matrix sizes we are using here in step 4 of lra we simply select the top num patterns most frequent patterns and discard the remaining patternsperhaps a more sophisticated selection algorithm would improve the performance of lrawe have tried a variety of ways of selecting patterns but it seems that the method of selection has little impact on performancewe hypothesize that the distributed vector representation is not sensitive to the selection method but it is possible that future work will find a method that yields significant improvement in performancethis article has introduced a new method for calculating relational similarity latent relational analysisthe experiments demonstrate that lra performs better than the vsm approach when evaluated with sat word analogy questions and with the task of classifying nounmodifier expressionsthe vsm approach represents the relation between a pair of words with a vector in which the elements are based on the frequencies of 64 handbuilt patterns in a large corpuslra extends this approach in three ways the patterns are generated dynamically from the corpus svd is used to smooth the data and a thesaurus is used to explore variations of the word pairswith the wmts corpus lra achieves an f of 565 whereas the f of vsm is 403we have presented several examples of the many potential applications for measures of relational similarityjust as attributional similarity measures have proven to have many practical uses we expect that relational similarity measures will soon become widely usedgentner et al argue that relational similarity is essential to understanding novel metaphors many researchers have argued that metaphor is the heart of human thinking we believe that relational similarity plays a fundamental role in the mind and therefore relational similarity measures could be crucial for artificial intelligencein future work we plan to investigate some potential applications for lrait is possible that the error 
rate of lra is still too high for practical applications but the fact that lra matches average human performance on sat analogy questions is encouragingthanks to michael littman for sharing the 374 sat analogy questions and for inspiring me to tackle themthanks to vivi nastase and stan szpakowicz for sharing their 600 classified nounmodifier phrasesthanks to egidio terra charlie clarke and the school of computer science of the university of waterloo for giving us a copy of the waterloo multitext system and their terabyte corpusthanks to dekang lin for making his dependencybased word similarity lexicon available onlinethanks to doug rohde for svdlibc and michael berry for svdpackthanks to ted pedersen for making his wordnetsimilarity package availablethanks to joel martin for comments on the articlethanks to the anonymous reviewers of computational linguistics for their very helpful comments and suggestions
J06-3003
Similarity of Semantic Relations. There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This article introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: the patterns are derived automatically from the corpus, the singular value decomposition is used to smooth the frequency data, and automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM. We develop a corpus-based approach to model relational similarity, addressing the distinction between synonyms and antonyms. We describe a method that extracts subsequence patterns for noun pairs from a large corpus, using query expansion to increase the recall of the search and feature selection and dimensionality reduction to reduce the complexity of the feature space.
hierarchical phrasebased translation present a statistical machine translation model that uses that contain subphrases the model is formally a synchronous contextfree grammar but is learned from a parallel text without any syntactic annotations thus it can be seen as combining fundamental ideas from both syntaxbased translation and phrasebased translation we describe our systems training and decoding methods in detail and evaluate it for translation speed and translation accuracy using bleu as a metric of translation accuracy we find that our system performs significantly better than the alignment template system a stateoftheart phrasebased system we present a statistical machine translation model that uses hierarchical phrasesphrases that contain subphrasesthe model is formally a synchronous contextfree grammar but is learned from a parallel text without any syntactic annotationsthus it can be seen as combining fundamental ideas from both syntaxbased translation and phrasebased translationwe describe our systems training and decoding methods in detail and evaluate it for translation speed and translation accuracyusing bleu as a metric of translation accuracy we find that our system performs significantly better than the alignment template system a stateoftheart phrasebased systemthe alignment template translation model and related phrasebased models advanced the state of the art in machine translation by expanding the basic unit of translation from words to phrases that is substrings of potentially unlimited size these phrases allow a model to learn local reorderings translations of multiword expressions or insertions and deletions that are sensitive to local contextthis makes them a simple and powerful mechanism for translationthe basic phrasebased model is an instance of the noisychannel approach following convention we call the source language french and the target language english the translation of a french sentence f into an english sentence e is modeled as the phrasebased translation model p encodes e into f by the following steps other phrasebased models model the joint distribution p or make p and p into features of a loglinear model but the basic architecture of phrase segmentation phrase reordering and phrase translation remains the samephrasebased models can robustly perform translations that are localized to substrings that are common enough to have been observed in trainingbut koehn och and marcu find that phrases longer than three words improve performance little for training corpora of up to 20 million words suggesting that the data may be too sparse to learn longer phrasesabove the phrase level some models perform no reordering some have a simple distortion model that reorders phrases independently of their content and some for example the alignment template system hereafter ats and the ibm phrasebased system have phrasereordering models that add some lexical sensitivitybut as an illustration of the limitations of phrase reordering consider the following mandarin example and its english translation m1111 æ jlf01 p n 0 aozhou shi yu beihan you bangjiao de shaoshu guojia zhiyi australia is with north korea have dipl rels that few countries one of australia is one of the few countries that have diplomatic relations with north koreaif we count zhiyi as a single token then translating this sentence correctly into english requires identifying a sequence of five word groups that need to be reversedwhen we run a phrasebased system ats on this sentence we get the following phrases 
with translations aozhou shi1 yu beihan2 you bangjiao de shaoshu guojia zhiyi australia has dipl relswith north korea2 is1 one of the few countries where we have used subscripts to indicate the reordering of phrasesthe phrasebased model is able to order has diplomatic relations with north korea correctly and is one of the few countries correctly but does not invert these two groups as it shouldwe propose a solution to these problems that does not interfere with the strengths of the phrasebased approach but rather capitalizes on them because phrases are good for learning reorderings of words we can use them to learn reorderings of phrases as wellin order to do this we need hierarchical phrases that can contain other phrasesfor example a hierarchical phrase pair that might help with the above example is where 1 and 2 are placeholders for subphrases this would capture the fact that chinese prepositional phrases almost always modify verb phrases on the left whereas english prepositional phrases usually modify verb phrases on the rightbecause it generalizes over possible prepositional objects and direct objects it acts both as a discontinuous phrase pair and as a phrasereordering rulethus it is considerably more powerful than a conventional phrase pairsimilarly the hierarchical phrase pair would capture the fact that chinese relative clauses modify nps on the left whereas english relative clauses modify on the right and the pair would render the construction zhiyi in english word orderthese three rules along with some conventional phrase pairs suffice to translate the sentence correctly aozhou shi yu beihan1 you bangjiao2 de shaoshu guojia3 zhiyi australia is one of the few countries3 that have dipl rels2 with n korea1 the system we describe in this article uses rules like and which we formalize in the next section as rules of a synchronous contextfree grammar 1 moreover the system is able to learn them automatically from a parallel text without syntactic annotationbecause our system uses a synchronous cfg it could be thought of as an example of syntaxbased statistical machine translation joining a line of research that has been fruitful but has not previously produced systems that can compete with phrasebased systems in largescale translation tasks such as the evaluations held by nistour approach differs from early syntaxbased statistical translation models in combining the idea of hierarchical structure with key insights from phrasebased mt crucially by incorporating the use of elementary structures with possibly many words we hope to inherit phrasebased mts capacity for memorizing translations from parallel dataother insights borrowed from the current state of the art include minimumerrorrate training of loglinear models and use of an mgram language modelthe conjunction of these various elements presents a considerable challenge for implementation which we discuss in detail in this articlethe result is the first system employing a grammar to perform better than phrasebased systems in largescale evaluations2approaches to syntaxbased statistical mt have varied in their reliance on syntactic theories or annotations made according to syntactic theoriesat one extreme are those exemplified by that of wu that have no dependence on syntactic theory beyond the idea that natural language is hierarchicalif these methods distinguish between different categories they typically do not distinguish very manyour approach as presented here falls squarely into this familyby contrast other approaches 
exemplified by that of yamada and knight do make use of parallel data with syntactic annotations either in the form of phrasestructure trees or dependency trees because syntactically annotated corpora are comparatively small obtaining parsed parallel text in quantity usually entails running an automatic parser on a parallel corpus to produce noisy annotationsboth of these strands of research have recently begun to explore extraction of larger rules guided by word alignmentsthe extraction method we use which is a straightforward generalization of phrase extraction from wordaligned parallel text has been independently proposed before in various settingsthe method of block is the earliest instance we are aware of though it is restricted to rules with one variablethe same method has also been used by probst et al and xia and mccord in conjunction with syntactic annotations to extract rules that are used for reordering prior to translationfinally galley et al use the same method to extract a very large grammar from syntactically annotated datathe discontinuous phrases used by simard et al have a similar purpose to synchronous grammar rules but they have variables that stand for single words rather than subderivations and they can interleave in nonhierarchical waysthe model is based on a synchronous cfg elsewhere known as a syntaxdirected transduction grammar we give here an informal definition and then describe in detail how we build a synchronous cfg for our modelin a synchronous cfg the elementary structures are rewrite rules with aligned pairs of righthand sides where x is a nonterminal γ and α are both strings of terminals and nonterminals and is a onetoone correspondence between nonterminal occurrences in γ and nonterminal occurrences in αfor example the hierarchical phrase pairs and previously presented could be formalized in a synchronous cfg as where we have used boxed indices to indicate which nonterminal occurrences are linked by the conventional phrase pairs would be formalized as a synchronous cfg derivation begins with a pair of linked start symbolsat each step two linked nonterminals are rewritten using the two components of a single rulewhen denoting links with boxed indices we must consistently reindex the newly introduced symbols apart from the symbols already presentfor an example using these rules see figure 1the bulk of the grammar consists of automatically extracted rulesthe extraction process begins with a wordaligned corpus a set of triples where f is a french sentence e is an english sentence and is a binary relation between positions off and positions of e the word alignments are obtained by running giza on the corpus in both directions and forming the union of the two sets of word alignmentswe then extract from each wordaligned sentence pair a set of rules that are consistent with the word alignmentsthis can be thought of in two stepsfirst we identify initial phrase pairs using the same criterion as most phrasebased systems namely there must be at least one word inside one phrase aligned to a word inside the other but no word inside one phrase can be aligned to a word outside the other phrasefor example suppose our training data contained the fragment example derivation of a synchronous cfgnumbers above arrows are rules used at each step with word alignments as shown in figure 2athe initial phrases that would be extracted are shown in figure 2bmore formally definition 1 given a wordaligned sentence pair let fji stand for the substring of f from position i to position j 
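To make the linked-nonterminal notation concrete, here is one possible (purely illustrative) way to represent a rule such as X -> <yu X[1] you X[2], have X[2] with X[1]> and to apply a single rewrite step; this is a data-structure sketch, not the system's implementation.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A synchronous CFG rule X -> <gamma, alpha>; integers in the right-hand
    sides are linked nonterminal slots (1, 2, ...), strings are terminals."""
    french: tuple
    english: tuple

# X -> <yu X[1] you X[2], have X[2] with X[1]>
r = Rule(("yu", 1, "you", 2), ("have", 2, "with", 1))

def apply(rule: Rule, subderivations: dict):
    """Substitute completed subtranslations (french, english) into a rule's
    linked slots, yielding the pair of strings the rule derives."""
    f = [subderivations[s][0] if isinstance(s, int) else s for s in rule.french]
    e = [subderivations[s][1] if isinstance(s, int) else s for s in rule.english]
    return " ".join(f), " ".join(e)

# apply(r, {1: ("beihan", "north korea"), 2: ("bangjiao", "diplomatic relations")})
# -> ("yu beihan you bangjiao", "have diplomatic relations with north korea")
```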
inclusive and similarly for ejithen a rule is an initial phrase pair of iff second in order to obtain rules from the phrases we look for phrases that contain other phrases and replace the subphrases with nonterminal symbolsfor example given the initial phrases shown in figure 2b we could form the rule where k is an index not used in γ and α is a rule of this scheme generates a very large number of rules which is undesirable not only because it makes training and decoding very slow but also because it creates spurious ambiguitya situation where the decoder produces many derivations that are distinct yet have the same model feature vectors and give the same translationthis can result in kbest lists with very few different translations or feature vectors which is problematic for the minimumerrorrate training algorithm to avoid this we filter our grammar according to the following constraints chosen to balance grammar size and performance on our development set glue ruleshaving extracted rules from the training data we could let x be the grammars start symbol and translate new sentences using only the extracted rulesbut for robustness and for continuity with phrasebased translation models we allow the grammar to divide a french sentence into a sequence of chunks and translate one chunk at a timewe formalize this inside a synchronous cfg using the rules and which we call the glue rules repeated here these rules analyze an s as a sequence of xs which are translated without reorderingnote that if we restricted our grammar to comprise only the glue rules and conventional phrase pairs the model would reduce to a phrasebased model with monotone translation entity rulesfinally for each sentence to be translated we run some specialized translation modules to translate the numbers dates numbers and bylines in the sentence and insert these translations into the grammar as new rules3 such modules are often used by phrasebased systems as well but here their translations can plug into hierarchical phrases for example into the rule allowing it to generalize over numbers of yearsgiven a french sentence f a synchronous cfg will have in general many derivations that yield f on the french side and therefore many possible translations e we now define a model over derivations d to predict which translations are more likely than othersfollowing och and ney we depart from the traditional noisychannel approach and use a more general loglinear model over derivations d 3 these modules are due to you germann and f j ochin a previous paper we reported on translation modules for numbers and namesthe present modules are not the same as those though the mechanism for integrating them is identical209 computational linguistics volume 33 number 2 where the φi are features defined on derivations and the λi are feature weightsone of the features is an mgram language model plm the remainder of the features we will define as products of functions on the rules used in a derivation the factors other than the language model factor can be put into a particularly convenient forma weighted synchronous cfg is a synchronous cfg together with a function w that assigns weights to rulesthis function induces a weight function over derivations it is easy to write dynamicprogramming algorithms to find the highestweight translation or kbest translations with a weighted synchronous cfgtherefore it is problematic that w does not include the language model which is extremely important for translation qualitywe return to this challenge in section 
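A sketch of the two extraction stages just described: enumerating initial phrase pairs consistent with the word alignment, and then subtracting an inner phrase pair to introduce a linked nonterminal. The length limits and other filtering constraints from the text are omitted, and the example pairs in the comments are assumed (not listed explicitly in the text) to be among the extracted initial phrases.

```python
def initial_phrase_pairs(f_len, e_len, alignment):
    """Enumerate span pairs (i, j, i2, j2) such that at least one alignment link
    joins the two spans and no link leaves either span. `alignment` is a set of
    (f_pos, e_pos) links; spans are inclusive and 0-based."""
    pairs = []
    for i in range(f_len):
        for j in range(i, f_len):
            for i2 in range(e_len):
                for j2 in range(i2, e_len):
                    inside = [(fp, ep) for fp, ep in alignment
                              if i <= fp <= j and i2 <= ep <= j2]
                    crosses = any((i <= fp <= j) != (i2 <= ep <= j2)
                                  for fp, ep in alignment)
                    if inside and not crosses:
                        pairs.append((i, j, i2, j2))
    return pairs

def replace_subphrase(outer, inner, slot):
    """Replace the first occurrence of the token sequence `inner` inside `outer`
    with the integer nonterminal slot, forming one side of a hierarchical rule."""
    for k in range(len(outer) - len(inner) + 1):
        if outer[k:k + len(inner)] == inner:
            return outer[:k] + [slot] + outer[k + len(inner):]
    return outer

# Assuming <beihan, north korea> and <bangjiao, diplomatic relations> are among
# the initial pairs inside <yu beihan you bangjiao, have diplomatic relations
# with north korea>, two subtractions recover the rule from the introduction:
#   replace_subphrase(["yu", 1, "you", "bangjiao"], ["bangjiao"], 2)
#     -> ["yu", 1, "you", 2]
#   replace_subphrase(["have", "diplomatic", "relations", "with", 1],
#                     ["diplomatic", "relations"], 2)
#     -> ["have", 2, "with", 1]
```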
5for our experiments we use a feature set analogous to the default feature set of pharaoh the rules extracted from the training bitext have the following features finally for all the rules there is a word penalty exp where t just counts terminal symbolsthis allows the model to learn a general preference for shorter or longer outputsin order to estimate the parameters of the phrase translation and lexicalweighting features we need counts for the extracted rulesfor each sentence pair in the training data there is in general more than one derivation of the sentence pair using the rules extracted from itbecause we have observed the sentence pair but have not observed the derivations we do not know how many times each derivation has been seen and therefore we do not actually know how many times each rule has been seenfollowing och and others we use heuristics to hypothesize a distribution of possible rules as though we observed them in the training data a distribution that does not necessarily maximize the likelihood of the training data5 ochs method gives a count of one to each extracted phrase pair occurrencewe likewise give a count of one to each initial phrase pair occurrence then distribute its weight equally among the rules obtained by subtracting subphrases from ittreating this distribution as our observed data we use relativefrequency estimation to obtain p and pfinally the parameters λi of the loglinear model are learned by minimumerrorrate training which tries to set the parameters so as to maximize the bleu score of a development setthis gives a weighted synchronous cfg according to that is ready to be used by the decoder4 this feature uses word alignment information which is discarded in the final grammarif a rule occurs in training with more than one possible word alignment koehn och and marcu take the maximum lexical weight we take a weighted average5 this approach is similar to that taken by many parsers such as spatter and its successors which use heuristics to hypothesize an augmented version of the training data but it is especially reminiscent of the data oriented parsing method which hypothesizes a distribution over many possible derivations of each training example from subtrees of varying sizesin brief our decoder is a cky parser with beam search together with a postprocessor for mapping french derivations to english derivationsgiven a french sentence f it finds the english yield of the single best derivation that has french yield f eˆ e arg max p dstff note that this is not necessarily the highestprobability english string which would require a more expensive summation over derivationswe now discuss the details of the decoder focusing attention on efficiently calculating english languagemodel probabilities for possible translations which is the primary technical challengein the following we present several parsers as deductive proof systems a parser in this notation defines a space of weighted items in which some items are designated axioms and some items are designated goals and a set of inference rules of the form which means that if all the items ii are provable with weight wi then i is provable with weight w provided the side condition φ holdsthe parsing process grows a set of provable items it starts with the axioms and proceeds by applying inference rules to prove more and more items until a goal is provenfor example the wellknown cky algorithm for cfgs in chomsky normal form can be thought of as a deductive proof system whose items can take one of two forms the 
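As a rough sketch of how a derivation is scored under the log-linear model described above, with rule-local features, a word penalty, and an m-gram language model: the feature names and dictionary layout are invented for illustration, and minimum-error-rate training of the weights is not shown.

```python
import math

def derivation_score(rules, lm_logprob, weights):
    """Log-linear score of a derivation (in log space): a weighted sum of log
    feature values of the rules used, plus the weighted LM log-probability and
    a penalty on the number of target terminals produced.
    Each rule is a dict with illustrative keys: 'features' maps feature names
    (e.g. p_gamma_given_alpha, lexical weights) to probabilities, and
    'target_terminals' counts English terminal symbols."""
    score = weights["lm"] * lm_logprob
    n_words = 0
    for r in rules:
        for name, value in r["features"].items():
            score += weights[name] * math.log(value)
        n_words += r["target_terminals"]
    score += weights["word_penalty"] * (-n_words)
    return score
```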
axioms would be 6 treating grammar rules as axioms is not standard practice but advocated by goodman here it has the benefit of simplifying the presentation in section 534 and the inference rules would be and the goal would be s 0 n where s is the start symbol of the grammar and n is the length of the input string f given a synchronous cfg we could convert its frenchside grammar into chomsky normal form and then for each sentence we could find the best parse using ckythen it would be a straightforward matter to revert the best parse from chomsky normal form into the original form and map it into its corresponding english tree whose yield is the output translationhowever because we have already restricted the number of nonterminal symbols in our rules to two it is more convenient to use a modified cky algorithm that operates on our grammar directly without any conversion to chomsky normal formthe axioms inference rules and goals for the basic decoder are shown in figure 3its time complexity is o just as ckys isbecause this algorithm does not yet incorporate a language model let us call it the lm parserthe actual search procedure is given by the pseudocode in figure 4it organizes the proved items into an array chart whose cells chartx i j are sets of itemsthe cells are ordered such that every item comes after its possible antecedents smaller spans before larger spans and x items before s items then the parser can proceed by visiting the chart cells in order and trying to prove all the items for each cellwhenever it proves a new item it adds the item to the search procedure for the lm parser appropriate chart cell in order to reconstruct the derivations later it must also store with each item a tuple of backpointers to the antecedents from which the item was deduced if two items are added to a cell that are equivalent except for their weights or backpointers then they are merged with the merged item taking its weight and backpointers from the better of the two equivalent itemsthe algorithm in figure 4 does not completely search the space of proofs because it has a constraint that prohibits any x from spanning a substring longer than a fixed limit λ on the french side corresponding to the maximum length constraint on initial rules during trainingthis gives the decoding algorithm an asymptotic time complexity of oin principle λ should match the initial phrase length limit used in training but in practice it can be adjusted separately to maximize accuracy or speedwe often want to find not only the best derivation for a french sentence but a list of the kbest derivationsthese are used for minimumerrorrate training and for rescoring with a language model we describe here how to do this using the lazy algorithm of huang and chiang part of this method will also be reused in our algorithm for fast parsing with a language model if we conceive of lists as functions from indices to values we may create a virtual list a function that computes member values on demand instead of storing all the values staticallythe heart of the kbest algorithm is a function mergeproducts which takes a set g of tuples of lists with an operator and returns a virtual list example illustrating mergeproducts where l1 1 26 10 and l2 1 4 7numbers are negative logprobabilitiesit assumes that the input lists are sorted and returns a sorted lista naive implementation of mergeproducts would simply calculate all possible products and sort however if we are only interested in the top part of the result we can implement mergeproducts so 
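The lazy enumeration idea described here can be sketched for the two-list case with a priority queue; this simplified version (sum of costs as the monotonic product, smaller is better) is not the article's full MergeProducts over tuples of lists, but it shows why only cells adjacent to previously enumerated ones need to be considered.

```python
import heapq

def merge_products(l1, l2, op=lambda a, b: a + b, k=10):
    """Lazily enumerate the k best combinations of two sorted lists under a
    product operator that is monotonic in each argument."""
    if not l1 or not l2:
        return []
    heap = [(op(l1[0], l2[0]), 0, 0)]
    seen = {(0, 0)}
    out = []
    while heap and len(out) < k:
        value, i, j = heapq.heappop(heap)
        out.append(value)
        # Only the cells adjacent to already-enumerated ones can come next.
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(l1) and nj < len(l2) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (op(l1[ni], l2[nj]), ni, nj))
    return out

# With negative log-probabilities (smaller is better), loosely echoing the
# figure's example values:
# merge_products([1, 2, 6, 10], [1, 4, 7], k=5) -> [2, 3, 5, 6, 7]
```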
that the output values are computed lazily and the input lists are accessed only as neededto do this we must assume that the multiplication operator is monotonic in each of its argumentsby way of motivation consider the simple case g the full set of possible products can be arranged in a twodimensional grid which we could then sort to obtain mergeproductsbut because of our assumptions we know that the first element of mergeproducts must be l11 l21moreover we know that the second element must be either l11 l22 or l12 l21in general if some of the cells have been previously enumerated the next cell must be one of the cells adjacent to the previously enumerated ones and we need not consider the others in this way if we only want to compute the first few elements of mergeproducts we can do so by performing a small number of products and discarding the rest of the gridfigure 6 shows the pseudocode for mergeproducts7 in lines 25 a priority queue is initialized with the best element from each l e g where l ranges over tuples of lists and 1 stands for a vector whose elements all have the value 1 the rest of the function creates the virtual list to enumerate the next element of the list we first insert the elements adjacent to the previously enumerated element if any and then enumerate the best element in the priority queue if any we assume standard implementations of 7 this version corrects the behavior of the previously published version in some boundary conditionsthanks to d smith and jmay for pointing those cases outin the actual implementation an earlier version is used which has the correct behavior but not for cyclic forests function for computing the union of products of sorted lists the priority queue subroutines heapify insert and extractbest the kbest list generator is then easy to define first we generate a parse forest then we simply apply mergeproducts recursively to the whole forest using memoization to ensure that we generate only one kbest list for each item in the forestthe pseudocode in figure 7 will find only the weights for the kbest derivations extending it to output the translations as well is a matter of modifying line 5 to package the english sides of rules together with the weights w and replacing the real multiplication operator in line 9 with one that not only multiplies weights but also builds partial translations out of subtranslationswe now turn to the problem of incorporating the language model describing three methods first using the lm parser to obtain a kbest list of translations and rescoring it with the lm second incorporating the lm directly into the grammar in a construction reminiscent of the intersection of a cfg with a finitestate automaton third a hybrid method which we call cube pruning531 rescoringone easy way to incorporate the lm into the model would be to decode first using the lm parser to produce a kbest list of translations then to rescore the kbest list using the lmthis method has the potential to be very fast linear in k however because the number of possible translations is exponential in n we may have to set k extremely high in order to find the true best translation or something acceptably close to it532 intersectiona more principled solution would be to calculate the lm probabilities onlineto do this we view an mgram lm as a weighted finite state machine m in which each state corresponds to a sequence of english terminal symbolswe can then intersect the english side of our weighted cfg g with this finitestate machine to produce a new weighted 
cfg that incorporates m thus plm would be part of the rule weights just like the other featuresin principle this method should admit no search errors though in practice the blowup in the effective size of the grammar necessitates pruning of the search space which can cause search errorsthe classic construction for intersecting a cfg with a finitestate machine is due to barhillel perles and shamir but we use a slightly different construction proposed by wu for inversion transduction grammar and bigram lmswe present an adaptation of his algorithm to synchronous cfgs with two nonterminals per righthand side and general mgram lmsfirst assume that the lm expects a whole sentence to be preceded by startofsentence symbols and followed by a single endofsentence symbol the grammar can be made to do this simply by adding a rule and making s the new start symbolfirst we define two functions p and q which operate on strings over t you where t is the english terminal alphabet and is a special placeholder symbol that stands for an elided part of an english stringvalues of p and q in the cgisf examplethe function p calculates lm probabilities for all the complete mgrams in a string the function q elides symbols when all their mgrams have been accounted forthese functions let us correctly calculate the lm score of a sentence piecemealfor example let m 3 and c g i s f stand for colorless green ideas sleep furiously then table 1 shows some values of p and qthen we may extend the lm parser as shown in figure 8 to use p and q to calculate lm probabilitieswe call this parser the lm parserthe items are of the form x i j e signifying that a subtree rooted in x has been recognized spanning from i to j on the french side and its english translation is e the theoretical running time of this algorithm is o because a deduction can combine up to two starred strings which each have up to 2 terminal symbolsthis is far too slow to use in practice so we must use beamsearch to prune the search space down to a reasonable size533 pruningthe chart is organized into cells each of which contains all the items standing for x spanning fji1the rule items are also organized into cells each of which contains all the rules with the same french side and lefthand sidefrom here on let us inference rules for the lm parserhere wxx means the string w with the string x substituted for the symbol xthe function q is defined in the text consider the item scores as costs that is negative log probabilitiesthen for each cell we throw out any item that has a score worse than in the lm parser the score of an item x i j e in the chart does not reflect the lm probability of generating the first words of e thus two items x i j e and x i j e are not directly comparableto enable more meaningful comparisons we define a heuristic when comparing items for pruning we add this heuristic function to the score of each item534 cube pruningnow we can develop a compromise between the rescoring and intersection methodsconsider figure 9ato the left of the grid we have four rules with the same french side and above we have three items with the same category and span that is they belong to the same chart cellany of the twelve combinations of these rules and items can be used to deduce a new item and all these new items will go into the same chart cell the intersection method would compute all twelve items and add them to the new chart cell where most of them will likely be pruned awayin actuality the grid may be a cube with up to b3 elements whereas the target chart 
cell can hold at most b items thus the vast majority of computed items are prunedbut it is possible to compute only a small corner of the cube and preemptively prune the rest of the items without computing them a method we refer to as cube pruningthe situation pictured in figure 9a is very similar to kbest list generationthe four rules to the left of the grid can be thought of like a 4best list for a single lm rule item the three items above the grid like a 3best list for the single lm item x 68 and the new items to be deduced like a kbest list for x 5 8 except that we do not know what k is in advanceif we could use mergeproducts to enumerate the new items bestfirst then we could enumerate them until one of them was pruned from the new cell then the rest of items which would have a worse score than the pruned item could be preemptively prunedmergeproducts expects its input lists to be sorted bestfirst and the operator to be monotonic in each of its argumentsfor cube pruning we sort items according to their lm score including the heuristic function h the operator we use takes one or more antecedent items and forms their consequent item according to example illustrating hybrid method for incorporating the lmnumbers are negative the lm parsernote that the lm makes this only approximately monotonicthis means that the enumeration of new items will not necessarily be bestfirstto alleviate this problem we stop the enumeration not as soon as an item falls outside the beam but as soon as an item falls outside the beam by a margin of e this quantity e expresses our guess as to how much the scores of the enumerated items can fluctuate because of the lma simpler approach and probably better in practice would be simply to set e 0 that is to ignore any fluctuation but increase r and b to compensatesee figure 9b for an example of cube pruningthe upperleft grid cell is enumerated first as in the kbest example in section 52 but the choice of the second is different because of the added lm coststhen the third item is enumerated and merged with the first supposing a threshold beam of are 5 and a margin of e 05 we quit upon considering the next item because with a score of 77 it falls outside the beam by more than e the rest of the grid is then discardedthe pseudocode is given in figure 10the function inferlm is used as the operator it takes a tuple of antecedent lm items and returns a consequent lm item according to the inference rules in figure 8the procedure reparselm takes a lm chart chart as input and produces a lm chart chartthe variables you v stand for items in lm and you v for items in lm and the relation v i v is defined as follows for each cell in the input chart it takes the single item from the cell and constructs the virtual list l of all of its lm counterparts then it adds the top items of l to the target cell until the cell is judged to be full the implementation of our system named hiero is in python a bytecodeinterpreted language and optimized using psyco a justintime compiler and pyrex a pythonlike compiled language with c code from the sri language modeling toolkit in this section we report on experiments with mandarintoenglish translationour evaluation metric is caseinsensitive bleu4 as defined by nist that is using the shortest reference sentence length for the brevity penaltywe ran the grammar extractor of section 32 on the parallel corpora listed in table 2 with the exception of the united nations data for a total of 28 million words 8 we then filtered this grammar for our development 
set which was the 2002 nist mt evaluation dryrun data and our test sets which were the data from the 20032005 nist mt evaluationssome example rules are shown in table 3 and the sizes of the filtered grammars are shown in table 4we also used the sri language modeling toolkit to train two trigram language models with modified kneserney smoothing one on 28 billion words from the english gigaword corpus and the other on the english side of the parallel text table 5 shows the average decoding time on part of the development set for the three lmincorporation methods described in section 53 on a single processor of a dual 3 ghz xeon machinefor these experiments only the gigaword language model was usedwe set b 30 are 1 for x cells b 15 are 1 for s cells and b 100 for rules except where noted in table 5note that values for r and e are only meaningful relative to the scale of the feature weights here the language model weight was 006the feature weights were obtained by minimumerrorrate training using the cubepruning decoderfor the lm rescoring decoder parsing and kbest list generation used feature weights optimized for the lm model but rescoring used the same weights as the other experimentswe tested the rescoring method the intersection method and the cubepruning method the lm rescoring decoder is the fastest but has the poorest bleu scoreidentifying and rescoring the kbest derivations is very quick the execution time is dominated by reconstructing the output strings for the kbest derivations so it is possible that further optimization could reduce these timesthe intersecting decoder has the best score but runs very slowlyfinally the cubepruning decoder runs almost as fast as the rescoring decoder and translates almost as well as the intersecting decoderamong these tests e 01 gives the best results but in general the optimal setting will depend on the other beam settings and the scale of the feature weightswe compared hiero against two baselines the stateoftheart phrasebased system ats and hiero itself run as a conventional phrasebased system with monotone translation the ats baseline was trained on all the parallel data listed in table 1 for a total of 159 million words the second language model was also trained on the english side of the whole bitextphrases of up to 10 in length on the french side were extracted from the parallel text and minimumerrorrate training was performed on the development set for 17 features the same as used in the nist 2004 and 2005 evaluations9 these features are similar to the features used for our system but also include features for phrasereordering ibm model 1 in both directions a missing word penalty and a feature that controls a fallback lexiconthe other baseline which we call hiero monotone is the same as hiero except with the limitation that extracted rules cannot have any nonterminal symbols on their righthand sidesin other words only conventional phrases can be extracted of length up to 5these phrases are combined using the glue rules only which makes the grammar equivalent to a conventional phrasebased model with monotone translationthus this system represents the nearest phrasebased equivalent to our model to provide a controlled test of the effect of hierarchical phraseswe performed minimumerrorrate training separately on hiero and hiero monotone to maximize their bleu scores on the development set the feature weights for hiero are shown in table 6the beam settings used for both decoders were are 30 b 30 for x cells are 30 b 15 for s cells b 100 for rules 
and e 3on the test set we found that hiero improves over both baselines in all three tests all improvements are statistically significant using the sign test as described by collins koehn and kuˇcerova syntaxbased statistical machine translation is a twofold challengeit is a modeling challenge in part because of the difficulty of coordinating syntactic structures with potentially messy parallel corpora it is an implementation challenge because of the added complexity introduced by hierarchical structureshere we have addressed the modeling challenge by taking only the fundamental idea from syntax that language is hierarchically structured and integrating it conservatively into a phrasebased model typical of the current state of the artthis fusion does no violence to the latter indeed we have presented our approach as a logical outgrowth of the phrasebased approachmoreover hierarchical structure improves translation accuracy significantlyfeature weights obtained by minimumerrorrate training language model 100 language model 103 the choice to use hierarchical structures that are more complex than flat structures as well as rules that contain multiple lexical items instead of one an mgram model whose structure cuts across the structure of contextfree derivations and large amounts of training data for meaningful comparison with modern systemsthese all threaten to make training a synchronous grammar and translating with it intractablewe have shown how through training with simple methods inspired by phrasebased models and translating using a modified cky with cube pruning this challenge can be metclearly however we have only scratched the surface of the modeling challengethe fact that moving from flat structures to hierarchical structures significantly improves translation quality suggests that more specific ideas from syntax may be valuable as wellthere are many possibilities for enriching the simple framework that the present model providesbut the course taken here is one of organic development of an approach known to work well at largescale tasks and we plan to stay this course in future work towards more syntactically informed statistical machine translationi would like to thank liang huang philipp koehn adam lopez nitin madnani daniel marcu christof monz dragos munteanu philip resnik michael subotin wei wang and the anonymous reviewersthis work was partially supported by onr muri contract fcpo810548265 by department of defense contract rd025700 and under the gale program of the defense advanced research projects agency contract hr 001106c0022s d g
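To make the lazy k-best machinery described above concrete, here is a minimal Python sketch of the list-merging idea behind mergeproducts: the input lists are sorted by cost (negative log probability), the combination operator is assumed monotonic in each argument, and a priority queue enumerates grid cells best-first, pushing only the neighbours of cells already enumerated. The function name, the `combine` callback, and the worked example are illustrative choices of ours, not the published implementation (which also builds memoized virtual lists over the whole forest).

```python
import heapq
from itertools import islice

def lazy_merge_products(lists, combine):
    """Lazily enumerate products of items drawn from several sorted lists.

    lists   : tuple of lists, each sorted by increasing cost
              (negative log probability)
    combine : maps a tuple with one element per list to (cost, value);
              assumed monotonic in each argument
    Yields (cost, value) pairs best-first.  Illustrative sketch only.
    """
    if any(not l for l in lists):
        return
    start = (0,) * len(lists)
    cost0, val0 = combine(tuple(l[0] for l in lists))
    frontier = [(cost0, start, val0)]   # priority queue keyed on cost
    seen = {start}
    while frontier:
        cost, index, value = heapq.heappop(frontier)
        yield cost, value
        # only cells adjacent to already-enumerated ones can be next best
        for dim in range(len(lists)):
            nxt = index[:dim] + (index[dim] + 1,) + index[dim + 1:]
            if nxt[dim] < len(lists[dim]) and nxt not in seen:
                seen.add(nxt)
                c, v = combine(tuple(l[i] for l, i in zip(lists, nxt)))
                heapq.heappush(frontier, (c, nxt, v))

# two cost lists combined by addition, resembling the grid example above
l1, l2 = [1, 2, 6, 10], [1, 4, 7]
top4 = [c for c, _ in islice(
    lazy_merge_products((l1, l2), lambda xs: (sum(xs), xs)), 4)]
print(top4)   # [2, 3, 5, 6]; the rest of the grid is never materialized
```

Because the priority queue only ever holds the frontier of the grid, asking for the first few elements touches a small corner of the full product, which is exactly the behaviour the k-best generator and cube pruning rely on.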
J07-2003
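Building on that enumeration, cube pruning as described above stops generating consequent items once a candidate falls outside the beam by more than a margin epsilon (or once the cell is full), because the language model makes the combined score only approximately monotonic. The sketch below reuses lazy_merge_products from the previous code sketch; the beam parameters, the combine callback, and the item representation are placeholders of ours, and the real decoder additionally folds in the heuristic h and the +LM inference rules.

```python
def cube_prune(rule_list, item_lists, combine, beam_r, margin_eps, cap_b):
    """Enumerate consequent items roughly best-first with early stopping.

    rule_list and every list in item_lists are sorted by increasing cost;
    combine builds a consequent from one rule plus one antecedent per list
    and returns (cost, item).  Enumeration stops when a candidate is worse
    than (best + beam_r) by more than margin_eps, or when cap_b items have
    been kept.  Illustrative sketch, not the published decoder.
    """
    lists = (rule_list,) + tuple(item_lists)
    kept, best = [], None
    for cost, item in lazy_merge_products(lists, combine):
        best = cost if best is None else min(best, cost)
        if len(kept) >= cap_b or cost > best + beam_r + margin_eps:
            break                      # preemptively prune the rest of the cube
        if cost <= best + beam_r:      # inside the beam: keep the item
            kept.append((cost, item))
        # candidates between the beam edge and the margin are skipped, but
        # enumeration continues, since +LM scores can fluctuate slightly
    return kept
```

With margin_eps set to 0 this reduces to cutting the enumeration at the beam edge, the simpler variant suggested above, which can then be compensated for by widening beam_r and cap_b.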
Hierarchical phrase-based translation. We present a statistical machine translation model that uses hierarchical phrases, phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a parallel text without any syntactic annotations; thus it can be seen as combining fundamental ideas from both syntax-based translation and phrase-based translation. We describe our system's training and decoding methods in detail and evaluate it for translation speed and translation accuracy. Using BLEU as a metric of translation accuracy, we find that our system performs significantly better than the alignment template system, a state-of-the-art phrase-based system. The hierarchical phrase-based model advances statistical machine translation by employing hierarchical phrases: it not only uses ordinary phrases to learn local translations but also uses hierarchical phrases to capture reorderings of words and subphrases over a larger scope.
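The summary above turns on one idea: rules of a synchronous context-free grammar whose right-hand sides contain coindexed nonterminals, so that when a nonterminal is rewritten the same subderivation fills both sides and subphrase reorderings carry over from source to target. The toy sketch below shows that substitution step; the rule and the token pairs are hypothetical illustrations of ours, not entries from the paper's extracted grammar.

```python
# A synchronous rule pairs a source side and a target side; coindexed
# nonterminals (here X1, X2) must be rewritten by the same subderivation
# on both sides, which is how subphrase reordering is captured.
RULE = {
    "src": ["X1", "de", "X2"],          # hypothetical source side
    "tgt": ["the", "X2", "of", "X1"],   # hypothetical target side (reordered)
}

def apply_rule(rule, fillers):
    """Substitute (source, target) subphrase pairs for coindexed nonterminals.

    fillers maps a nonterminal label such as "X1" to a pair of token lists
    (source subphrase, target subphrase).  Purely illustrative.
    """
    def substitute(side, which):
        out = []
        for symbol in side:
            if symbol in fillers:
                out.extend(fillers[symbol][which])
            else:
                out.append(symbol)
        return out

    return substitute(rule["src"], 0), substitute(rule["tgt"], 1)

# example: the two subphrases swap positions in the translation
src, tgt = apply_rule(RULE, {
    "X1": (["zhongguo"], ["china"]),
    "X2": (["jingji", "fazhan"], ["economic", "development"]),
})
print(" ".join(src))   # zhongguo de jingji fazhan
print(" ".join(tgt))   # the economic development of china
```

Swapping X1 and X2 on the target side is what lets a single rule model, for example, the reordering between a Chinese "A de B" construction and an English "the B of A" paraphrase, where a conventional contiguous-phrase model would have to memorize each such pair whole.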
ccgbank a corpus of ccg derivations and dependency structures extracted from the penn treebank article presents an algorithm for translating the penn treebank into a corpus of combinatory categorial grammar derivations augmented with local and longrange wordword dependencies the resulting corpus ccgbank includes 994 of the sentences in the penn treebank it is available from the linguistic data consortium and has been used to train widecoverage statistical parsers that obtain stateoftheart rates of dependency recovery in order to obtain linguistically adequate ccg analyses and to eliminate noise and inconsistencies in the original annotation an extensive analysis of the constructions and annotations in the penn treebank was called for and a substantial number of changes to the treebank were necessary we discuss the implications of our findings for the extraction of other linguistically expressive grammars from the treebank and for the design offuture treebanks this article presents an algorithm for translating the penn treebank into a corpus of combinatory categorial grammar derivations augmented with local and longrange wordword dependenciesthe resulting corpus ccgbank includes 994 of the sentences in the penn treebankit is available from the linguistic data consortium and has been used to train widecoverage statistical parsers that obtain stateoftheart rates of dependency recoveryin order to obtain linguistically adequate ccg analyses and to eliminate noise and inconsistencies in the original annotation an extensive analysis of the constructions and annotations in the penn treebank was called for and a substantial number of changes to the treebank were necessarywe discuss the implications of our findings for the extraction of other linguistically expressive grammars from the treebank and for the design offuture treebanksin order to understand a newspaper article or any other piece of text it is necessary to construct a representation of its meaning that is amenable to some form of inferencethis requires a syntactic representation which is transparent to the underlying semantics making the local and longrange dependencies between heads arguments and modifiers explicitit also requires a grammar that has sufficient coverage to deal with the vocabulary and the full range of constructions that arise in free text together with a parsing model that can identify the correct analysis among the many alternatives that such a widecoverage grammar will generate even for the simplest sentencesgiven our current machine learning techniques such parsing models typically need to be trained on relatively large treebanksthat is text corpora handlabeled with detailed syntactic structuresbecause such annotation requires linguistic expertise and is therefore difficult to produce we are currently limited to at most a few treebanks per languageone of the largest and earliest such efforts is the penn treebank which contains a onemillion word subcorpus of wall street journal text that has become the de facto standard training and test data for statistical parsersits annotation which is based on generic phrasestructure grammar and function tags on nonterminal categories providing syntactic role information is designed to facilitate the extraction of the underlying predicateargument structurestatistical parsing on the penn treebank has made great progress by focusing on the machinelearning or algorithmic aspects however this has often resulted in parsing models and evaluation measures that are both based on reduced 
representations which simplify or ignore the linguistic information represented by function tags and null elements in the original treebankthe reasons for this shift away from linguistic adequacy are easy to tracethe very healthy turn towards quantitative evaluation interacts with the fact that just about every dimension of linguistic variation exhibits a zipfian distribution where a very small proportion of the available alternatives accounts for most of the datathis creates a temptation to concentrate on capturing the few highfrequency cases at the top end of the distribution and to ignore the long tail of rare events such as nonlocal dependenciesdespite the fact that these occur in a large number of sentences they affect only a small number of words and have thus a small impact on overall dependency recoveryalthough there is now a sizable literature on trace and functiontag insertion algorithms and integrated parsing with function tags or null elements such approaches typically require additional pre or postprocessing steps that are likely to add further noise and errors to the parser outputa completely integrated approach that is based on a syntactic representation which allows direct recovery of the underlying predicateargument structure might therefore be preferablesuch representations are provided by grammar formalisms that are more expressive than simple phrasestructure grammar like lexicalfunctional grammar headdriven phrasestructure grammar treeadjoining grammar minimalist programrelated grammars or combinatory categorial grammar however until very recently only handwritten grammars which lack the wide coverage and robustness of treebank parsers were available for these formalisms because treebank annotation for individual formalisms is prohibitively expensive there have been a number of efforts to extract tags lfgs and more recently hpsgs from the penn treebank statistical parsers that are trained on these tag and hpsg corpora have been presented by chiang and miyao and tsujii whereas the lfg parsing system of cahill et al uses a postprocessing step on the output of a treebank parser to recover predicateargument dependenciesin this article we present an algorithmic method for obtaining a corpus of ccg derivations and dependency structures from the penn treebank together with some observations that we believe carry wider implications for similar attempts with other grammar formalisms and corporaearlier versions of the resulting corpus ccgbank have already been used to build a number of widecoverage statistical parsers which recover both local and longrange dependencies directly and in a single passccg is a linguistically expressive but efficiently parseable lexicalized grammar formalism that was specifically designed to provide a basegenerative account of coordinate and relativized constructions like the following ccg directly captures the nonlocal dependencies involved in these and other constructions including control and raising via an enriched notion of syntactic types without the need for syntactic movement null elements or tracesit also provides a surfacecompositional syntaxsemantics interface in which monotonic rules of semantic composition are paired onetoone with rules of syntactic compositionthe corresponding predicateargument structure or logical form can therefore be directly obtained from any derivation if the semantic interpretation of each lexical entry is knownin this article and in ccgbank we approximate such semantic interpretations with dependency graphs that 
include most semantically relevant nonanaphoric local and longrange dependenciesalthough certain decisions taken by the builders of the original penn treebank mean that the syntactic derivations that can be obtained from the penn treebank are not always semantically correct subsequent work by bos et al and bos has demonstrated that the output of parsers trained on ccgbank can also be directly translated into logical forms such as discourse representation theory structures which can then be used as input to a theorem prover in applications like question answering and textual entailment recognitiontranslating the treebank into this more demanding formalism has revealed certain sources of noise and inconsistency in the original annotation that have had to be corrected in order to permit induction of a linguistically correct grammarbecause of this preprocessing the dependency structures in ccgbank are likely to be more consistent than those extracted directly from the treebank via heuristics such as those given by magerman and collins and therefore may also be of immediate use for dependencybased approacheshowever the structure of certain constructions such as compound nouns or fragments is deliberately underspecified in the penn treebankalthough we have attempted to semiautomatically restore the missing structure wherever possible in many cases this would have required additional manual annotation going beyond the scope of our projectwe suspect that these properties of the original treebank will affect any similar attempt to extract dependency structures or grammars for other expressive formalismsthe penn treebank is the earliest corpus of its kind we hope that our experiences will extend its useful life and help in the design of future treebankscombinatory categorial grammar was originally developed as a nearcontextfree theory of natural language grammar with a very free definition of derivational structure adapted to the analysis of coordination and unbounded dependency without movement or deletion transformationsit has been successfully applied to the analysis of coordination relative clauses and related constructions intonation structure binding and control and quantifier scope alternation in a number of languagessee steedman and baldridge for a recent reviewextensions of ccg to other languages and wordorders are discussed by hoffman kang bozsahin komagata steedman trechsel baldridge and c akıcı the derivations in ccgbank follow the analyses of steedman except where notedcategorial grammars are strongly lexicalized in the sense that the grammar is entirely defined by a lexicon in which words are associated with one or more specific categories which completely define their syntactic behaviorthe set of categories consists of basic categories and complex categories of the form xy or xy representing functors with argument category y and result category x functor categories of the form xy expect their argument y to its right whereas those of the form xy expect y to their left2 these functor categories encode subcategorization information that is the number and directionality of expected argumentsenglish intransitive verbs and verb phrases have the category snp they take a np to their left as argument and yield a sentenceenglish transitive verbs have the category np they take an np to their right to yield a verb phrase which in turn takes a np to its left to form a sentence s each syntactic category also has a corresponding semantic interpretation hence the lexical entry for ditransitive give 
can be written as follows3 in our translation algorithm we use simple wordword dependency structures to approximate the underlying semantic interpretationa universal set of syntactic combinatory rules defines how constituents can be combinedall variants of categorial grammar since ajdukiewicz and barhillel include function application where a functor xy or xy is applied to an argument y these rules give rise to derivations like the following4 this derivation is isomorphic to a traditional contextfree derivation tree like the following ccg additionally introduces a set of rule schemata based on the combinators of combinatory logic which enable succinct analyses of extraction and coordination constructionsit is a distinctive property of ccg that all syntactic rules are purely typedriven unlike traditional structuredependent transformationscomposition and substitution allow two functors to combine into another functor whereas typeraising is a unary rule that exchanges the roles of functor and argument for example the following is the derivation of a relative clause related to we will see further examples of their use latersuch rules induce additional derivational ambiguity even in canonical sentences like however our translation algorithm yields normal form derivations which use composition and typeraising only when syntactically necessaryfor coordination we will use a binarized version of the following ternary rule schema5 for further explanation and linguistics and computational motivation for this theory of grammar the reader is directed to steedman the syntactic derivations in ccgbank are accompanied with bilexical headdependency structures which are defined in terms of the lexical heads of functor categories and their argumentsthe derivation in corresponds to the following dependency structure which includes the longrange dependency between give and money the dependency structures in ccgbank are intended to include all nonanaphoric local and longrange dependencies relevant to determining semantic predicateargument relations and hence approximate more finegrained semantic representationsin this they differ crucially from the bilexical surface dependencies used by the parsing models of collins and charniak and returned by the dependency parser of mcdonald crammer and pereira in order to obtain such nonlocal dependencies certain types of lexical category such as relative pronouns or raising and control verbs require additional coindexation information we believe that ccgbanks extensive annotation of nonlocal predicateargument dependencies is one of its most useful features for researchers using other expressive grammar formalisms including lfg hpsg and tag facilitating comparisons in terms of error analyses of particular constructions or types of dependency such as nonsubject extracted relative clausesbecause these dependency structures provide a suitable approximation of the underlying semantics and because each interpretation unambiguously corresponds to one dependency structure we furthermore follow lin and carroll minnen and briscoe in regarding them as a fairer and ultimately more useful standard against which to evaluate the output of parsers trained on ccgbank than the syntactic derivations themselvesthe wall street journal subcorpus of the penn treebank contains about 50000 sentences or 1 million words annotated with partofspeech tags and phrasestructure trees these trees are relatively flat modals and auxiliaries introduce a new vp level whereas verb modifiers and arguments 
typically appear all at the same level as sisters of the main verba similarly flat annotation style is adopted at the sentence levelnps are flat as well with all complex modifiers appearing at the same np level and compound nouns typically lacking any internal structurethe translation algorithm needs to identify syntactic heads and has to distinguish between complements and modifiersin the treebank this information is not explicitalthough some nonterminal nodes carry additional function tags such as sbj or tmp truly problematic cases such as prepositional phrases are often marked with tags such as clr or dir which are not always reliable or consistent indicators that a constituent is a modifier or an argumentthe treebank uses various types of null elements and traces to encode nonlocal dependenciesthese are essential for our algorithm since they make it possible to obtain correct ccg derivations for relative clauses whquestions and coordinate constructions such as right node raisingtheir treatment is discussed in sections 62 and 63in order to obtain ccg derivations from the penn treebank we need to define a mapping from phrase structure trees to ccg derivations including a treatment of the null elements in the treebankwe also need to modify the treebank where its syntactic analyses differ from ccg and clean up certain sources of noise that would otherwise result in incorrect ccg derivationswe will begin by ignoring null elements and assume that penn treebank trees are entirely consistent with ccg analysesthe basic algorithm then consists of four steps similar algorithms for phrasestructure trees without traces or other null elements have been suggested by buszkowski and penn and osborne and briscoe we illustrate this basic algorithm using the previous example then we will extend this algorithm to deal with coordination and introduce a modification to cope with the fact that certain word classes such as participials can act as modifiers of a large number of constituent typessection 5 summarizes the most important preprocessing steps that were necessary to obtain the desired ccg analyses from the treebank treessection 6 extends this basic algorithm to deal with the null elements in the treebankfirst the constituent type of each node complement or adjunct is determined using heuristics adapted from magerman and collins which take the label of a node and its parent into account6 we assume that np daughters of vps are complements unless they carry a function tag such as loc dir tmp and so on but treat all pps as adjuncts unless they carry the clr function tagin our example we therefore treat passing as transitive even though it should subcategorize for the pp this binarization process inserts dummy nodes into the tree such that all children to the left of the head branch off in a rightbranching tree and then all children to the right of the head branch off in a leftbranching tree7 we assign ccg categories to the nodes in this binary tree in the following manner 431 the root nodethe category of the root node is determined by the label of the root of the treebank tree 8 if the root node has the category s it typically carries a feature that distinguishes different types of sentences such as declaratives whquestions yesno questions or fragments in our running example the root is sdcl because its treebank label is s and its head word the auxiliary has the pos tag vbz432 head and complementthe category of a complement child is defined by a similar mapping from treebank labels to categories for 
example np np pp pp9 the ccg category of the head is a function which takes the category of the complement as argument and returns the category of the parent nodethe direction of the slash is given by the position of the complement relative to the head the vp that is headed by the main verb passing is a complement of the auxiliarybecause the pos tag of passing is vbg the ccg category of the complement vp is sngnp and the lexical category of is is therefore is just passing the buck to young people other vp features include to b spt pss or ng 433 head and adjunctaccording to the treebank annotation and the assumptions of the algorithm our example has two vp adjuncts the adverb just and because of its dir function tag the pp to young peoplein both cases the adjunct category depends on the category of the parent and the category of the head child is copied from the parent given a parent category c the category of an adjunct child is a unary functor cc if the adjunct child is to the left of the head child or cc if it is to the right function composition reduces the number of lexical categories of adjuncts of the head in most cases the category c is equal to the parent category c without any features such as dcl ng and so forth and the modifier combines with the head via simple function applicationas shown in figure 1 in many cases a more elegant analysis can be obtained if we allow modifiers to compose with the headfor example regularly has the category in sentences such as i visit certain places regularly because it modifies the verb phrase visit certain places which has the category sdclnpbut in the corresponding relative clause places that i visit regularly or with heavy np shift regularly modifies visit that is a constituent with category npwithout function composition the category of regularly would have to be npnp but composition allows the ordinary category to also work in this casetherefore if the parent category c is of the form x the algorithm strips off all outermost forward arguments from c to obtain csimilarly if c is of the form x all outermost backward arguments are stripped off from c to obtain c434 head and punctuation markwith the exception of some dashes and parentheses the category of a punctuation mark is identical to its pos tag and the head has the same category as its parent435 the final derivationfigure 2 shows the complete ccg derivation of our examplethe category assignment procedure corresponds to a topdown normalform derivation which almost always uses function applicationin the basic case presented here composition is only used to provide a uniform analysis of adjunctslongrange dependencies represented in the penn treebank by traces such as t and rnr require extensions to the basic algorithm which result in derivations that make use of typeraising composition and substitution rules like those in wherever syntactically necessarywe defer explanation of these rules until section 6 which presents the constructions that motivate themfinally we need to obtain the wordword dependencies which approximate the underlying predicateargument structurethis is done by a bottomup procedure which simply retraces the steps in the ccg derivation that we have now obtainedthe ccg derivation with corresponding dependencies and dependency graph for example all categories in ccgbank including results and arguments of complex categories are associated with a corresponding list of lexical headsthis list can be empty or it can consist of one or more tokenslexical categories have one lexical 
head the word itselffor example he for the first np and is for the all dependencies are defined in terms of the heads of lexical functor categories and of their argumentsin order to distinguish the slots filled by different arguments we number the arguments of complex lexical categories from left to right in the category notation for example np2 or 2np3in lexical functor categories such as that of the auxiliary the lexical head of all result categories is identical to the lexical head of the entire category but in functor categories that represent modifiers such as the adverb the head of the result comes from the argument we use indices on the categories to represent this information iiin ccgbank modifier categories are easily identified by the fact that they are of the form xx or where x does not have any of the features described previously such as dcl bsimilarly determiners take a noun as argument to form a noun phrase whose lexical head comes from the noun npnbinithus the lexical head of the noun phrase the buck is buck not thewe also use this coindexation mechanism for lexical categories that project nonlocal dependenciesfor instance the category of the auxiliary mediates a dependency between the subject and the main verb like all lexical categories of auxiliaries modals and subjectraising verbs the head of the subject np is coindexed with the head of subject inside the vp argument the set of categories that project such dependencies is not acquired automatically but is given to the algorithm which creates the actual dependency structuresa complete list of the lexical entries in sections 0221 of the treebank which use this coindexation mechanism to project nonlocal dependencies is given in the ccgbank manual we believe that in practice this mechanism is largely correct even though it is based on the assumption that all lexical categories that have the same syntactic type project the same dependenciesit may be possible to use the indices on the pronull elements in the treebank to identify and resolve ambiguous cases we leave this to future research10 function application and composition typically result in the instantiation of the lexical head of an argument of some functor category and therefore create new dependencies whereas coordination creates a new category whose lexical head lists are concatenations of the head lists of the conjunctswhen the np2 passing is combined with the np the buck the lexical head of the np2 is instantiated with bucksimilarly when the adverb just 2 is applied to passing the buck a dependency between just and passing is created however because 2 is a modifier category the head of the resulting sngnp is passing not just in the next step this sngnp is combined with the auxiliary 2the np in the 2 argument of the auxiliary unifies with the np1 argument of passingbecause the np in the 2 is also coindexed with the subject np1 of the auxiliary the np of the resulting sdclnp now has two unfilled dependencies to the subject np1 of is and passingwhen the entire verb phrase is combined with the subject he fills both slots figure 2 shows the resulting ccg derivation and the corresponding list of word word dependencies for our example sentenceit is the latter structure that we claim approximates for present purposes the predicateargument structure or interpretation of the sentence and provides the gold standard against which parsers can be evaluatedin order to deal with coordination both the tree binarization and the category assignment have to be modifiedin ccgbank 
coordination is represented by the following binary rule schemata rather than the ternary rule compare to steedman 11 in order to obtain this analysis from treebank trees a separate node that spans only the conjuncts and the conjunction or punctuation marks is inserted if necessaryidentifying the conjuncts often requires a considerable amount of preprocessingthese trees are then transformed into strictly rightbranching binary treesthe dummy nodes inserted during binarization receive the same category as the conjuncts but additionally carry a feature conj an additional modification of the grammar is necessary to deal with unlike coordinate phrases namely coordinate constructions where the conjuncts do not belong to the same syntactic category such constructions are difficult for any formalismthis phenomenon could be handled elegantly with a feature hierarchy over categories as proposed by copestake villavicencio and mcconville because the induction of such a hierarchy was beyond the scope of our project we modify our grammar slightly and allow the algorithm to use instantiations of a special coordination rule schema such as this enables us to analyze the previous example as in ccg all languagespecific information is associated with the lexical categories of wordsthere are many syntactic regularities associated with word classes however which may potentially generate a large number of lexical entries for each item in that classone particularly frequent example of this is clausal adjunctsfigure 3 illustrates how the basic algorithm described above leads to a proliferation of adjunct categoriesfor example a past participle such as used would receive a different category in a reduced relative like figure 3 from its standard category as a consequence modifiers of used would also receive different categories depending on what occurrence of used they modifythis is undesirable because we are only guaranteed to acquire a complete lexicon if we have seen all participles in all their possible surface positionssimilar regularities have been recognized and given a categorial analysis by carpenter who advocates lexical rules to account for the use of predicatives as adjunctsin a statistical model the parameters for such lexical rules are difficult to estimatewe therefore follow the approach of aone and wittenburg and implement these typechanging typechanging rules reduce the number of lexical category types required for complex adjuncts operations in the derivational syntax where these generalizations are captured in a few rulesif these rules apply recursively to their own output they can generate an infinite set of category types leading to a shift in generative power from contextfree to recursively enumerable like aone and wittenburg we therefore consider only a finite number of instantiations of these typechanging rules namely those which arise when we extend the category assignment procedure in the following way for any sentential or verb phrase modifier to which the original algorithm assigns category xx apply the following typechanging rule in reverse where s is the category that this constituent obtains if it is treated like a head node by the basic algorithms has the appropriate verbal features and can be snp or snpsome of the most common typechanging rules are the following for various types of reduced relative modifier hockenmaier and steedman ccgbank in order to obtain the correct predicateargument structure the heads of corresponding arguments in the input and output category are unified in 
written english certain types of npextraposition require a comma before or after the extraposed noun phrase factories booked 23674 billion in orders in september np nearly the same as the 23679 billion in august because any predicative noun phrase could be used in this manner this construction is also potentially problematic for the coverage of our grammar and lexiconhowever the fact that a comma is required allows us to use a small number of binary typechanging rules such asthe translation algorithm presumes that the trees in the penn treebank map directly to the desired ccg derivationshowever this is not always the case either because of noise in the treebank annotation differences in linguistic analysis or because ccg like any other expressive linguistic formalism requires information that is not present in the treebank analysisbefore translation a number of preprocessing steps are therefore requireddisregarding the most common preprocessing step preprocessing affects almost 43 of all sentenceshere we summarize the most important preprocessing steps for those constructions that do not involve nonlocal dependenciespreprocessing steps required for constructions involving nonlocal dependencies are mentioned in section 6remaining problems are discussed in section 7more detailed and complete descriptions can be found in the ccgbank manualannotation errors and inconsistencies in the treebank affect the quality of any extracted grammar or lexiconthis is especially true for formalisms with an extended domain of locality such as tag or ccg where a single elementary tree or lexical category may contain information that is distributed over a number of distinct phrasestructure rulespartofspeech tagging errorsratnaparkhi estimates a pos tagging error rate of 3 in the treebankthe translation algorithm is sensitive to these errors and inconsistencies because pos tagging errors can lead to incorrect categories or to incorrect features on verbal categories for instance if a simple past tense form occurs in a verb phrase which itself is the daughter of a verb phrase whose head is an inflected verb it is highly likely that it should be a past participle insteadusing the verb form itself and the surrounding context we have attempted to correct such errors automaticallyin 7 of all sentences our algorithm modifies at least one pos tagquotation marksalthough not strictly coming under the heading of noise quotation marks because a number of problems for the translation algorithmalthough it is tempting to analyze them similarly to parentheticals quotations often span sentence boundaries and consequently quotation marks appear to be unbalanced at the sentence levelwe therefore decided to eliminate them during the preprocessing stageunlike a handwritten grammar the grammar that is implicit in a treebank has to cover all constructions that occur in the corpusexpressive formalisms such as ccg provide explicit analyses that contain detailed linguistic informationfor example ccg derivations assign a lexical head to every constituent and define explicit functorargument relations between constituentsin a phrasestructure grammar analyses can be much coarser and may omit more finegrained structures if they are assumed to be implicit in the given analysisfurthermore constructions that are difficult to analyze do not need to be given a detailed analysisin both cases the missing information has to be added before a treebank tree can be translated into ccgif the missing structure is implicit in the treebank analysis this 
step is relatively straightforward but constructions such as parentheticals multiword expressions and fragments require careful reanalysis in order to avoid lexical coverage problems and overgenerationdetecting coordinationalthough the treebank does not explicitly indicate coordination it can generally be inferred from the presence of a conjunctionhowever in listlike nominal coordinations the conjuncts are only separated by commas or semicolons and may be difficult to distinguish from appositivesthere are also a number of verbphrase or sentential coordinations in the treebank where shared arguments or modifiers simply appear at the same level as conjuncts and the conjunction12 in ccg the conjuncts and conjunction form a separate constituentin 18 of all sentences additional preprocessing is necessary to obtain this structurenoun phrases and quantifier phrasesin the penn treebank nonrecursive noun phrases have remarkably little internal structure some but not all of the structure that is required to obtain a linguistically adequate analysis can be inferred automaticallythe ccgbank grammar distinguishes noun phrases np from nouns n and treats determiners as functions from nouns hockenmaier and steedman ccgbank to noun phrases therefore we need to insert an additional noun level which also includes the adjuncts dutch and publishing which receive both the category nn however because nominal compounds in the treebank have no internal bracketing we always assume a rightbranching analysis and are therefore not able to obtain the correct dependencies for cases such as deathsqps are another type of constituent where the treebank annotation lacks internal structure we use a number of heuristics to identify the internal structure of these constituents for example to detect conjuncts and prepositionsthe above example is then rebracketed fragments124 of the sentences in the penn treebank correspond to or contain fragmentary utterances for which no proper analysis could be given frags are often difficult to analyze and the annotation is not very consistentthe ccgbank manual lists heuristics that we used to infer additional structurefor example if a node is labeled frag and there is only one daughter as in the first example we treat the tree as if it was labeled with the label of its daughter parentheticalsparentheticals are insertions that are often enclosed in parentheses or preceded by a dashunless the parenthetical element itself is of a type that could be a modifier by itself we assume that the opening parenthesis or first dash takes the parenthetical element as argument and yields a modifier of the appropriate type this results in the following derivation which ignores the fact that parentheses are usually balanced the thirdhighest in the developing world we use a similar treatment for other constituents that appear after colons and dashes such as sentencefinal appositives or parentheticals that are not marked as prnoverall these changes affect 87 of all sentencesmultiword expressionsunder the assumption that every constituent has a lexical head that corresponds to an individual orthographic word multiword expressions require an analysis where one of the items subcategorizes for a specific syntactic type that can only correspond to the other lexical itemwe only attempted an analysis for expressions that are either very frequent or where the multiword expression has a different subcategorization behavior from the head word of the expressionthis includes some closedclass items including connectives 
comparatives monetary expressions and dates affecting 238 of all sentencesadditionally there are a number of constructions whose treebank annotation differs from the standard ccg analysis for linguistic reasonsthis includes small clauses as well as piedpiping subject extraction from embedded sentences and argument cluster coordination small clausesthe treebank treats constructions such as the following as small clauses pollard and sag and steedman argue against this analysis on the basis of extractions like what does the country want forgiven which suggest that these cases should rather be treated as involving two complementswe eliminate the small clause and transform the trees such that the verb takes both np children of the small clause as complements thereby obtaining the lexical category npnp for makesbecause our current grammar treats predicative nps like ordinary nps we are not able to express the relationship between it and supplier or between pool and hostagea correct analysis would assign a functor category snomnp to predicative np arguments of verbs like makes not only in these examples but also in copular sentences and appositivesthe other case where small clauses are used in the treebank includes absolute with and though constructions here we also assume that the subordinating conjunction takes the individual constituents in the small clause as complements and with obtains therefore the category ppnpagain a predicative analysis of the pp might be desirable in order to express the dependencies between limit and in effecteliminating small clauses affects 82 of sentencesthe treatment of nonlocal dependencies is one of the most important points of difference between grammar formalismsthe treebank uses a large inventory of null element types and traces including coindexation to represent longrange dependencieshockenmaier and steedman ccgbank because standard treebank parsers use probabilistic versions of contextfree grammar they are generally trained and tested on a version of the treebank in which these null elements and indices are deleted or ignored or in the case of collins model 3 only partially capturednonlocal dependencies are therefore difficult to recover from their outputin ccg longrange dependencies are represented without null elements or traces and coindexation is restricted to arguments of the same lexical functor categoryalthough this mechanism is less expressive than the potentially unrestricted coindexation used in the treebank it allows parsers to recover nonanaphoric longrange dependencies directly without the need for further postprocessing or trace insertionpassivein the treebank the surface subject of a passive sentence is coindexed with a null element in direct object position our translation algorithm uses the presence of the null element to identify passive mode but ignores it otherwise assigning the ccg category spssnp to noted13 the dependency between the subject and the participial is mediated through the lexical category of the copula 14 in order to reduce lexical ambiguity and deal with data sparseness we treat optional bypps which contain the logical subject as adjuncts rather than arguments of the passive participle15 here is the resulting ccg derivation together with its dependency structure 13 in the case of verbs like pay for which take a pp argument the null element appears within the ppin order to obtain the correct lexical category of paid we treat the null element like an argument of the preposition and percolate it up to the pp level14 we 
assume that the fact that the subject np argument of passive participials with category spssnp identifies the patient rather than agent is represented in the semantic interpretation of noted for example axnotedx one where one is simply a placeholder for a bindable argument like the relational grammarians chˆomeur relation15 extractions such as who was he paid by require the bypp to be treated as an argument and it would in fact be better to use a lexical rule to generate ppby from spssnp and vice versainfinitival and participial vps gerundsin the treebank participial phrases gerunds imperatives and tovp arguments are annotated as sentences with a null subject we treat these like verb phrases with the appropriate feature depending on the partofspeech tag of the verbcontrol and raisingccgbank does not distinguish between control and raisingin the treebank subjectcontrol and subjectraising verbs also take an s complement with a null subject that is coindexed with the subject of the main clause we ignore the coindexation in the treebank and treat all control verbs as nonarbitrary controlas indicated by the index i we assume that all verbs which subcategorize for a verb phrase complement and take no direct object mediate a dependency between their subject and their complementbecause the copula and to mediate similar dependencies between their subjects and complements but do not fill their own subject dependencies japanese has the following dependencies in the treebank objectraising verbs take a small clause argument with nonempty subjectfollowing our treatment of small clauses we modify this tree so that we obtain the lexical category npi for wanted which mediates the dependency between debt and forgiven16 extraposition of appositivesappositive noun phrases can be extraposed out of a sentence or verb phrase resulting in an anaphoric dependencythe penn treebank analyzes these as adverbial small clauses with a coindexed null subject we also treat these appositives as sentential modifiershowever the corresponding ccg derivation deliberately omits the dependency between dummies and drivers17 this derivation uses one of the special binary typechanging rules that takes into account that these appositives can only occur adjacent to commasthe penn treebank analyzes whquestions relative clauses topicalization of complements tough movement cleft and parasitic gaps in terms of movementthese constructions are frequent the entire treebank contains 16056 t traces including 8877 np traces 4120 s traces 2465 advp traces 422 pp traces and 210 other t tracessections 0221 contain 5288 full subject relative clauses as well as 459 full and 873 reduced object relative clausesthe dependencies involved in these constructions however are difficult to obtain from the output of standard parsers such as collins or charniak and require additional postprocessing that may introduce further noise and errorsin those cases where the trace corresponds to a moved argument the corresponding longrange dependencies can be recovered directly from the correct ccg derivationin the treebank the moved constituent is coindexed with a trace which is inserted at the extraction site 17 we regard this type of dependency as anaphoric rather than syntactic on the basis of its immunity to such syntactic restrictions as subject islandsccg has a similarly uniform analysis of these constructions albeit one that does not require syntactic movementin the ccg derivation of the example the relative pronoun has the category whereas the verb bought just 
bears the standard transitive category npthe subject np and the incomplete vp combine via typeraising and forward composition into an sdclnp which the relative pronoun then takes as its argument the coindexation on the lexical category of the relative pronoun guarantees that the missing object unifies with the modified np and we obtain the desired dependencies this analysis of movement in terms of functors over incomplete constituents allows ccg to use the same category for the verb when its arguments are extracted as when they are in situthis includes not only relative clauses and whquestions but also piedpiping tough movement topicalization and cleftsfor our translation algorithm the t traces are essential they indicate the presence of a longrange dependency for a particular argument of the verb and allow us to use a mechanism similar to gpsgs slashfeature passing so that longrange dependencies are represented in the goldstandard dependency structures of the test and training datathis is crucial to correctly inducing and evaluating grammars and parsers for any expressive formalism including tag gpsg hpsg lfg and mpga detailed description of this mechanism and of our treatment of other constructions that use t traces can be found in the ccgbank manualthis algorithm works also if there is a coordinate structure within the relative clause such that there are two t traces resulting in the following longrange dependencies that the verb takes the vp and the np argument in reversed order and change the tree accordingly before translation resulting in the correct ccg analysis we obtain the following longrange dependencies because our grammar does not use baldridges modalities or steedmans equivalent rulebased restrictions which prohibit this category from applying to in situ nps this may lead to overgeneralizationhowever such examples are relatively frequent there are 97 instances of np in sections 0221 and to omit this category would reduce coverage and recovery of longrange extractionsby percolating the t trace up to the sqlevel in a similar way to relative clauses and treating which as syntactic head of the whnp we obtain the desired ccg analysis we coindex the head of the extracted np with that of the noun ni and the subject of do with the subject of its complement npi to obtain the following dependencies in this example we need to rebracket the treebank tree so that details of forms a constituent18 apply a special rule to assign the category np to the preposition and combine it via typeraising and composition with detailsthis constituent is then treated as an argument of the relative pronoun with appropriate coindexation j we obtain the following nonlocal dependencies19 because adjuncts generally do not extract unboundedly20 the corresponding traces can be ignored by the translation procedureinstead the dependency between when and dropped is directly established by the fact that dropped is the head of the complement sdcl whextraction which use the same lexical categories as for in situ complements they also provide an analysis of right node raising constructions without introducing any new lexical categoriesin the treebank analysis of right node raising the shared constituent is coindexed with two rnr traces in both of its canonical positions we need to alter the translation algorithm slightly to deal with rnr traces in a manner essentially equivalent to the earlier treatment of t whtracesdetails are in the ccgbank manualthe ccg derivation for the above example is as follows the right node 
raising dependencies are as follows our algorithm works also if the shared constituent is an adjunct or if two conjoined noun phrases share the same head which is also annotated with rnr tracesalthough there are only 209 sentences with rnr traces in the entire treebank right node raising is actually far more frequent because rnr traces are not used when the conjuncts consist of single verb tokensthe treebank contains 349 vps in which a verb form is immediately followed by a conjunction and another verb form and has an np sister in ccgbank sections 0221 alone contain 444 sentences with verbal or adjectival right node raisingright node raising is also marked in the penn treebank using rnr traces for parasitic gap constructions such as the following these sentences require rules based on the substitution combinator s our treatment of right node raising traces deals with the first case correctly via the backward crossing rule s since the pps are both argumentsunfortunately as we saw in section 3 the treebank classifies such pps as directional adverbials hence we translate them as adjuncts and lose such examples of which there are at least three more all also involving from and to as in the case of leftward extraction including such longrange dependencies in the dependency structure is crucial to correct induction and evaluation of all expressive grammar formalismsalthough no leftwardextracting parasitic gaps appear to occur in the treebank our grammar and model predicts examples like the following and will cover them when encountered conflict which the system was held to cause rather than resolve 641 argument cluster coordinationif two vps with the same head are conjoined the second verb can be omittedthe treebank encodes these constructions as a vpcoordination in which the second vp lacks a verbthe daughters of the second conjunct are coindexed with the corresponding elements in the first conjunct using a index in the ccg account of this construction 5 million right away and additional amounts in the future form constituents which are then coordinatedthese constituents are obtained by typeraising and composing the arguments in each conjunct yielding a functor which takes a verb with the appropriate category to its left to yield a verb phrase then the argument clusters are conjoined and combine with the verb via function application21 this construction is one in which the ccgbank headdependency structure fails to capture the full set of predicateargument structure relations that would be implicit in a full logical form that is the dependency structure does not express the fact that right away takes scope over 5 million and in future over additional amounts rather than the other way aroundhowever this information is included in the full surfacecompositional semantic interpretation that is built by the combinatory rulesbecause the treebank constituent structure does not correspond to the ccg analysis we need to transform the tree before we can translate itduring preprocessing we create a copy of the entire argument cluster which corresponds to the constituent structure of the ccg analysisduring normal category assignment we use the first conjunct in its original form to obtain the correct categories of all constituentsin a later stage we use typeraising and composition to combine the constituents within each argument clusterfor a detailed description of this algorithm and a number of variations on the original treebank annotation that we did not attempt to deal with the interested reader is 
referred to the ccgbank manualthere are 226 instances of argumentcluster coordination in the entire penn treebankthe algorithm delivers a correct ccg derivation for 146 of thesetranslation failures are due to the fact that the algorithm can at present only deal with this construction if the two conjuncts are isomorphic in structure which is not always the casethis is unfortunate because ccg is particularly suited for this constructionhowever we believe that it would be easier to manually reannotate those sentences that are not at present translated than to try to adapt the algorithm to deal with all of them individually this construction cannot be handled with the standard combinatory rules of ccg that are assumed for englishinstead steedman proposes an analysis of gapping that uses a unificationbased decomposition rulecategorial decomposition allows a category type to be split apart into two subparts and is used to yield an analysis of gapping that is very similar to that of argument cluster coordination22 22 it is only the syntactic types that are decomposed or recovered in this way the corresponding semantic entities and in particular the interpretation for the gapped verb group can talk must be available from the left conjuncts information structure via anaphorathat is decomposition adds very little to the categorial information available from the right conjunct except to make the syntactic types yield an s the real work is done in the semanticsbecause the derivation is not a tree anymore and the decomposed constituents do not correspond to actual constituents in the surface string this analysis is difficult to represent in a treebankthe 107 sentences that contain sentential gapping are therefore omitted in the current version of ccgbank even though special coordination rules that mimic the decomposition analysis are conceivablebesides the cases discussed herein the treebank contains further kinds of null elements all of which the algorithm ignoresthe null element ich which appears 1240 times is used for extraposition of modifierslike ellipsis this is a case of a semantic dependency which we believe to be anaphoric and therefore not reflected in the syntactic categoryfor this reason we treat any constituent that is coindexed with an ich as an adjunctthe null element ppa is used for genuine attachment ambiguitiessince the treebank manual states that the actual constituent should be attached at the more likely attachment site we chose to ignore any ppa null elementour algorithm also ignores the null element which occurs 582 times and indicates a missing predicate or a piece thereof it is used for vp ellipsis and can also occur in conjunction with a vp proform do or in comparatives 23 we can now define the complete translation algorithm including the modifications necessary to deal with traces and argument clusters 23 we believe that both conjuncts in the first example are complete sentences which are related anaphoricallytherefore the syntactic category of do is sdclnp not vpin the second example indicates a semantic argument of expected that we do not reflect in the syntactic categorythe successive steps have the following more detailed character preprocesstree correct tagging errors ensure the constituent structure conforms to the ccg analysiseliminate quotescreate copies of coordinated argument clusters that correspond to the ccg analysis determineconstituenttypes for each node determine its constituent type makebinary binarize the tree percolatetraces determine the ccg category of t 
and rnr traces in complement position and percolate them up to the appropriate level in the tree assigncategories assign ccg categories to nodes in the tree starting at the root nodenodes that are coindexed with rnr traces receive the category of the corresponding tracesargument clusters are ignored in this step treatargumentclusters assign categories to argument clusters cuttracesandunaryrules cut out constituents that are not part of the ccg derivation such as traces null elements and the copy of the first conjunct in argument cluster coordinationeliminate resulting unary projections of the form x x verifyderivation discard those trees for which the algorithm does not produce a valid ccg derivationin most cases this is due to argument cluster coordination that is not annotated in a way that our algorithm can deal with assigndependencies coindex specific classes of lexical categories to project nonlocal dependencies and generate the wordword dependencies that constitute the underlying predicateargument structurein a number of cases missing structure or a necessary distinction between different constructions needed to inform the translation is missing and cannot be inferred deterministically from the treebank analysis without further manual reannotationwe discuss these residual problems here because they are likely to present obstacles to the extraction of linguistically adequate grammars in any formalismour translation algorithm requires a distinction between complements and adjunctsin many cases this distinction is easily read off the treebank annotation but it is in general an open linguistic problem because the treebank annotation does not explicitly distinguish between complements and adjuncts researchers typically develop their own heuristicssee for example kinyon and prolo for prepositional phrases we rely on the clr function tag to identify complements although it is unclear whether the treebank annotators were able to use this tag consistentlynot all pp arguments seem to have this function tag and some pps that have this tag may have been better considered adjuncts for tag chen bangalore and vijayshanker show that different heuristics yield grammars that differ significantly in size coverage and linguistic adequacywe have not attempted such an investigationin a future version of ccgbank it may be possible to follow shen and joshi in using the semantic roles of the proposition bank to distinguish arguments and adjunctsparticleverb constructions are difficult to identify in the treebank because particles can be found as prt advpclr and advptherefore verbs in the ccgbank grammar do not subcategorize for particles which are instead treated as adverbial modifierscompound nouns are often inherently ambiguous and in most cases the treebank does not specify their internal structure in order to obtain the correct analysis manual reannotation would be requiredbecause this was not deemed feasible within our project compound nouns are simply translated into strictly rightbranching binary trees which yields the correct analysis in some but not all casesthis eschews the computational problem that a grammar for compound nouns induces all possible binary bracketings but is linguistically incorrecta similar problem arises in compound nouns that involve internal coordination we include the following rule in our grammar which yields a default dependency structure corresponding to nn coordination conj n n this rule allows us to translate the above tree as follows nn cotton n n conj and n n fibers the 
treebank markup of np appositives is indistinguishable from that of np lists therefore our current grammar does not distinguish between appositives and np coordination even though appositives should be analyzed as predicative modifiersthis leads to a reduction of ambiguity in the grammar but is semantically incorrect our current grammar does not implement number agreement one problem that prevented us from including number agreement is the abovementioned inability to distinguish np lists and appositivesin the penn treebank all relative clauses are attached at the noun phrase levelthis is semantically undesirable because a correct interpretation of restrictive relative clauses can only be obtained if they modify the noun whereas nonrestrictive relative clauses are noun phrase modifiersbecause this distinction requires manual inspection on a casebycase basis we were unable to modify the treebank analysisthus all ccgbank relative pronouns have categories of the form rather than this will make life difficult for those trying to provide a montaguestyle semantics for relative modifierslike most other problems that we were not able to overcome this limitation of the treebank ultimately reflects the sheer difficulty of providing a consistent and reliable annotation for certain linguistic phenomena such as modifier scope771 heavy np shiftin english noun phrase arguments can be shifted to the end of the sentence if they become too heavy this construction was studied extensively by ross the ccg analysis uses backward crossed composition to provide an analysis where brings has its canonical lexical category np because the penn treebank does not indicate heavy np shift the corresponding ccgbank derivation does not conform to the desired analysis and requires additional lexical categories which may lead to incorrect overgeneralizations24 this will also be a problem in using the penn treebank or ccgbank for any theory of grammar that treats heavy np shift as extraction or movement8coverage size and evaluation here we first examine briefly the coverage of the translation algorithm on the entire penn treebankthen we examine the ccg grammar and lexicon that are obtained from ccgbankalthough the grammar of ccg is usually thought of as consisting only of the combinatory rule schemata such as and we are interested here in the instantiation of these rules in which the variables x and y are bound to values such as s and np because statistical parsers such as hockenmaier and steedmans or clark and currans are trained on counts of such instantiationswe report our results on sections 0221 the standard training set for penn treebank parsers and use section 00 to evaluate coverage of the training set on unseen datasections 0221 contains 39604 sentences whereas section 00 consists of 1913 sentences ccgbank contains 48934 of the 49208 sentences in the entire penn treebankthe missing 274 sentences could not be automatically translated to ccgthis includes 107 instances of sentential gapping a construction our algorithm does not cover and 66 instances of nonsentential gapping or argumentcluster coordination the remaining translation failures include trees that consist of sequences of nps that are not separated by commas some fragments and a small number of constructions involving longrange dependencies such as whextraction parasitic gaps or argument cluster coordinations where the translation did not yield a valid ccg derivation because a complement had been erroneously identified as an adjunct24 backward crossed 
composition is also used by steedman and baldridge to account for constraints on preposition stranding in englishbecause this rule in its unrestricted form leads to overgeneralization baldridge restricts crossing rules via the x modalitythe current version of ccgbank does not implement modalities but because the grammar that is implicit in ccgbank only consists of particular seen rule instantiations it may not be affected by such overgeneration problemsa ccg lexicon specifies the lexical categories of words and therefore contains the entire languagespecific grammarhere we examine the size and coverage of the lexicon that consists of the wordcategory pairs that occur in ccgbankthis lexicon could be used by any ccg parser although morphological generalization and ways to treat unknown words are likely to be necessary to obtain a more complete lexiconnumber of entriesthe lexicon extracted from sections 0221 has 74669 entries for 44210 word types many words have only a small number of categories but because a number of frequent closedclass items have a large number of categories the expected number of lexical categories per token is 192number and growth of lexical category typeshow likely is it that we have observed the complete inventory of category types in the english languagethere are 1286 lexical category types in sections 0221figure 4 examines the growth of the number of lexical category types as a function of the amount of data translated into ccgthe loglog plot the growth of lexical category types and rule instantiations a loglog plot of the rank order and frequency of the lexical category types and instantiations of combinatory rules in ccgbank of the rank order and frequency of the lexical categories in figure 5 indicates that the underlying distribution is roughly zipfian with a small number of very frequent categories and a long tail of rare categorieswe note 439 categories that occur only once and only 556 categories occur five times or moreinspection suggests that although some of the category types that occur only once are due to noise or annotation errors most are correct and are in fact required for certain constructionstypical examples of rare but correct and necessary categories are relative pronouns in piedpiping constructions or verbs which take expletive subjectslexical coverage on unseen datathe lexicon extracted from sections 0221 contains the necessary categories for 940 of all tokens in section 00 the missing entries that would be required for the remaining 6 of tokens fall into two classes 1728 or 38 correspond to completely unknown words that do not appear at all in section 0221 whereas the other 22 of tokens do appear in the training set but not with the categories required in section 00all statistical parsers have to be able to accept unknown words in their input regardless of the underlying grammar formalismtypically frequency information for rare words in the training data is used to estimate parameters for unknown words however in a lexicalized formalism such as ccg there is the additional problem of missing lexical entries for known wordsbecause lexical categories play such an essential role in ccg even a small fraction of missing lexical entries can have a significant effect on coverage since the parser will not be able to obtain the correct analysis for any sentence that contains such a tokenhockenmaier and steedman show that this lexical coverage problem does in practice have a significant impact on overall parsing accuracyhowever because many of the known 
words with missing entries do not appear very often in the training data hockenmaier demonstrates that this problem can be partially alleviated if the frequency threshold below which rare words are treated as unseen is set to a much higher value than for standard treebank parsersan alternative approach advocated by clark and curran is to use a supertagger which predicts lexical ccg categories in combination with a discriminative parsing modelsize and growth of instantiated syntactic rule setstatistical ccg parsers such as hockenmaier and steedman or clark and curran are trained on counts of specific instantiations of combinatory rule schemata by categorytypesit is therefore instructive to consider the frequency distribution of these categoryinstantiated rulesthe grammar for sections 0221 has 3262 instantiations of general syntactic combinatory rules like those in with specific categoriesof these 1146 appear only once and 2027 appear less than five timesalthough there is some noise many of the ccg rules that appear only once are linguistically correct and should be used by the parserthey include certain instantiations of typeraising coordination or punctuation rules or rules involved in argument cluster coordinations piedpiping constructions or questions all of which are rare in the wall street journalas can be seen from figure 5 the distribution of rule frequencies is again roughly zipfian with the 10 most frequent rules accounting for 592 of all rule instantiations the growth of rule instantiations is shown in figure 4if function tags are ignored the grammar for the corresponding sections of the original treebank contains 12409 phrasestructure rules out of which 6765 occur only once these rules also follow a zipfian distribution the fact that both category types and rule instances are also zipfian for ccgbank despite its binarized rules shows that the phenomenon is not just due to the treebank annotation with its very flat rulessyntactic rule coverage on unseen datasyntactic rule coverage for unseen data is almost perfect 51932 of the 51984 individual rule instantiations in section 00 have been observed in section 0221out of the 52 missing rule instantiation tokens six involve coordination and three punctuationone missing rule is an instance of substitution two missing rules are instances of typeraised argument types combining with a verb of a rare typethis paper has presented an algorithm which translates penn treebank phrasestructure trees into ccg derivations augmented with wordword dependencies that approximate the underlying predicateargument structurein order to eliminate some of the noise in the original annotation and to obtain linguistically adequate derivations that conform to the correct analyses proposed in the literature considerable preprocessing was necessaryeven though certain mismatches between the syntactic annotations in the penn treebank and the underlying semantics remain and will affect any similar attempt to obtain expressive grammars from the treebank we believe that ccgbank the resulting corpus will be of use to the computational linguistics community in the following waysccgbank has already enabled the creation of several robust and accurate widecoverage ccg parsers including hockenmaier and steedman clark hockenmaier and steedman hockenmaier and clark and curran although the construction of full logical forms was beyond the scope of this project ccgbank can also be seen as a resource which may enable the automatic construction of full semantic interpretations by 
widecoverage parsersunlike most penn treebank parsers such as collins or charniak these ccgbank parsers return not only syntactic derivations but also local and longrange dependencies including those that arise under relativization and coordinationalthough these dependencies are only an approximation of the full semantic interpretation that can in principle be obtained from a ccg they may prove useful for tasks such as summarization and question answering furthermore bos et al and bos have demonstrated that the output of ccgbank parsers can be successfully translated into kamp and reyles discourse representation theory structures to support question answering and the textual entailment task we hope that these results can be ported to other corpora and other similarly expressive grammar formalismswe also hope that our experiences will be useful in designing guidelines for future treebanksalthough implementational details will differ across formalisms similar problems and questions to those that arose in our work will be encountered in any attempt to extract expressive grammars from annotated corporabecause ccgbank preserves most of the linguistic information in the treebank in a somewhat less noisy form we hope that others will find it directly helpful for inducing grammars and statistical parsing models for other linguistically expressive formalismsthere are essentially three ways in which this might workfor lexicalized grammars it may in some cases be possible to translate the subcategorization frames in the ccg lexicon directly into the target theoryfor typelogical grammars this is little more than a matter of transducing the syntactic types for the lexicon into the appropriate notationfor formalisms like ltag the relation is more complex but the work of joshi and kulick who unfold ccg categories into tag elementary trees via partial proof trees and shen and joshi who define ltag spines that resemble categories suggest that this is possibletransduction into hpsg signs is less obvious but also seems possible in principlea second possibility is to transduce ccgbank itself into a form appropriate to the target formalismthere seems to be a similar ordering over alternative formalisms from straightforward to less straightforward for this approachwe would also expect that dependency grammars melˇcuk and pertsov 1987 hudson 1984 and parsers could be trained and tested with little extra work on the dependencies in ccgbankfinally we believe that existing methods for translating the penn treebank from scratch into other grammar formalisms will benefit from including preprocessing similar to that described hereas some indication of the relative ease with which these techniques transfer we offer the observation that the 900kword german tiger dependency corpus has recently been translated into ccg using very similar techniques by hockenmaier and c akıcı has derived a turkish lexicon from the a similarly preprocessed version of the metusabancı turkish dependency treebank a fundamental assumption behind attempts at the automatic translation of syntactically annotated corpora into different grammatical formalisms such as ccg tag hpsg or lfg is that the analyses that are captured in the original annotation can be mapped directly into the desired analyses in the target formalismthis can only hold if all constructions that are treated in a similar manner in the original corpus are also treated in a similar manner in the target formalismfor the penn treebank our research and the work of others have shown 
that such a correspondence exists in most casesalthough the output of most current treebank parsers is linguistically impoverished the treebank annotation itself is notit is precisely the linguistic richness and detail of the original annotationin particular the additional information present in the null elements and function tags that are ignored by most other parsersthat has made the creation of ccgbank possiblethe translation process would have been easier if some of the annotation had been more explicit and precise and consistent an important conclusion that follows for the builders of future treebanks is that the tradition established by the penn treebank of including all linguistically relevant dependencies should be continued with if anything even closer adherence to semantically informed linguistic insights into predicateargument structural relationsour results also indicate that corpora of at least the order of magnitude of the penn treebank are necessary to obtain grammars and parsers that are sufficiently expressive robust and wide in coverage to recover these relations completelywe would like to thank our colleagues in edinburgh and philadelphiain particular jason baldridge johan bos stephen clark james curran michael white mitch marcus ann bies martha palmer and aravind joshifor numerous conversations and feedback on the corpuswe would also like to thank the linguistic data consortium for their help in publishing ccgbank and the computational linguistics reviewers for their extensive comments on earlier versions of this paperwe gratefully acknowledge the financial support provided by epsrc grant grm96889jh also acknowledges support by an epsrc studentship and the edinburgh language technology group and by nsf itr grant 0205456 at the university of pennsylvaniamjs acknowledges support from the scottish enterprise edinburghstanford link and eu ist grant pacoplus
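As an illustration of the kind of tree manipulation the translation algorithm described above performs, the sketch below implements two of its simpler steps over Penn-Treebank-style trees encoded as nested Python lists: cutting out null-element subtrees together with the unary X -> X projections this leaves behind (cuttracesandunaryrules), and binarizing flat constituents (makebinary). This is a minimal sketch under assumed data structures, not the CCGbank code: trace percolation is ignored, and the binarization here is plain right-branching rather than the head-driven binarization the algorithm actually uses.

# Minimal sketch (not the CCGbank implementation).
# Trees are [label, child1, child2, ...]; a leaf is [pos_tag, word_string].

def is_leaf(tree):
    return len(tree) == 2 and isinstance(tree[1], str)

def cut_null_elements(tree):
    """Remove -NONE- leaves (traces, null elements) and collapse the
    unary X -> X projections that removing them leaves behind."""
    if is_leaf(tree):
        return None if tree[0] == "-NONE-" else tree
    kept = [c for c in (cut_null_elements(child) for child in tree[1:]) if c]
    if not kept:
        return None
    if len(kept) == 1 and kept[0][0] == tree[0]:   # collapse X -> X
        return kept[0]
    return [tree[0]] + kept

def binarize(tree):
    """Turn an n-ary constituent into nested binary constituents.
    For simplicity this is right-branching; the real algorithm
    binarizes around the head so that the head combines with one
    argument or adjunct at a time."""
    if is_leaf(tree):
        return tree
    children = [binarize(c) for c in tree[1:]]
    while len(children) > 2:
        children = children[:-2] + [[tree[0], children[-2], children[-1]]]
    return [tree[0]] + children

example = ["S",
           ["NP-SBJ", ["-NONE-", "*T*-1"]],
           ["VP", ["VBD", "dropped"],
                  ["NP", ["DT", "the"], ["NN", "case"]]]]
print(binarize(cut_null_elements(example)))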
J07-3004
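The lexicon and coverage statistics reported above (lexical category types, entries per word, and the 94.0% lexical coverage of section 00 split into unknown words versus known words with missing categories) come down to simple counting over (word, category) pairs. The sketch below shows one way to compute them; the input format and function names are assumptions for illustration, not the evaluation scripts actually used.

from collections import Counter, defaultdict

def build_lexicon(pairs):
    """Map each word to the set of lexical categories it was seen with,
    and count how often each category type occurs."""
    lexicon = defaultdict(set)
    cat_freq = Counter()
    for word, cat in pairs:
        lexicon[word].add(cat)
        cat_freq[cat] += 1
    return lexicon, cat_freq

def lexical_coverage(lexicon, test_pairs):
    """Fraction of test tokens whose gold category is in the training
    lexicon, splitting the misses into unknown words and known words
    that lack the required category."""
    covered = unknown_word = missing_cat = 0
    for word, cat in test_pairs:
        if word not in lexicon:
            unknown_word += 1
        elif cat in lexicon[word]:
            covered += 1
        else:
            missing_cat += 1
    return covered / len(test_pairs), unknown_word, missing_cat

# Toy illustration; the real input would be sections 02-21 and 00 of CCGbank.
train = [("the", "NP[nb]/N"), ("fund", "N"), ("reached", "(S[dcl]\\NP)/NP")]
test = [("the", "NP[nb]/N"), ("agreement", "N"), ("reached", "S[pss]\\NP")]
lexicon, cat_freq = build_lexicon(train)
print(lexical_coverage(lexicon, test))
print(sum(1 for f in cat_freq.values() if f == 1), "categories seen only once")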
ccgbank a corpus of ccg derivations and dependency structures extracted from the penn treebank. this article presents an algorithm for translating the penn treebank into a corpus of combinatory categorial grammar derivations augmented with local and longrange wordword dependencies. the resulting corpus ccgbank includes 99.4% of the sentences in the penn treebank. it is available from the linguistic data consortium and has been used to train wide coverage statistical parsers that obtain stateoftheart rates of dependency recovery. in order to obtain linguistically adequate ccg analyses and to eliminate noise and inconsistencies in the original annotation an extensive analysis of the constructions and annotations in the penn treebank was called for and a substantial number of changes to the treebank were necessary. we discuss the implications of our findings for the extraction of other linguistically expressive grammars from the treebank and for the design of future treebanks. the ccgbankstyle dependency is a directed graph of headchild relations labelled with the head lexical category and the argument slot filled by the child. ccgbank is a corpus of ccg derivations that was semiautomatically converted from the wall street journal section of the penn treebank
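Both the corpus construction summarized above and the parsing work that follows rest on a small set of combinatory rules. The sketch below is a minimal, illustrative encoding of categories with forward application, forward composition, and forward type-raising, deriving the S[dcl]/NP constituent that an object relative pronoun of category (NP\NP)/(S[dcl]/NP) consumes. It is not the representation used by either system; features beyond [dcl], heads, and dependency filling are all omitted.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Cat:
    """A CCG category: atomic if slash is None, otherwise result <slash> arg."""
    name: Optional[str] = None
    result: Optional["Cat"] = None
    slash: Optional[str] = None      # "/" or "\\"
    arg: Optional["Cat"] = None

    def __str__(self):
        if self.slash is None:
            return self.name
        return f"({self.result}{self.slash}{self.arg})"

def fapply(x, y):
    """Forward application: X/Y  Y  =>  X"""
    if x.slash == "/" and x.arg == y:
        return x.result
    return None

def fcompose(x, y):
    """Forward composition: X/Y  Y/Z  =>  X/Z"""
    if x.slash == "/" and y.slash == "/" and x.arg == y.result:
        return Cat(result=x.result, slash="/", arg=y.arg)
    return None

def traise(x, t):
    """Forward type-raising: X  =>  T/(T\\X)"""
    return Cat(result=t, slash="/", arg=Cat(result=t, slash="\\", arg=x))

NP = Cat("NP")
Sdcl = Cat("S[dcl]")
tv = Cat(result=Cat(result=Sdcl, slash="\\", arg=NP), slash="/", arg=NP)  # e.g. bought
relpro = Cat(result=Cat(result=NP, slash="\\", arg=NP), slash="/",
             arg=Cat(result=Sdcl, slash="/", arg=NP))                     # e.g. that

subj = traise(NP, Sdcl)             # subject NP  =>  S[dcl]/(S[dcl]\NP)
missing_obj = fcompose(subj, tv)    # S[dcl]/NP, a clause missing its object
print(missing_obj, fapply(relpro, missing_obj))   # -> (S[dcl]/NP) (NP\NP)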
widecoverage efficient statistical parsing with ccg and loglinear models this article describes a number of loglinear parsing models for an automatically extracted lexicalized grammar the models are full parsing models in the sense that probabilities are defined for complete parses rather than for independent events derived by decomposing the parse tree discriminative training is used to estimate the models which requires incorrect parses for each sentence in the training data as well as the correct parse the lexicalized grammar formalism used is combinatory categorial grammar and the grammar is automatically extracted from ccgbank a ccg version of the penn treebank the combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement which is satisfied using a parallel implementation of the bfgs optimization algorithm running on a beowulf cluster dynamic programming over a packed chart in combination with the parallel implementation allows us to solve one of the largestscale estimation problems in the statistical parsing literature in under three hours a key component of the parsing system for both training and testing is a maximum entropy supertagger which assigns ccg lexical categories to words in a sentence the supertagger makes the discriminative training feasible and also leads to a highly efficient parser surprisingly given ccgs spurious ambiguity the parsing speeds are significantly higher than those reported for comparable parsers in the literature we also extend the existing parsing techniques for ccg by developing a new model and efficient parsing algorithm which exploits all derivations including ccgs nonstandard derivations this model and parsing algorithm when combined with normalform constraints give stateoftheart accuracy for the recovery of predicateargument dependencies from ccgbank the parser is also evaluated on depbank and compared against the rasp parser outperforming rasp overall and on the majority of relation types the evaluation on depbank raises a number of issues regarding parser evaluation this article provides a comprehensive blueprint for building a widecoverage ccg parser we demonstrate that both accurate and highly efficient parsing is possible with ccg this article describes a number of loglinear parsing models for an automatically extracted lexicalized grammarthe models are full parsing models in the sense that probabilities are defined for complete parses rather than for independent events derived by decomposing the parse treediscriminative training is used to estimate the models which requires incorrect parses for each sentence in the training data as well as the correct parsethe lexicalized grammar formalism used is combinatory categorial grammar and the grammar is automatically extracted from ccgbank a ccg version of the penn treebankthe combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement which is satisfied using a parallel implementation of the bfgs optimization algorithm running on a beowulf clusterdynamic programming over a packed chart in combination with the parallel implementation allows us to solve one of the largestscale estimation problems in the statistical parsing literature in under three hoursa key component of the parsing system for both training and testing is a maximum entropy supertagger which assigns ccg lexical categories to words in a sentencethe supertagger makes the discriminative training feasible and also 
leads to a highly efficient parsersurprisingly given ccgs spurious ambiguity the parsing speeds are significantly higher than those reported for comparable parsers in the literaturewe also extend the existing parsing techniques for ccg by developing a new model and efficient parsing algorithm which exploits all derivations including ccgs nonstandard derivationsthis model and parsing algorithm when combined with normalform constraints give stateoftheart accuracy for the recovery of predicateargument dependencies from ccgbankthe parser is also evaluated on depbank and compared against the rasp parser outperforming rasp overall and on the majority of relation typesthe evaluation on depbank raises a number of issues regarding parser evaluationthis article provides a comprehensive blueprint for building a widecoverage ccg parserwe demonstrate that both accurate and highly efficient parsing is possible with ccgloglinear models have been applied to a number of problems in nlp for example pos tagging named entity recognition chunking and parsing loglinear models are also referred to as maximum entropy models and random fields in the nlp literaturethey are popular because of the ease with which complex discriminating features can be included in the model and have been shown to give good performance across a range of nlp tasksloglinear models have previously been applied to statistical parsing but typically under the assumption that all possible parses for a sentence can be enumeratedfor manually constructed grammars this assumption is usually sufficient for efficient estimation and decodinghowever for widecoverage grammars extracted from a treebank enumerating all parses is infeasiblein this article we apply the dynamic programming method of miyao and tsujii to a packed chart however because the grammar is automatically extracted the packed charts require a considerable amount of memory up to 25 gbwe solve this massive estimation problem by developing a parallelized version of the estimation algorithm which runs on a beowulf clusterthe lexicalized grammar formalism we use is combinatory categorial grammar a number of statistical parsing models have recently been developed for ccg and used in parsers applied to newspaper text in this article we extend existing parsing techniques by developing loglinear models for ccg as well as a new model and efficient parsing algorithm which exploits all ccgs derivations including the nonstandard onesestimating a loglinear model involves computing expectations of feature valuesfor the conditional loglinear models used in this article computing expectations requires a sum over all derivations for each sentence in the training databecause there can be a massive number of derivations for some sentences enumerating all derivations is infeasibleto solve this problem we have adapted the dynamic programming method of miyao and tsujii to packed ccg chartsa packed chart efficiently represents all derivations for a sentencethe dynamic programming method uses inside and outside scores to calculate expectations similar to the insideoutside algorithm for estimating the parameters of a pcfg from unlabeled data generalized iterative scaling is a common choice in the nlp literature for estimating a loglinear model initially we used generalized iterative scaling for the parsing models described here but found that convergence was extremely slow sha and pereira present a similar finding for globally optimized loglinear models for sequencesas an alternative to gis we use the 
limitedmemory bfgs algorithm as malouf demonstrates general purpose numerical optimization algorithms such as bfgs can converge much faster than iterative scaling algorithms despite the use of a packed representation the complete set of derivations for the sentences in the training data requires up to 25 gb of ram for some of the models in this articlethere are a number of ways to solve this problempossibilities include using a subset of the training data repeatedly parsing the training data for each iteration of the estimation algorithm or reading the packed charts from disk for each iterationthese methods are either too slow or sacrifice parsing performance and so we use a parallelized version of bfgs running on an 18node beowulf cluster to perform the estimationeven given the large number of derivations and the large feature sets in our models the estimation time for the bestperforming model is less than three hoursthis gives us a practical framework for developing a statistical parsera corollary of ccgs basegenerative treatment of longrange dependencies in relative clauses and coordinate constructions is that the standard predicateargument relations can be derived via nonstandard surface derivationsthe addition of spurious derivations in ccg complicates the modeling and parsing problemsin this article we consider two solutionsthe first following hockenmaier is to define a model in terms of normalform derivations in this approach we recover only one derivation leading to a given set of predicateargument dependencies and ignore the restthe second approach is to define a model over the predicateargument dependencies themselves by summing the probabilities of all derivations leading to a given set of dependencieswe also define a new efficient parsing algorithm for such a model based on goodman which maximizes the expected recall of dependenciesthe development of this model allows us to test for the purpose of selecting the correct predicateargument dependencies whether there is useful information in the additional derivationswe also compare the performance of our best loglinear model against existing ccg parsers obtaining the highest results to date for the recovery of predicate argument dependencies from ccgbanka key component of the parsing system is a maximum entropy ccg supertagger which assigns lexical categories to words in a sentencethe role of the supertagger is twofoldfirst it makes discriminative estimation feasible by limiting the number of incorrect derivations for each training sentence the supertagger can be thought of as supplying a number of incorrect but plausible lexical categories for each word in the sentencesecond it greatly increases the efficiency of the parser which was the original motivation for supertagging one possible criticism of ccg has been that highly efficient parsing is not possible because of the additional spurious derivationsin fact we show that a novel method which tightly integrates the supertagger and parser leads to parse times significantly faster than those reported for comparable parsers in the literaturethe parser is evaluated on ccgbank in order to facilitate comparisons with parsers using different formalisms we also evaluate on the publicly available depbank using the briscoe and carroll annotation consistent with the rasp parser the dependency annotation is designed to be as theoryneutral as possible to allow easy comparisonhowever there are still considerable difficulties associated with a crossformalism comparison which we describeeven 
though the ccg dependencies are being mapped into another representation the accuracy of the ccg parser is over 81 fscore on labeled dependencies against an upper bound of 848the ccg parser also outperforms rasp overall and on the majority of dependency typesthe contributions of this article are as followsfirst we explain how to estimate a full loglinear parsing model for an automatically extracted grammar on a scale as large as that reported anywhere in the nlp literaturesecond the article provides a comprehensive blueprint for building a widecoverage ccg parser including theoretical and practical aspects of the grammar the estimation process and decodingthird we investigate the difficulties associated with crossformalism parser comparison evaluating the parser on depbankand finally we develop new models and decoding algorithms for ccg and give a convincing demonstration that through use of a supertagger highly efficient parsing is possible with ccgthe first application of loglinear models to parsing is the work of ratnaparkhi and colleagues similar to della pietra della pietra and lafferty ratnaparkhi motivates loglinear models from the perspective of maximizing entropy subject to certain constraintsratnaparkhi models the various decisions made by a shiftreduce parser using loglinear distributions defined over features of the local context in which a decision is madethe probabilities of each decision are multiplied together to give a score for the complete sequence of decisions and beam search is used to find the most probable sequence which corresponds to the most probable derivationa different approach is proposed by abney who develops loglinear models for attributevalue grammars such as headdriven phrase structure grammar rather than define a model in terms of parser moves abney defines a model directly over the syntactic structures licensed by the grammaranother difference is that abney uses a global model in which a single loglinear model is defined over the complete space of attributevalue structuresabneys motivation for using loglinear models is to overcome various problems in applying models based on pcfgs directly to attributevalue grammarsa further motivation for using global models is that these do not suffer from the label bias problem which is a potential problem for ratnaparkhis approachabney defines the following model for a syntactic analysis w where fi is a feature or feature function and βi is its corresponding weight z is a normalizing constant also known as the partition functionin much work using loglinear models in nlp including ratnaparkhis the features of a model are indicator functions which take the value 0 or 1however in abneys models and in the models used in this article the feature functions are integer valued and count the number of times some feature appears in a syntactic analysis1 abney calls the feature functions frequency functions and like abney we will not always distinguish between a feature and its corresponding frequency functionthere are practical difficulties with abneys proposal in that finding the maximumlikelihood solution during estimation involves calculating expectations of feature values which are sums over the complete space of possible analysesabney suggests a metropolishastings sampling procedure for calculating the expectations but does not experiment with an implementationjohnson et al propose an alternative solution which is to maximize the conditional likelihood functionin this case the likelihood function is the product of the 
conditional probabilities of the syntactic analyses in the data each probability conditioned on the respective sentencethe advantage of this method is that calculating the conditional feature expectations only requires a sum over the syntactic analyses for the sentences in the training datathe conditionallikelihood estimator is also consistent for the conditional distributions the same solution is arrived at by della pietra della pietra and lafferty via a maximum entropy argumentanother feature of johnson et als approach is the use of a gaussian prior term to avoid overfitting which involves adding a regularization term to the likelihood function the regularization term penalizes models whose weights get too large in absolute valuethis smoothing method for loglinear models is also proposed by chen and rosenfeld calculating the conditional feature expectations can still be problematic if the grammar licenses a large number of analyses for some sentencesthis is not a problem for johnson et al because their grammars are handwritten and constraining enough to allow the analyses for each sentence to be enumeratedhowever for grammars with wider coverage it is often not possible to enumerate the analyses for each sentence in the training dataosborne investigates training on a sample of the analyses for each sentence for example the topn most probable according to some other probability model or simply a random samplethe ccg grammar used in this article is automatically extracted has wide coverage and can produce an extremely large number of derivations for some sentences far too many to enumeratewe adapt the featureforest method of miyao and tsujii which involves using dynamic programming to efficiently calculate the feature expectationsgeman and johnson propose a similar method in the context of lfg parsing an implementation is described in kaplan et al miyao and tsujii have carried out a number of investigations similar to the work in this articlein miyao and tsujii loglinear models are developed for automatically extracted grammars for lexicalized tree adjoining grammar and head driven phrase structure grammar one of miyao and tsujiis motivations is to model predicateargument dependencies including longrange dependencies which was one of the original motivations of the widecoverage ccg parsing projectmiyao and tsujii present another loglinear model for an automatically extracted ltag which uses a simple unigram model of the elementary trees together with a loglinear model of the attachmentsmiyao and tsujii address the issue of practical estimation using an automatically extracted hpsg grammara simple unigram model of lexical categories is used to limit the size of the charts for training in a similar way to how we use a ccg supertagger to restrict the size of the chartsthe main differences between miyao and tsujiis work and ours aside from the different grammar formalisms are as followsthe ccg supertagger is a key component of our parsing systemit allows practical estimation of the loglinear models as well as highly efficient parsingthe maximum entropy supertagger we use could also be applied to miyao and tsujiis grammars although whether similar performance would be obtained depends on the characteristics of the grammar see subsequent sections for more discussion of this issue in relation to ltagthe second major difference is in our use of a cluster and parallelized estimation algorithmwe have found that significantly increasing the size of the parse space available for discriminative 
estimation which is possible on the cluster improves the accuracy of the resulting parseranother advantage of parallelization as discussed in section 55 is the reduction in estimation timeagain our parallelization techniques could be applied to miyao and tsujiis frameworkmalouf and van noord present similar work to ours in the context of an hpsg grammar for dutchone similarity is that their parsing system uses an hmm tagger before parsing similar to our supertaggerone difference is that we use a maximum entropy tagger which allows more flexibility in terms of the features that can be encoded for example we have found that using penn treebank pos tags as features significantly improves supertagging accuracyanother difference is that malouf and van noord use the random sampling method of osborne to allow practical estimation whereas we construct the complete parse forest but use the supertagger to limit the size of the chartstheir work is also on a somewhat smaller scale with the dutch alpino treebank containing 7100 sentences compared with the 36000 sentences we use for trainingkaplan et al present similar work to ours in the context of an lfg grammar for englishthe main difference is that the lfg grammar is handbuilt resulting in less ambiguity than an automatically extracted grammar and thus requiring fewer resources for model estimationone downside of handbuilt grammars is that they are typically less robust which kaplan et al address by developing a fragment grammar together with a skimming mode which increases coverage on section 23 of the penn treebank from 80 to 100kaplan et al also present speed figures for their parser comparing with the collins parsercomparing parser speeds is difficult because of implementation and accuracy differences but their highest reported speed is around 2 sentences per second on sentences from section 23the parse speeds that we present in section 103 are an order of magnitude highermore generally the literature on statistical parsing using linguistically motivated grammar formalisms is large and growingstatistical parsers have been developed for tag lfg and hpsg among othersthe motivation for using these formalisms is that many nlp tasks such as machine translation information extraction and question answering could benefit from the more sophisticated linguistic analyses they providethe formalism most closely related to ccg from this list is tagtag grammars have been automatically extracted from the penn treebank using techniques similar to those used by hockenmaier also the supertagging idea which is central to the efficiency of the ccg parser originated with tag chen et al describe the results of reranking the output of an hmm supertagger using an automatically extracted ltagthe accuracy for a single supertag per word is slightly over 80this figure is increased to over 91 when the tagger is run in nbest mode but at a considerable cost in ambiguity with 8 supertags per wordnasr and rambow investigate the potential impact of ltag supertagging on parsing speed and accuracy by performing a number of oracle experimentsthey find that with the perfect supertagger extremely high parsing accuracies and speeds can be obtainedinterestingly the accuracy of ltag supertaggers using automatically extracted grammars is significantly below the accuracy of the ccg supertaggerone possible way to increase the accuracy of ltag supertagging is to use a maximum entropy rather than hmm tagger but this is likely to result in an improvement of only a few percentage pointsthus 
whether the difference in supertagging accuracy is due to the nature of the formalisms the supertagging methods used or properties of the extracted grammars is an open questionrelated work on statistical parsing with ccg will be described in section 3combinatory categorial grammar is a typedriven lexicalized theory of grammar based on categorial grammar ccg lexical clark and curran widecoverage efficient statistical parsing entries consist of a syntactic category which defines valency and directionality and a semantic interpretationin this article we are concerned with the syntactic component see steedman for how a semantic interpretation can be composed during a syntactic derivation and also bos et al for how semantic interpretations can be built for newspaper text using the widecoverage parser described in this articlecategories can be either basic or complexexamples of basic categories are s n np and pp complex categories are built recursively from basic categories and indicate the type and directionality of arguments and the type of the resultfor example the following category for the transitive verb bought specifies its first argument as a noun phrase to its right its second argument as a noun phrase to its left and its result as a sentence in the theory of ccg basic categories are regarded as complex objects that include syntactic features such as number gender and casefor the grammars in this article categories are augmented with some additional information such as head information and also features on s categories which distinguish different types of sentence such as declarative infinitival and whquestionthis additional information will be described in later sectionscategories are combined in a derivation using combinatory rulesin the original categorial grammar which is contextfree there are two rules of functional application where x and y denote categories the first rule is forward application and the second rule is backward application forward composition is often used in conjunction with typeraising as in figure 2in this case typeraising takes a subject noun phrase and turns it into a functor looking to the right for a verb phrase the fund is then able to combine with reached using forward composition giving the fund reached the category sdclnp it is exactly this type of constituent which the object relative pronoun category is looking for to its right note that the fund reached is a perfectly reasonable constituent in ccg having the type sdclnpthis allows analyses for sentences such as the fund reached but investors disagreed with the agreement even though this construction is often described as nonconstituent coordination in this example the fund reached and investors disagreed with have the same type allowing them to be coordinated resulting in the fund reached but investors disagreed with having the type sdclnpnote also that it is this flexible notion of constituency which leads to socalled spurious ambiguity because even the simple sentence the fund reached an agreement will have more than one derivation with each derivation leading to the same set of predicateargument dependenciesforward composition is generalized to allow additional arguments to the right of the z category in for example the following combination allows analysis of sentences such as i offered and may give a flower to a policeman may give ppnp this example shows how the categories for may and give combine resulting in a category of the same type as offered which can then be coordinatedsteedman gives a 
more precise definition of generalized forward compositionfurther combinatory rules in the theory of ccg include backward composition and backward crossed composition clark and curran widecoverage efficient statistical parsing backward composition provides an analysis for sentences involving argument cluster coordination such as i gave a teacher an apple and a policeman a flower backward crossed composition is required for heavy np shift and coordinations such as i shall buy today and cook tomorrow the mushroomsin this coordination example from steedman backward crossed composition is used to combine the categories for buy np and today and similarly for cook and tomorrow producing categories of the same type which can be coordinatedthis rule is also generalized in an analogous way to forward compositionfinally there is a coordination rule which conjoins categories of the same type producing a further category of that typethis rule can be implemented by assuming the following category schema for a coordination term x where x can be any categoryall of the combinatory rules described above are implemented in our parserother combinatory rules such as substitution have been suggested in the literature to deal with certain linguistic phenomena but we chose not to implement themthe reason is that adding new combinatory rules reduces the efficiency of the parser and we felt that in the case of substitution for example the small gain in grammatical coverage was not worth the reduction in speedsection 93 discusses some of the choices we made when implementing the grammarone way of dealing with the additional ambiguity in ccg is to only consider normalform derivationsinformally a normalform derivation is one which uses typeraising and composition only when necessaryeisner describes a technique for eliminating spurious ambiguity entirely by defining exactly one normalform derivation for each semantic equivalence class of derivationsthe idea is to restrict the combination of categories produced by composition more specifically any constituent which is the result of a forward composition cannot serve as the primary functor in another forward composition or forward applicationsimilarly any constituent which is the result of a backward composition cannot serve as the primary functor in another backward composition or backward applicationeisner only deals with a grammar without typeraising and so the constraints cannot guarantee a normalform derivation when applied to the grammars used in this articlehowever the constraints can still be used to significantly reduce the parsing spacesection 93 describes the various normalform constraints used in our experimentsa recent development in the theory of ccg is the multimodal treatment given by baldridge and baldridge and kruijff following the typelogical approaches to categorial grammar one possible extension to the parser and grammar described in this article is to incorporate the multimodal approach baldridge suggests that as well as having theoretical motivation a multimodal approach can improve the efficiency of ccg parsingccg was designed to deal with the longrange dependencies inherent in certain constructions such as coordination and extraction and arguably provides the most linguistically satisfactory account of these phenomenalongrange dependencies are relatively common in text such as newspaper text but are typically not recovered by treebank parsers such as collins and charniak this has led to a number of proposals for postprocessing the output of the 
collins and charniak parsers in which trace sites are located and the antecedent of the trace determined an advantage of using ccg is that the recovery of longrange dependencies can be integrated into the parsing process in a straightforward manner rather than be relegated to such a postprocessing phase another advantage of ccg is that providing a compositional semantics for the grammar is relatively straightforwardit has a completely transparent interface between syntax and semantics and because ccg is a lexicalized grammar formalism providing a compositional semantics simply involves adding semantic representations to the lexical entries and interpreting the small number of combinatory rulesbos et al show how this can be done for the grammar and parser described in this articleof course some of these advantages could be obtained with other grammar formalisms such as tag lfg and hpsg although ccg is especially wellsuited to analysing coordination and longrange dependenciesfor example the analysis of nonconstituent coordination described in the previous section is as far as we know unique to ccgfinally the lexicalized nature of ccg has implications for the engineering of a widecoverage parserlater we show that use of a supertagger prior to parsing can produce an extremely efficient parserthe supertagger uses statistical sequence tagging techniques to assign a small number of lexical categories to each word in the sentencebecause there is so much syntactic information in lexical categories the parser is required to do less work once the lexical categories have been assigned hence srinivas and joshi in the context of tag refer to supertagging as almost parsingthe parser is able to parse 20 wall street journal sentences per second on standard hardware using our bestperforming model which compares very favorably with other parsers using linguistically motivated grammarsa further advantage of the supertagger is that it can be used to reduce the parse space for estimation of the loglinear parsing modelsby focusing on those parses which result from the most probable lexical category sequences we are able to perform effective discriminative training without considering the complete parse space which for most sentences is prohibitively largethe idea of supertagging originated with ltag however in contrast to the ccg grammars used in this article the automatically extracted ltag grammars have as yet been too large to enable effective supertagging we are not aware of any other work which has demonstrated the parsing efficiency benefits of supertagging using an automatically extracted grammarthe work in this article began as part of the edinburgh widecoverage ccg parsing project there has been some other work on defining stochastic categorial grammars but mainly in the context of grammar learning an early attempt from the edinburgh project at widecoverage ccg parsing is presented in clark hockenmaier and steedman in order to deal with the problem of the additional nonstandard ccg derivations a conditional model of dependency structures is presented based on collins in which the dependencies are modeled directly and derivations are not modeled at allthe conditional probability of a dependency structure π given a sentence s is factored into two partsthe first part is the probability of the lexical category sequence c and the second part is the dependency structure d giving p ppintuitively the category sequence is genclark and curran widecoverage efficient statistical parsing erated first conditioned on 
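Written out in the notation of the surrounding text (a dependency structure pi treated as a lexical category sequence C plus a set of dependency links D), the factorization just described is:

\[
P(\pi \mid S) \;=\; P(C \mid S) \cdot P(D \mid C, S)
\]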
the sentence and then attachment decisions are made to form the dependency linksthe probability of the category sequence is estimated using a maximum entropy model following the supertagger described in clark the probabilities of the dependencies are estimated using relative frequencies following collins the model was designed to include some longrange predicateargument dependencies as well as local dependencieshowever there are a number of problems with the model as the authors acknowledgefirst the model is deficient losing probability mass to dependency structures not generated by the grammarsecond the relative frequency estimation of the dependency probabilities is ad hoc and cannot be seen as maximum likelihood estimation or some other principled methoddespite these flaws the parser based on this model was able to recover ccg predicateargument dependencies at around 82 overall fscore on unseen wsj texthockenmaier and hockenmaier and steedman present a generative model of normalform derivations based on various techniques from the statistical parsing literature a ccg binary derivation tree is generated topdown with the probability of generating particular child nodes being conditioned on some limited context from the previously generated structurehockenmaiers parser uses rule instantiations read off ccgbank and some of these will be instances of typeraising and composition hence the parser can produce nonnormalform derivationshowever because the parsing model is estimated over normalform derivations any nonnormalform derivations will receive low probabilities and are unlikely to be returned as the most probable parsehockenmaier compares a number of generative models starting with a baseline model based on a pcfgvarious extensions to the baseline are considered increasing the amount of lexicalization generating a lexical category at its maximal projection conditioning the probability of a rule instantiation on the grandparent node adding features designed to deal with coordination and adding distance to the dependency featuressome of these extensions such as increased lexicalization and generating a lexical category at its maximal projection improved performance whereas others such as the coordination and distance features reduced performancehockenmaier conjectures that the reduced performance is due to the problem of data sparseness which becomes particularly severe for the generative model when the number of features is increasedthe best performing model outperforms that of clark hockenmaier and steedman recovering ccg predicateargument dependencies with an overall fscore of around 84 using a similar evaluationhockenmaier presents another generative model of normalform derivations which is based on the dependencies in the predicateargument structure including longrange dependencies rather than the dependencies defined by the local trees in the derivationhockenmaier also argues that compared to hockenmaier and steedman the predicateargument model is better suited to languages with freer word order than englishthe model was also designed to test whether the inclusion of predicateargument dependencies improves parsing accuracyin fact the results given in hockenmaier are lower than previous resultshowever hockenmaier reports that the increased complexity of the model reduces the effectiveness of the dynamic programming used in the parser and hence a more aggressive beam search is required to produce reasonable parse timesthus the reduced accuracy could be due to implementation difficulties 
rather than the model itselfthe use of conditional loglinear models in this article is designed to overcome some of the weaknesses identified in the approach of clark hockenmaier and steedman and to offer a more flexible framework for including features than the generative models of hockenmaier for example adding longrange dependency features to the loglinear model is straightforwardwe also showed in clark and curran that in contrast with hockenmaier adding distance to the dependency features in the loglinear model does improve parsing accuracyanother feature of conditional loglinear models is that they are trained discriminatively by maximizing the conditional probability of each goldstandard parse relative to the incorrect parses for the sentencegenerative models in contrast are typically trained by maximizing the joint probability of the pairs even though the sentence does not need to be inferredthe treebank used in this article performs two roles it provides the lexical category set used by the supertagger plus some unary typechanging rules and punctuation rules used by the parser and it is used as training data for the statistical modelsthe treebank is ccgbank a ccg version of the penn treebank penn treebank conversions have also been carried out for other linguistic formalisms including tag lfg and hpsg ccgbank was created by converting the phrasestructure trees in the penn treebank into ccg normalform derivationssome preprocessing of the phrasestructure trees was required in order to allow the correct ccg analyses for some constructions such as coordinationhockenmaier gives a detailed description of the procedure used to create ccgbankfigure 3 shows an example normalform derivation for an ccgbank sentencethe derivation has been inverted so that it is represented as a binary treesentence categories in ccgbank carry features such as dcl for declarative wq for whquestions and for for small clauses headed by for see hockenmaier for the complete lists categories also carry features in verb phrases for example sbnp clark and curran widecoverage efficient statistical parsing is a bareinfinitive stonp is a toinfinitive spssnp is a past participle in passive modenote that whenever an s or snp category is modified any feature on the s is carried through to the result category this is true in our parser alsofinally determiners specify that the resulting noun phrase is nonbare npnbn although this feature is largely ignored by the parser described in this articleas well as instances of the standard ccg combinatory rulesforward and backward application forward and backward composition backwardcrossed composition typeraising coordination of like typesccgbank contains a number of unary typechanging rules and rules for dealing with punctuationthe typechanging rules typically change a verb phrase into a modifierthe following examples taken from hockenmaier demonstrate the most common rulesthe bracketed expression has the typechanging rule applied to it the millions of dollars it generates another common typechanging rule in ccgbank which appears in figure 3 changes a noun category n into a noun phrase npappendix a lists the unary typechanging rules used by our parserthere are also a number of rules in ccgbank for absorbing punctuationfor example figure 3 contains a rule which takes a comma followed by a declarative sentence and returns a declarative sentence there are a number of similar comma rules for other categoriesthere are also similar punctuation rules for semicolons colons and bracketsthere 
is also a rule schema which treats a comma as a coordination appendix a contains the complete list of punctuation rules used in the parsera small number of local trees in ccgbankconsisting of a parent and one or two childrendo not correspond to any of the ccg combinatory rules or the typechanging rules or punctuation rulesthis is because some of the phrase structure subtrees in the penn treebank are difficult to convert to ccg combinatory rules and because of noise introduced by the treebank conversion processdependency structures perform two roles in this articlefirst they are used for parser evaluation the accuracy of a parsing model is measured using precision and recall over ccg predicateargument dependenciessecond dependency structures form the core of the dependency model probabilities are defined over dependency structures and the parsing algorithm for this model returns the highest scoring dependency structurewe define a ccg dependency structure as a set of ccg predicateargument dependenciesthey are defined as sets rather than multisets because the lexical items in a dependency are considered to be indexed by sentence position this is important for evaluation purposes and for the dependency model determining which derivations lead to a given set of dependencieshowever there are situations where the lexical items need to be considered independently of sentence position for example when defining feature functions in terms of dependenciessuch cases should be clear from the contextwe define ccg predicateargument relations in terms of the argument slots in ccg lexical categoriesthus the transitive verb category np has two predicate argument relations associated with it one corresponding to the object np argument and one corresponding to the subject np argumentin order to distinguish different argument slots the arguments are numbered from left to rightthus the subject relation for a transitive verb is represented as np21the predicateargument dependencies are represented as 5tuples where hf is the lexical item of the lexical category expressing the dependency relation f is the lexical category s is the argument slot ha is the head word of the argument and l encodes whether the dependency is nonlocalfor example the dependency encoding company as the object of bought is represented as follows the subscripts on the lexical items indicate sentence position and the final field indicates that the dependency is a local dependencyhead and dependency information is represented on the lexical categories and dependencies are created during a derivation as argument slots are filledlongrange dependencies are created by passing head information from one category to another using unificationfor example the expanded category for the control verb persuade is the head of the infinitival complements subject is identified with the head of the object using the variable x unification then passes the head of the object to the subject of the infinitival as in standard unificationbased accounts of controlin the current implementation the head and dependency markup depends on the category only and not the lexical itemthis gives semantically incorrect dependencies in some cases for clark and curran widecoverage efficient statistical parsing example the control verbs persuade and promise have the same lexical category which means that promise brooks to go is assigned a structure meaning promise brooks that brooks will gothe kinds of lexical items that use the head passing mechanism are raising auxiliary and control 
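The 5-tuple representation of predicate-argument dependencies described above can be made concrete with a small sketch; the field names and word indices below are illustrative rather than the parser's own.

    from typing import NamedTuple, Optional

    class CCGDep(NamedTuple):
        head: str                # word carrying the lexical category, e.g. "bought_3"
        category: str            # lexical category expressing the relation, e.g. "(S\\NP)/NP"
        slot: int                # argument slot, numbered left to right
        arg: str                 # head word of the argument, e.g. "company_5"
        mediator: Optional[str]  # None for a local dependency; otherwise the category
                                 # that mediated a long-range dependency

    # The object of "bought": slot 2 of (S\NP)/NP filled by "company"
    # (sentence positions here are invented for illustration).
    dep = CCGDep("bought_3", "(S\\NP)/NP", 2, "company_5", None)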
verbs modifiers and relative pronounsamong the constructions that project unbounded dependencies are relativization and right node raisingthe following relative pronoun category shows how heads are coindexed for objectextraction in a sentence such as the company which ibm bought the coindexing will allow company to be returned as the object of bought which is represented using the following dependency the final field indicates the category which mediated the longrange dependency in this case the object relative pronoun categorythe dependency annotation also permits complex categories as argumentsfor example the marked up category for about is if 5000 has the category 5000 the dependency relation marked on the y1 argument in allows the dependency between about and 5000 to be capturedin the current implementation every argument slot in a lexical category corresponds to a dependency relationthis means for example that the parser produces subjects of toinfinitival clauses and auxiliary verbsin the sentence ibm may like to buy lotus ibm will be returned as the subject of may like to and buythe only exception is during evaluation when some of these dependencies are ignored in order to be consistent with the predicateargument dependencies in ccgbank and also depbankin future work we may investigate removing some of these dependencies from the parsing model and the parser outputthis section describes two parsing models for ccgthe first defines the probability of a dependency structure and the secondthe normalform modeldefines the probability of a single derivationin many respects modeling single derivations is simpler than modeling dependency structures as the rest of the article will demonstratehowever there are a number of reasons for modeling dependency structuresfirst for many applications predicateargument dependencies provide a more useful output than derivations and the parser evaluation is over dependencies hence it would seem reasonable to optimize over the dependencies rather than the derivationsecond we want to investigate for the purposes of parse selection whether there is useful information in the nonstandard derivationswe can test this by defining the probability of a dependency structure in terms of all the derivations leading to that structure rather than emphasising a single derivationthus the probability of a dependency structure π given a sentence s is defined as follows where o is the set of derivations which lead to πthis approach is different from that of clark hockenmaier and steedman who define the probability of a dependency structure simply in terms of the dependenciesone reason for modeling derivations in addition to predicateargument dependencies is that derivations may contain useful information for inferring the correct dependency structurefor both the dependency model and the normalform model the probability of a parse is defined using a loglinear formhowever the meaning of parse differs in the two casesfor the dependency model a parse is taken to be a pair as in equation for the normalform model a parse is simply a derivation2 we define a conditional loglinear model of a parse ω q given a sentence s as follows where λ f ei λifithe function fi is the integervalued frequency function of the ith feature λi is the weight of the ith feature and zs is a normalizing constant which ensures that p is a probability distribution where ρ is the set of possible parses for s for the normalform model features are defined over single derivations including local wordword 
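The two definitions referred to above can be written out as follows, using the notation of the surrounding text, with Delta(pi) standing for the set of derivations leading to the dependency structure pi and rho(S) for the set of possible parses for S (the symbol for the derivation set is ours):

\[
P(\pi \mid S) \;=\; \sum_{d \in \Delta(\pi)} P(d, \pi \mid S)
\]
\[
P(\omega \mid S) \;=\; \frac{1}{Z_S}\, e^{\lambda \cdot f(\omega)},
\qquad
\lambda \cdot f(\omega) = \sum_{i} \lambda_i f_i(\omega),
\qquad
Z_S = \sum_{\omega' \in \rho(S)} e^{\lambda \cdot f(\omega')}
\]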
dependencies arising from lexicalized rule instantiationsthe feature set is derived from the goldstandard normalform derivations in ccgbankfor the dependency model features are defined over dependency structures as well as derivations and the feature set is derived from all derivations leading to goldstandard dependency structures including nonstandard derivationssection 7 describes the feature types in more detailfor the dependency model the training data consists of goldstandard dependency structures namely sets of ccg predicateargument dependencies as described earlierwe follow riezler et al in using a discriminative estimation method by maximizing the conditional loglikelihood of the model given the data minus a gaussian prior 2 we could model predicateargument dependencies together with the derivation but we wanted to use features from the derivation only following hockenmaier and steedman clark and curran widecoverage efficient statistical parsing term to prevent overfitting thus given training sentences s1 sm goldstandard dependency structures π1 πm and the definition of the probability of a dependency structure from equation the objective function is where l is the loglikelihood of model λ g is the gaussian prior term and n is the number of featureswe use a single smoothing parameter σ so that σi σ for all i however grouping the features into classes and using a different σ for each class is worth investigating and may improve the resultsoptimization of the objective function whether using iterative scaling or more general numerical optimization methods requires calculation of the gradient of the objective function at each iterationthe components of the gradient vector are as follows the first two terms are expectations of feature fi the second expectation is over all derivations for each sentence in the training data and the first is over only the derivations leading to the goldstandard dependency structure for each sentencethe estimation process attempts to make the expectations in equation equal another way to think of the estimation process is that it attempts to put as much mass as possible on the derivations leading to the goldstandard structures the gaussian prior term prevents overfitting by penalizing any model whose weights get too large in absolute valuethe estimation process can also be thought of in terms of the framework of della pietra della pietra and lafferty because setting the gradient in equation to zero yields the usual maximum entropy constraints namely that the expected value of each feature is equal to its empirical value however in this case the empirical values are themselves expectations over all derivations leading to each goldstandard dependency structurefor the normalform model the training data consists of goldstandard normalform derivationsthe objective function and gradient vector for the normalform model are where dj is the the goldstandard normalform derivation for sentence sj and θ is the set of possible derivations for sjnote that θ could contain some nonnormalform derivations however because any nonnormalform derivations will be considered incorrect the resulting model will typically assign low probabilities to nonnormalform derivationsthe empirical value in equation is simply a count of the number of times the feature appears in the goldstandard normalform derivationsthe second term in equation is an expectation over all derivations for each sentencethe limited memory bfgs algorithm is a general purpose numerical optimization algorithm in 
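Reconstructed from the description above (the exact layout is ours), the objective function and gradient for the dependency model, with training sentences S_1..S_m, gold-standard dependency structures pi_1..pi_m, and a single smoothing parameter sigma, are:

\[
L'(\Lambda) \;=\; L(\Lambda) - G(\Lambda)
\;=\; \sum_{j=1}^{m} \log P_\Lambda(\pi_j \mid S_j) \;-\; \sum_{i=1}^{n} \frac{\lambda_i^2}{2\sigma^2}
\]
\[
\frac{\partial L'(\Lambda)}{\partial \lambda_i}
\;=\; \sum_{j=1}^{m} E_{P_\Lambda(d \mid \pi_j, S_j)}[f_i]
\;-\; \sum_{j=1}^{m} E_{P_\Lambda(\omega \mid S_j)}[f_i]
\;-\; \frac{\lambda_i}{\sigma^2}
\]

The normal-form objective is analogous, with the gold-standard derivation d_j in place of pi_j and the feature counts on d_j serving as the empirical term of the gradient.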
contrast to iterative scaling algorithms such as gis which update the parameters one at a time on each iteration lbfgs updates the parameters all at once on each iterationit does this by considering the topology of the feature space and moving in a direction which is guaranteed to increase the value of the objective functionthe simplest way in which to consider the shape of the feature space is to move in the direction in which the value of the objective function increases most rapidly this leads to the method of steepestascenthence steepestascent uses the first partial derivative of the objective function to determine parameter updateslbfgs improves on steepestascent by also considering the second partial derivative in fact calculation of the hessian can be prohibitively expensive and so lbfgs estimates this derivative by observing the change in a fixed number of previous gradients malouf gives a more thorough description of numerical optimization methods applied to loglinear modelshe also presents a convincing demonstration that general purpose numerical optimization methods can greatly outperform iterative scaling methods for many nlp tasks3 malouf uses standard numerical computation libraries clark and curran widecoverage efficient statistical parsing as the basis of his implementationone of our aims was to provide a self contained estimation code base and so we implemented our own version of the lbfgs algorithm as described in nocedal and wright the lbfgs algorithm requires the following values at each iteration the expected value and the empirical expected value of each feature for calculating the gradient and the value of the likelihood functionfor the normalform model the empirical expected values and the likelihood can be easily obtained because these only involve the single goldstandard derivation for each sentencefor the dependency model the computations of the empirical expected values and the likelihood function are more complex because these involve sums over just those derivations leading to the goldstandard dependency structureswe explain how these derivations can be found in section 54the next section explains how ccg charts can be represented in a way which allows efficient estimationthe packed charts perform a number of rolesfirst they compactly represent every pair by grouping together equivalent chart entriesentries are equivalent when they interact in the same manner with both the generation of subsequent parse structure and the statistical parse selectionin practice this means that equivalent entries have the same span form the same structures that is the remaining derivation plus dependencies in any subsequent parsing and generate the same features in any subsequent parsingback pointers to the daughters indicate how an individual entry was created so that any derivation plus dependency structure can be recovered from the chartthe second role of the packed charts is to allow recovery of the highest scoring derivation or dependency structure without enumerating all derivationsand finally packed charts are an instance of a feature forest which miyao and tsujii show can be used to efficiently estimate expected values of features even though the expectation may involve a sum over an exponential number of trees in the forestone of the contributions of this section is showing how miyao and tsujiis feature forest approach can be applied to a particular grammar formalism namely ccgas chiang points out miyao and tsujii do not provide a way of constructing a feature forest 
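The article's estimation code implements L-BFGS directly (and, as described below, in parallel). Purely for illustration, the same optimization could be set up with an off-the-shelf routine as in the sketch below, assuming helper functions neg_objective and neg_gradient (not part of the original system) that return the negated objective -L'(Lambda) and its gradient computed over the packed charts.

    import numpy as np
    from scipy.optimize import minimize

    def estimate(neg_objective, neg_gradient, n_features):
        """Maximize L'(Lambda) by minimizing its negation with L-BFGS."""
        lambda0 = np.zeros(n_features)  # start from the zero weight vector
        result = minimize(neg_objective, lambda0, jac=neg_gradient,
                          method="L-BFGS-B", options={"maxiter": 1000})
        return result.x                 # estimated feature weights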
given a sentence but provide the mathematical tools for estimation once the feature forest has been constructedin our packed charts entries are equivalent when they have the same category type identical head and identical unfilled dependenciesthe equivalence test must account for heads and unfilled dependencies because equivalent entries form the same dependencies in any subsequent parsingindividual entries in the chart are obtained by combining canonical representatives of equivalence classes using the rules of the grammarequivalence classes in the chart are sets of equivalent individual entriesa feature forest φ is defined as a tuple where the interpretation of a packed chart as a feature forest is straightforwardfirst only entries which are part of a derivation spanning the whole sentence are relevantthese entries can be found by traversing the chart topdown starting with the entries which span the sentenceindividual entries in a cell are the conjunctive nodes which are either pairs at the leaves or have been obtained by combining two equivalence classes the equivalence classes of individual entries are the disjunctive nodesand finally the equivalence classes at the roots of the ccg derivations are the root disjunctive nodesfor each feature function defined over parses there is a corresponding feature function defined over conjunctive nodes that is for each fi ω 4 n there is a corresponding fi c 4 n which counts the number of times feature fi appears on a particular conjunctive nodethe value of fi for a parse is then the sum of the values offi for each conjunctive node in the parsethe features used in the parsing model determine the definition of the equivalence relation used for grouping individual entriesin our models features are defined in terms of individual dependencies and local rule instantiations where a rule instantiation is the local tree arising from the application of a rule in the grammarnote that features can be defined in terms of longrange dependencies even though such dependencies may involve words which are a long way apart in the sentenceour earlier definition of equivalence is consistent with these feature typesas an example consider the following composition of will with buy using the forward composition rule the equivalence class of the resulting individual entry is determined by the ccg category plus heads in this case np plus the dependencies yet to be filledthe dependencies are not shown but there are two subject dependencies on the first np one encoding the subject of will and one encoding the subject of buy and there is an object dependency on the second np encoding the object of buyentries in the same equivalence class are identical for the purposes of creating new dependencies for the remainder of the parsingit is possible to extend the locality of the features beyond single rule instantiations and local dependenciesfor example the definition of equivalence given earlier allows the incorporation of longrange dependencies as featuresthe equivalence test considers unfilled dependencies which are both local and longrange thus any individual entries which have different longrange dependencies waiting to be filled will be in different equivalence classesone of the advantages of loglinear models is that it is easy to include such features hockenmaier describes the difficulties in including such features in a generative modelone of the early motivations of the edinburgh ccg parsing project was to see if the longrange dependencies recovered by a ccg parser could 
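A minimal rendering of the packed-chart-as-feature-forest view described above (class and field names are ours): conjunctive nodes carry the local feature counts and point to their disjunctive daughters, while disjunctive nodes are equivalence classes of conjunctive nodes.

    class ConjunctiveNode:
        """An individual chart entry: a <word, category> leaf or a combination of equivalence classes."""
        def __init__(self, features, daughters=()):
            self.features = features    # local feature counts f_i(c) for this node
            self.daughters = daughters  # DisjunctiveNode daughters (empty for lexical leaves)

    class DisjunctiveNode:
        """An equivalence class: same span, category, heads, and unfilled dependencies."""
        def __init__(self, members):
            self.members = members      # the equivalent ConjunctiveNodes

    # A packed chart is then the set of disjunctive nodes reachable top-down from the
    # root disjunctive nodes, i.e. those spanning the whole sentence.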
improve the accuracy of a parsing modelin fact we have found that adding longrange dependencies to any of the models described in this article has no impact on accuracyone possible explanation is that the longrange dependencies are so rare that a much larger amount of training data would be required for these dependencies to have an impactof course the fact that ccg enables recovery of longrange dependencies is still a useful property even if these dependencies are not currently useful as features because it improves the utility of the parser outputthere is considerable flexibility in defining the features for a parsing model in our loglinear framework as the longrange dependency example demonstrates but the need for dynamic programming for both estimation and decoding reduces the range of features which can be usedany extension to the locality of the features would reduce the effectiveness of the chart packing and any dynamic programming performed over the charttwo possible extensions which we have not investigated include defining dependency features which account for all three elements of the triple in a ppattachment and defining a rule feature which includes the grandparent node another alternative for future work is to compare the dynamic programming approach taken here with the beamsearch approach of collins and roark which allows more global featuresfor estimating both the normalform model and the dependency model the following expectation of each feature fi with respect to some model a is required where ρ is the set of all parses for sentence s and λ is the vector of weights for athis is essentially the same calculation for both models even though for the dependency model features can be defined in terms of dependencies as well as the derivationsdependencies can be stored as part of the individual entries at which they are created hence all features can be defined in terms of the individual entries which make up the derivationscalculating eλ fi requires summing over all derivations ω which include fi for each sentence s in the training datathe key to performing this sum efficiently is to write the sum in terms of inside and outside scores for each conjunctive nodethe inside and outside scores can be defined recursivelyif the inside score for a conjunctive node c is denoted φc and the outside score denoted ψc then the expected value of fi can be written as follows where cs is the set of conjunctive nodes in the packed chart for sentence s the inside score for a conjunctive node φc is defined in terms of the inside scores of cs disjunctive node daughters where λ f i λifiif the conjunctive node is a leaf node the inside score is just the exponentiation of the sum of the feature weights on that nodethe outside score for a conjunctive node ψc is the outside score for its disjunctive node mother the calculation of the outside score for a disjunctive node ψd is a little more involved it is defined as a sum over the conjunctive mother nodes of the product of the outside score of the mother the inside score of the disjunctive node sister and the feature weights on the motherfor example the outside score of d4 in figure 4 is the sum of two product termsthe first term is the product of the outside score of c5 the inside score of d5 and the feature weights at c5 and the second term is the product of the outside score of c2 the inside score of d3 and the feature weights at c2the definition is as follows the outside score for a root disjunctive node is 1 otherwise the normalization constant zs 
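Reconstructed from the description above (the notation and layout are ours), the feature expectation and the recursive inside/outside definitions over the forest are:

\[
E_\Lambda[f_i] \;=\; \sum_{S} \frac{1}{Z_S} \sum_{c \in C_S} f_i(c)\, \phi_c\, \psi_c
\]
\[
\phi_c \;=\; e^{\lambda \cdot f(c)} \prod_{d \in \mathrm{daughters}(c)} \phi_d,
\qquad
\phi_d \;=\; \sum_{c \in d} \phi_c
\]
\[
\psi_c \;=\; \psi_{\mathrm{mother}(c)},
\qquad
\psi_d \;=\; \sum_{c \,:\, d \in \mathrm{daughters}(c)} \psi_c\, e^{\lambda \cdot f(c)} \prod_{\substack{d' \in \mathrm{daughters}(c) \\ d' \neq d}} \phi_{d'}
\]

with \(\psi_d = 1\) for root disjunctive nodes and \(Z_S = \sum_{d \in \mathrm{roots}(S)} \phi_d\), the outer sum in the expectation ranging over the training sentences.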
is the sum of the inside scores for the root disjunctive nodes in order to calculate inside scores the scores for daughter nodes need to be calculated before the scores for mother nodes this can easily be achieved by ordering the nodes in the bottomup cky parsing orderfor the dependency model the computations of the empirical expected values and the loglikelihood function require sums over just those derivations leading to the goldstandard dependency structurewe will refer to such derivations as correct derivationsas far as we know this problem of identifying derivations in a packed chart which lead to a particular dependency structure has not been addressed before in the nlp literaturefigure 5 gives an algorithm for finding nodes in a packed chart which appear in correct derivations cdeps returns the number of correct dependencies on conjunctive node c and returns the incorrect marker if there are any incorrect dependencies on c dmax returns the maximum number of correct dependencies produced by any subderivation headed by c and returns if there are no subderivations producing algorithm for finding nodes in correct derivations only correct dependencies dmax returns the same value but for disjunctive node d recursive definitions of these functions are given in figure 5 the base case occurs when conjunctive nodes have no disjunctive daughtersthe algorithm identifies all those root nodes heading derivations which produce just the correct dependencies and traverses the chart topdown marking the nodes in those derivationsthe insight behind the algorithm is that for two conjunctive nodes in the same equivalence class if one node heads a subderivation producing more correct dependencies than the other node then the node with less correct dependencies cannot be part of a correct derivationthe conjunctive and disjunctive nodes appearing in correct derivations form a new feature forest which we call a correct forestthe correct forest forms a subset of the complete forest the correct and complete forests can be used to estimate the required loglikelihood value and feature expectationslet eφλ fi be the expected value of fi over the forest can be obtained by calculating eφj where log zφ and log zψ are the normalization constants for take up a considerable amount of memoryone solution is to only keep a small number of charts in memory at any one time and to keep reading in the charts on each iterationhowever given that the lbfgs algorithm takes hundreds of iterations to converge this approach would be infeasibly slowour solution is to keep all charts in memory by developing a parallel version of the lbfgs training algorithm and running it on an 18node beowulf clusteras well as solving the memory problem another significant advantge of parallelization is the reduction in estimation time using 18 nodes allows our bestperforming model to be estimated in less than three hourswe use the the message passing interface standard for the implementation the parallel implementation is a straightforward extension of the bfgs algorithmeach machine in the cluster deals with a subset of the training data holding the packed charts for that subset in memorythe key stages of the algorithm are the calculations of the model expectations and the likelihood functionfor a singleprocess version these are calculated by summing over all the training instances in one placefor a multiprocess version these are summed in parallel and at the end of each iteration the parallel sums are combined to give a master sumproducing a master 
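A rough Python rendering of the recursive scoring functions just described (Figure 5 in the original) is given below; it assumes the conjunctive nodes of the earlier sketch also record their local dependencies, represents the "incorrect" marker as None, and omits the memoization over the shared forest that a real implementation would need.

    def cdeps(c, gold):
        """Number of correct dependencies on conjunctive node c, or None if any are incorrect."""
        if any(dep not in gold for dep in c.dependencies):
            return None
        return sum(1 for dep in c.dependencies if dep in gold)

    def dmax_conj(c, gold):
        """Max number of correct deps in any subderivation headed by c (None if none exists)."""
        total = cdeps(c, gold)
        if total is None:
            return None
        for d in c.daughters:
            best = dmax_disj(d, gold)
            if best is None:
                return None
            total += best
        return total

    def dmax_disj(d, gold):
        scores = [s for s in (dmax_conj(c, gold) for c in d.members) if s is not None]
        return max(scores) if scores else None

    # Root disjunctive nodes whose dmax equals the number of gold-standard dependencies head
    # derivations producing exactly the gold structure; the chart is then traversed top-down,
    # keeping only the nodes that achieve the dmax value of their (marked) equivalence class.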
operation across a cluster using mpi is a reduce operationin our case every node needs to be holding a copy of the master sum so we use an all reduce operationthe mpi library handles all aspects of the parallelization including finding the optimal way of summing across the nodes of the beowulf cluster in fact the parallelization only adds around twenty lines of code to the singleprocess implementationbecause of the simplicity of the parellel communication between the nodes parallelizing the estimation code is an example of an embarrassingly parallel problemone difficult aspect of the parallel implementation is that debugging can be much harder in which case it is often easier to test a nonmpi version of the program firstfor the normalform model the viterbi algorithm is used to find the most probable derivation from a packed chartfor each equivalence class we record the individual entry at the root of the subderivation which has the highest score for the classthe equivalence classes were defined so that any other individual entry cannot be part of the highest scoring derivation for the sentencethe score for a subderivation d is ei λifi where fi is the number of times the ith feature occurs in the subderivationthe highestscoring subderivations can be calculated recursively using the highestscoring equivalence classes that were combined to create the individual entryfor the dependency model the highest scoring dependency structure is requiredclark and curran outline an algorithm for finding the most probable dependency structure which keeps track of the highest scoring set of dependencies for each node in the chartfor a set of equivalent entries in the chart this involves summing over all conjunctive node daughters which head subderivations leading to the same set of high scoring dependenciesin practice large numbers of such conjunctive nodes lead to very long parse timesas an alternative to finding the most probable dependency structure we have developed an algorithm which maximizes the expected labeled recall over dependenciesour algorithm is based on goodmans labeled recall algorithm for the phrasestructure parseval measuresas far as we know this is the first application of goodmans approach to finding highest scoring dependency structureswatson carroll and briscoe have also applied our algorithm to the grammatical relations output by the rasp parserthe dependency structure πmax which maximizes the expected recall is where πi ranges over the dependency structures for s the expectation for a single dependency structure π is realized as a weighted intersection over all possible dependency structures πi for s the intuition is that if πi is the gold standard then the number of dependencies recalled in π is π πibecause we do not know which πi is the gold standard then we calculate the expected recall by summing the recall of π relative to each πi weighted by the probability of πithe expression can be expanded further the reason for this manipulation is that the expected recall score for π is now written in terms of a sum over the individual dependencies in π rather than a sum over each dependency structure for s the inner sum is over all derivations which contain a particular individual dependency τthus the final score for a dependency structure π is a sum of the scores for each dependency τ in π and the score for a dependency τ is the sum of the probabilities of those derivations producing τthis latter sum can be calculated efficiently using inside and outside scores where φc is the inside 
score and ψc is the outside score for node c c is the set of conjunctive nodes in the packed chart for sentence s and deps is the set of dependencies on conjunctive node c the intuition behind the expected recall score is that a dependency structure scores highly if it has dependencies produced by high probability derivations4 the reason for rewriting the score in terms of individual dependencies is to make use of the packed chart the score for an individual dependency can be calculated using dynamic programming and the highest scoring dependency structure can be found using dynamic programming alsothe algorithm which finds πmax is essentially the same as the viterbi algorithm described earlier efficiently finding a derivation which produces the highest scoring set of dependenciesthe loglinear modeling framework allows considerable flexibility for representing the parse space in terms of featuresin this article we limit the features to those defined over local rule instantiations and single predicateargument dependenciesthe feature sets described below differ for the dependency and normalform modelsthe dependency model has features defined over the ccg predicateargument dependencies whereas the dependencies for the normalform model are defined in terms of local rule instantiations in the derivationanother difference is that the rule features for the normalform model are taken from the goldstandard normalform derivations whereas the dependency model contains rule features from nonnormalform derivationsthere are a number of features defined over derivations which are common to the dependency model and the normalform model5 first there are features which represent each pair in a derivation and generalizations of these which represent pairssecond there are features representing the root category of a derivation which we also extend with the head word of the root category this latter feature is then generalized using the pos tag of the head third there are features which encode rule instantiationslocal trees consisting of a parent and one or two childrenin the derivationthe first set of rule features encode the combining categories and the result category the second set of features extend the first by also encoding the head of the result category and the third set generalizes the second using pos tagstable 1 gives an example for each of these feature typesthe dependency model also has ccg predicateargument dependencies as features defined as 5tuples as in section 34in addition these features are generalized in three ways using pos tags with the wordword pair replaced with wordpos posword and pospostable 2 gives some exampleswe extend the dependency features further by adding distance informationthe distance features encode the dependency relation and the word associated with the lexical category plus some measure of distance between the two dependent wordswe use three distance measures which count the following the number of intervening words with four possible values 0 1 2 or more the number of intervening punctuation marks with four possible values 0 1 2 or more and the number of intervening verbs with three possible values 0 1 or moreeach of these features is again generalized by replacing the word associated with the lexical category with its pos tag5 each feature has a corresponding frequency function defined in equation which counts the number of times the feature appears in a derivationfor the normalform model we follow hockenmaier and steedman by defining dependency features in terms of 
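Returning to the maximum-recall decoder described above, the quantity being maximized can be summarized as follows (reconstructed from the surrounding description; deps denotes the dependencies produced by a parse or stored on a conjunctive node):

\[
\pi_{\max} \;=\; \arg\max_{\pi} \sum_{\tau \in \pi} \mathrm{score}(\tau),
\qquad
\mathrm{score}(\tau) \;=\; \sum_{\omega \,:\, \tau \in \mathrm{deps}(\omega)} P(\omega \mid S)
\;=\; \frac{1}{Z_S} \sum_{c \in C_S \,:\, \tau \in \mathrm{deps}(c)} \phi_c\, \psi_c
\]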
the local rule instantiations by adding the heads of the combining categories to the rule instantiation features6 these are generalized in three ways using pos tags as shown in table 3there are also the three distance measures which encode the distance between the two head words of the combining categories as for the dependency modelhere the distance feature encodes the combining categories the result category the head of the result category and the distance between the two head wordsfor the features in the normalform model a frequency cutoff of two was applied that is a feature had to occur at least twice in the goldstandard normalform derivations to be included in the modelthe same cutoff was applied to the features in the dependency model except for the rule instantiation feature typesfor these features the counting was done across all derivations licensed by the goldstandard lexical category sequences and a frequency cutoff of 10 was appliedthe larger cutoff was used because the productivity of the grammar can lead to very large numbers of these featureswe also only included those features which had a nonzero empirical count that is those features which occured on at least one correct derivationthese feature types and frequency cutoffs led to 475537 features for the normalform model and 632591 features for the dependency modelparsing with lexicalized grammar formalisms such as ccg is a twostep process first elementary syntactic structuresin ccgs case lexical categoriesare assigned to each word in the sentence and then the parser combines the structures togetherthe first step can be performed by simply assigning to each word all lexical categories the word is seen with in the training data together with some strategy for dealing with rare and unknown words because the number of lexical categories assigned to a word can be high some strategy is needed to make parsing practical hockenmaier for example uses a beam search to discard chart entries with low scoresin this article we take a different approach by using a supertagger to perform step oneclark and curran describe the supertagger which uses loglinear models to define a distribution over the lexical category set for each local fiveword context containing the target word the features used in the models are the words and pos tags in the fiveword window plus the two previously assigned lexical categories to the leftthe conditional probability of a sequence of lexical categories given a sentence is then defined as the product of the individual probabilities for each categorythe most probable lexical category sequence can be found efficiently using a variant of the viterbi algorithm for hmm taggerswe restrict the categories which can be assigned to a word by using a tag dictionary for words seen at least k times in the training data the tagger can only assign categories which have been seen with the word in the datafor words seen less than k times an alternative based on the words pos tag is used the tagger can only assign categories which have been seen with the pos tag in the datawe have found the tag dictionary to be beneficial in terms of both efficiency and accuracya value of k 20 was used in the experiments described in this articlethe lexical category set used by the supertagger is described in clark and curran and curran clark and vadas it includes all lexical catgeories which appear at least 10 times in sections 0221 of ccgbank resulting in a set of 425 categoriesthe clark and curran paper shows this set to have very high 
coverage on unseen datathe accuracy of the supertagger on section 00 of ccgbank is 926 with a sentence accuracy of 368sentence accuracy is the percentage of sentences whose words are all tagged correctlythese figures include punctuation marks for which the lexical category is simply the punctuation mark itself and are obtained using gold standard pos tagswith automatically assigned pos tags using the pos tagger of curran and clark the accuracies drop to 915 and 325an accuracy of 9192 may appear reasonable given the large lexical category set however the low sentence accuracy suggests that the supertagger may not be accurate enough to serve as a frontend to a parserclark reports that a significant loss in coverage results if the supertagger is used as a frontend to the parser of hockenmaier and steedman in order to increase the number of words assigned the correct category we develop a ccg multitagger which is able to assign more than one category to each wordhere yi is to be thought of as a constant category whereas yj varies over the possible categories for word jin words the probability of category yi given the sentence is the sum of the probabilities of all sequences containing yithis sum can be calculated efficiently using a variant of the forwardbackward algorithmfor each word in the sentence the multitagger then assigns all those categories whose probability according to equation is within some factor β of the highest probability category for that wordin the implementation used here the forwardbackward sum is limited to those sequences allowed by the tag dictionaryfor efficiency purposes an extra pruning strategy is also used to discard low probability subsequences before the forward backward algorithm is runthis uses a second variablewidth beam of 01βtable 4 gives the perword accuracy of the supertagger on section 00 for various levels of category ambiguity together with the average number of categories per word7 the sent column gives the percentage of sentences whose words are all supertagged correctlythe set of categories assigned to a word is considered correct if it contains the correct categorythe table gives results when using gold standard pos tags and in the final two columns when using pos tags automatically assigned by the pos tagger described in curran and clark the drop in accuracy is expected given the importance of pos tags as featuresthe table demonstrates the significant reduction in the average number of categories that can be achieved through the use of a supertaggerto give one example the number of categories in the tag dictionarys entry for the word is is 45however in the sentence mr vinken is chairman of elsevier nv the dutch publishing group the supertagger correctly assigns one category to is for all values of βin our earlier work the forwardbackward algorithm was not used to estimate the probability in equation curran clark and vadas investigate the improvement obtained from using the forwardbackward algorithm and also address the drop in supertagger accuracy when using automatically assigned pos tagswe show how to maintain some pos ambiguity through to the supertagging phase using a multipos tagger and also how pos tag probabilities can be encoded as realvalued features in the supertaggerthe drop in supertagging accuracy when moving from gold to automatically assigned pos tags is reduced by roughly 50 across the various values of βthe philosophy in earlier work which combined the supertagger and parser was to use an unrestrictive setting of the supertagger 
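Spelled out, the marginal used by the multi-tagger and the beta-based selection rule described above are (with y_1..y_n ranging over lexical category sequences for the sentence):

\[
P(y_i = c \mid S) \;=\; \sum_{\substack{y_1 \ldots y_n \\ y_i = c}} P(y_1 \ldots y_n \mid S)
\]

and word \(i\) is assigned every category \(c\) satisfying

\[
P(y_i = c \mid S) \;\geq\; \beta \cdot \max_{c'} P(y_i = c' \mid S).
\]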
but still allow a reasonable compromise between speed and accuracythe idea was to give the parser the greatest possibility of finding the correct parse by initializing it with as many lexical categories as possible but still retain reasonable efficiencyhowever for some sentences the number of categories in the chart gets extremely large with this approach and parsing is unacceptably slowhence a limit was applied to the number of categories in the chart and a more restrictive setting of the supertagger was reverted to if the limit was exceededin this article we consider the opposite approach start with a very restrictive setting of the supertagger and only assign more categories if the parser cannot find an analysis spanning the sentencein this way the parser interacts much more closely with the supertaggerin effect the parser is using the grammar to decide if the categories provided by the supertagger are acceptable and if not the parser requests more categoriesthe advantage of this adaptive supertagging approach is that parsing speeds are much higher without any corresponding loss in accuracysection 103 gives results for the speed of the parserthe algorithm used to build the packed charts is the cky chart parsing algorithm described in steedman the cky algorithm applies naturally to ccg because the grammar is binaryit builds the chart bottomup starting with constituents spanning a single word incrementally increasing the span until the whole sentence is coveredbecause the constituents are built in order of span size at any point in the process all the subconstituents which could be used to create a particular new constituent must be present in the charthence dynamic programming can be used to prevent the need for backtracking during the parsing processthere is a tradeoff between the size and coverage of the grammar and the efficiency of the parserone of our main goals in this work has been to develop a parser which can provide analyses for the vast majority of linguistic constructions in ccgbank but is also efficient enough for largescale nlp applicationsin this section we describe some of the decisions we made when implementing the grammar with this tradeoff in mindfirst the lexical category set we use does not contain all the categories in sections 0221 of ccgbankapplying a frequency cutoff of 10 results in a set of 425 lexical categoriesthis set has excellent coverage on unseen data and is a manageable size for adding the head and dependency information and also mapping to grammatical relations for evaluation purposes second for the normalform model and also the hybrid dependency model described in section 1021 two types of contraints on the grammar rules are usedsection 3 described the eisner constraints in which any constituent which is the result of a forward composition cannot serve as the primary functor in another forward composition or forward application an analogous constraint applies for backward compositionthe second type of constraint only allows two categories to combine if they have been seen to combine in the training dataalthough this constraint only permits category combinations seen in sections 0221 of ccgbank we have found that it is detrimental to neither parser accuracy nor coverageneither of these constraints guarantee a normalform derivation but they are both effective at reducing the size of the charts which can greatly increase parser speed the constraints are also useful for trainingsection 10 shows that having a less restrictive setting on the supertagger 
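The adaptive strategy just described can be sketched as a simple loop over supertagger settings; the function names are ours, and the beta values are taken from the later discussion of parsing speed (the final, least restrictive level also relaxes the tag-dictionary cutoff, which is only noted in a comment here).

    # Sketch of adaptive supertagging: start with the most restrictive setting and only
    # relax it when the parser cannot find a spanning analysis.
    BETAS = [0.075, 0.03, 0.01, 0.005, 0.001]  # most to least restrictive; the last level
                                               # also uses a larger tag-dictionary parameter k

    def parse_adaptive(sentence, supertag, build_chart):
        for beta in BETAS:
            categories = supertag(sentence, beta)      # multi-tagger output at this ambiguity level
            chart = build_chart(sentence, categories)
            if chart.has_spanning_analysis():
                return chart
        return None                                    # no analysis at any level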
when creating charts for discriminative training can lead to more accurate modelshowever the optimal setting on the supertagger for training purposes can only be used when the constraints are applied because otherwise the memory requirements are prohibitivefollowing steedman we place the following constraint on backward crossed composition the y category in cannot be an n or np categorywe also place a similar constraint on backward compositionboth constraints reduce the size of the charts considerably with no impact on coverage or accuracytyperaising is performed by the parser for the categories np pp and sadjnpit is implemented by adding one of three fixed sets of categories to the chart whenever an np pp or sadjnp is presentappendix a gives the category setseach category transformation is an instance of the following two rule schemata appendix a lists the punctuation and typechanging rules implemented in the parserthis is a larger grammar than we have used in previous articles mainly because the improvement in the supertagger since the earlier work means that we can now use a larger grammar but still maintain highly efficient parsingthe statistics relating to model estimation were obtained using sections 0221 of ccgbank as training datathe results for parsing accuracy were obtained using section 00 as development data and section 23 as the final test datathe results for parsing speed were obtained using section 23there are various hyperparameters in the parsing system for example the frequency cutoff for features the σ parameter in the gaussian prior term the r values used in the supertagger and so onall of these were set experimentally using section 00 as development datathe gold standard for the normalform model consists of the normalform derivations in ccgbankfor the dependency model the goldstandard dependency structures are produced by running our ccg parser over the normalform derivationsit is essential that the packed charts for each sentence contain the gold standard for the normalform model this means that our parser must be able to produce the goldstandard derivation from the goldstandard lexical category sequence and for the dependency model this means that at least one derivation in the chart must produce the goldstandard dependency structurenot all rule instantiations in ccgbank can be produced by our parser because some are not instances of combinatory rules and others are very rare punctuation and typechanging rules which we have not implementedhence it is not possible for the parser to produce the gold standard for every sentence in sections 0221 for either the normalform or the dependency modelthese sentences are not used in the training processfor parsing the training data we ensure that the correct category is a member of the set assigned to each wordthe average number of categories assigned to each word is determined by the β parameter in the supertaggera category is assigned to a word if the categorys probability is within β of the highest probability category for that wordhence the value of β has a direct effect on the size of the packed charts smaller β values lead to larger chartsfor training purposes the β parameter determines how many incorrect derivations will be used for each sentence for the discriminative training algorithmwe have found that the β parameter can have a large impact on the accuracy of the resulting models if the β value is too large then the training algorithm does not have enough incorrect derivations to discriminate against if the β value is 
too small then this introduces too many incorrect derivations into the training process and can lead to impractical memory requirementsfor some sentences the packed charts can become very largethe supertagging approach we adopt for training differs from that used for testing and follows the original approach of clark hockenmaier and steedman if the size of the chart exceeds some threshold the value of β is increased reducing ambiguity and the sentence is supertagged and parsed againthe threshold which limits the size of the charts was set at 300000 individual entriesfor a small number of long sentences the threshold is exceeded even at the largest β value these sentences are not used for trainingfor the normalform model we were able to use 35732 sentences for training and for the dependency model 35889 sentences table 5 gives training statistics for the normalform and dependency models for various sequences of β values when the training algorithm is run to convergence on an 18node clusterthe training algorithm is defined to have converged when the percentage change in the objective function is less than 00001the σ value in equation which was determined experimentally using the development data was set at 13 for all the experiments in this articlethe main reason that the normalform model requires less memory and converges faster than the dependency model is that for the normalform model we applied the two types of normalform restriction described in section 93 first categories can only combine if they appear together in a rule instantiation in sections 221 of ccgbank and second we applied the eisner constraints described in section 3we conclude this section by noting that it is only through the use of the supertagger that we are able to perform the discriminative estimation at all without it the memory requirements would be prohibitive even when using the clusterthis section gives accuracy figures on the predicateargument dependencies in ccgbankoverall results are given as well as results broken down by relation type as in clark hockenmaier and steedman because the purpose of this article is to demonstrate the feasibility of widecoverage parsing with ccg we do not give an evaluation targeted specifically at longrange dependencies such an evaluation was presented in clark steedman and curran for evaluation purposes the threshold parameter which limits the size of the charts was set at 1000000 individual entriesthis value was chosen to maximize the coverage of the parser so that the evaluation is performed on as much of the unseen data as possiblethis was also the threshold parameter used for the speed experiments in section 103all of the intermediate results were obtained using section 00 of ccgbank as development datathe final test result showing the performance of the best performing model was obtained using section 23evaluation was performed by comparing the dependency output of the parser against the predicateargument dependencies in ccgbankwe report precision recall and fscores for labeled and unlabeled dependencies and also category accuracythe category accuracy is the percentage of words assigned the correct lexical category by the parser the labeled dependency scores take into account the lexical category containing the dependency relation the argument slot the word associated with the lexical category and the argument head word all four must be correct to score a pointfor the unlabeled scores only the two dependent words are consideredthe fscore is the balanced harmonic mean of 
precision and recall, 2PR/(P + R). The scores are given only for those sentences which were parsed successfully. We also give coverage values, showing the percentage of sentences which were parsed successfully. Using the CCGbank dependencies for evaluation is a departure from our earlier work, in which we generated our own gold standard by running the parser over the derivations in CCGbank and outputting the dependencies. In this article we wanted to use a gold standard which is easily accessible to other researchers. However, there are some differences between the dependency scheme used by our parser and CCGbank. For example, our parser outputs some coordination dependencies which are not in CCGbank; also, because the parser currently encodes every argument slot in each lexical category as a dependency relation, there are some relations, such as the subject of to in a to-infinitival construction, which are not in CCGbank either. In order to provide a fair evaluation we ignore those dependency relations. This still leaves some minor differences. We can measure the remaining differences as follows: comparing the CCGbank dependencies in Section 00 against those generated by running our parser over the derivations in 00 gives labeled precision and recall values of 99.80 and 99.18, respectively. Thus there are a small number of dependencies in CCGbank which the current version of the parser can never get right.

10.2.1 Dependency Model vs. Normal-Form Model. Table 6 shows the results for the normal-form and dependency models evaluated against the predicate-argument dependencies in CCGbank. Gold-standard POS tags were used; the LF column gives the labeled F-score with automatically assigned POS tags for comparison. Decoding with the dependency model involves finding the maximum-recall dependency structure, and decoding with the normal-form model involves finding the most probable derivation, as described in Section 6. The β value refers to the setting of the supertagger used for training, and is the first in the sequence of βs from Table 5. The β values used during testing are those in Table 4, and the new, efficient supertagging strategy of taking the highest β value first was used. With the same β values used for training, the results for the dependency model are slightly higher than for the normal-form model; however, the coverage of the normal-form model is higher. One clear result from the table is that increasing the chart size used for training, by using smaller β values, can significantly improve the results, in this case around 1.5 F-score for the normal-form model. The training of the dependency model already uses most of the RAM available on the cluster. However, it is possible to use smaller β values for training the dependency model if we also apply the two types of normal-form restriction used by the normal-form model. This hybrid model still uses the features from the dependency model, it is still trained using dependency structures as the gold standard, and decoding is still performed using the maximum-recall algorithm; the only difference is that the derivations in the charts are restricted by the normal-form constraints. Table 5 gives the training statistics for this model compared to the dependency and normal-form models. The number of sentences we were able to use for training this model was 36,345. The accuracy of this hybrid dependency model is given in Table 7. These are the highest results we have obtained to date on Section 00. We also give the results for the normal-form model from Table 6 for comparison.
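To make the labeled dependency measures used here concrete, the following is a minimal sketch, in Python, of how precision, recall, and F-score over predicate-argument dependencies might be computed. It is not the evaluation code used in the article; the 4-tuple encoding (lexical category, argument slot, head word, argument word) follows the description above, but the toy sentence and the exact tuple layout are illustrative assumptions.

```python
# A minimal sketch (not the authors' evaluation code) of labeled dependency
# scoring: a dependency is a 4-tuple (lexical category, argument slot,
# head word, argument word), and all four fields must match to be correct.

def prf(gold_deps, test_deps):
    """Return labeled precision, recall, and F-score for one comparison."""
    gold = set(gold_deps)
    test = set(test_deps)
    correct = len(gold & test)
    p = correct / len(test) if test else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0   # F = 2PR / (P + R)
    return p, r, f

# Hypothetical example: two gold dependencies, two parser dependencies,
# one of which has the wrong argument word.
gold = [("(S[dcl]\\NP)/NP", 1, "likes", "Kim"),
        ("(S[dcl]\\NP)/NP", 2, "likes", "oranges")]
test = [("(S[dcl]\\NP)/NP", 1, "likes", "Kim"),
        ("(S[dcl]\\NP)/NP", 2, "likes", "juicy")]
print(prf(gold, test))   # (0.5, 0.5, 0.5)
```

In the article the counts are aggregated over all successfully parsed sentences before the ratios are taken; the per-comparison function above is just the simplest way to show the definition.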
Table 8 gives the results for the hybrid dependency model, broken down by relation type, using the same relations given in Clark, Hockenmaier, and Steedman; automatically assigned POS tags were used.

10.2.2 Final Test Results. Table 9 gives the final test results on Section 23 for the hybrid dependency model. The coverage for these results is 99.63%, which corresponds to 2,398 of the 2,407 sentences in Section 23 receiving an analysis. When using automatically assigned POS tags the coverage is slightly lower, 99.58%. We used version 1.2 of CCGbank to obtain these results. Results are also given for Hockenmaier's parser, which used an earlier, slightly different version of the treebank; we wanted to use the latest version to enable other researchers to compare with our results.

The results in this section were obtained using a 3.2 GHz Intel Xeon P4. Table 10 gives parse times for the 2,407 sentences in Section 23 of CCGbank. In order not to optimize speed by compromising accuracy, we used the hybrid dependency model together with both kinds of normal-form constraints and the maximum-recall decoder. Times are given for both automatically assigned POS tags and gold-standard POS tags; the sents and words columns give the number of sentences and the number of words parsed per second. For all of the figures reported on Section 23, unless stated otherwise, we chose settings for the various parameters which resulted in a coverage of 99.6%. It is possible to obtain an analysis for the remaining 0.4%, but at a significant loss in speed. The parse times and speeds include the failed sentences, and include the time taken by the supertagger but not the POS tagger; however, the POS tagger is extremely efficient, taking less than 4 seconds to tag Section 23, most of which consists of load time for the maximum entropy model.

The first row corresponds to the strategy of earlier work, starting with an unrestrictive setting of the supertagger. The first value of β is 0.005; if the parser cannot find a spanning analysis, this is changed to β = 0.001, k = 150, which increases the average number of categories assigned to a word by decreasing β and increasing the tag-dictionary parameter. If the node limit is exceeded at β = 0.005, β is changed to 0.01; if the node limit is still exceeded, β is changed to 0.03, and finally to 0.075. The second row corresponds to the new strategy of starting with the most restrictive setting of the supertagger and moving through the settings if the parser cannot find a spanning analysis. The table shows that the new strategy has a significant impact on parsing speed, increasing it by a factor of 3 over the earlier approach. The penultimate row corresponds to using only one supertagging level, with β = 0.075; the parser ignores the sentence if it cannot get an analysis at this level. The percentage of sentences without an analysis is now over 6%, but the parser is extremely fast, processing over 30 sentences per second. This configuration of the system would be useful for obtaining data for lexical knowledge acquisition, for example, for which large amounts of data are required. The oracle row gives the parser speed when it is provided with only the correct lexical categories, showing the speeds which could be achieved given the perfect supertagger. Table 11 gives the percentage of sentences which are parsed at each supertagger level, for both the new and old parsing strategies. The results show that, for the old approach, most of the sentences are parsed using the least restrictive setting of the supertagger; conversely, for the new approach, most of the sentences are parsed using the most restrictive setting. This suggests that, in order to increase the accuracy of the
parser without losing efficiency the accuracy of the supertagger at the β 0075 level needs to be improved without increasing the number of categories assigned on averagea possible response to our policy of adaptive supertagging is that any statistical parser can be made to run faster for example by changing the beam parameter in the collins parser but that any increase in speed is typically associated with a reduction in accuracyfor the ccg parser the accuracy did not degrade when using the new adaptive parsing strategythus the accuracy and efficiency of the parser were not tuned separately the configuration used to obtain the speed results was also used to obtain the accuracy results in sections 102 and 11to give some idea of how these parsing speeds compare with existing parsers table 12 gives the parse times on section 23 for a number of wellknown parserssagae and lavie is a classifierbased linear time parserthe times for the sagae collins and charniak parsers were taken from the sagae and lavie paper and were obtained using a 18 ghz p4 compared to a 32 ghz p4 for the ccg numberscomparing parser speeds is especially problematic because of implementation differences and the fact that the accuracy of the parsers is not being controlledthus we are not making any strong claims about the efficiency of parsing with ccg compared to other formalismshowever the results in table 12 add considerable weight to one of our main claims in this article namely that highly efficient parsing is possible with ccg and that largescale processing is possible with linguistically motivated grammarsan obvious question is how well the ccg parser compares with parsers using different grammar formalismsone question we are often asked is whether the ccg derivations output by the parser could be converted to penn treebankstyle trees to enable a comparison with for example the collins and charniak parsersthe difficulty is that ccg derivations often have a different shape to the penn treebank analyses and reversing the mapping used by hockenmaier to create ccgbank is a far from trivial taskthere is some existing work comparing parser performance across formalismsbriscoe and carroll evaluate the rasp parser on the parc dependency bank cahill et al evaluate an lfg parser which uses an automatically extracted grammar against depbanknvliyao and tsujii evaluate their hpsg parser against propbank kaplan et al compare the collins parser with the parc lfg parser by mapping penn treebank parses into the dependencies of depbank claiming that the lfg parser is more accurate with only a slight reduction in speedpreiss compares the parsers of collins and charniak the grammatical relations finder of buchholz veenstra and daelemans and the briscoe and carroll parser using the goldstandard grammatical relations from carroll briscoe and sanfilippo the penn treebank trees of the collins and charniak parsers and the grs of the buchholz parser are mapped into the required grammatical relations with the result that the gr finder of buchholz is the most accuratethere are a number of problems with such evaluationsthe first is that when converting the output of the collins parser for example into the output of another parser the collins parser is at an immediate disadvantagethis is especially true if the alternative output is significantly different from the penn treebank trees and if the information required to produce the alternative output is hard to extractone could argue that the relative lack of grammatical information in the output of 
the Collins parser is a weakness, and any evaluation should measure that. However, we feel that the onus of mapping into another formalism should ideally lie with the researchers making claims about their own particular parser. The second difficulty is that some constructions may be analyzed differently across formalisms, and even apparently trivial differences, such as tokenization, can complicate the comparison.

Despite these difficulties we have attempted a cross-formalism comparison of the CCG parser. For the gold standard we chose the version of DepBank reannotated by Briscoe and Carroll, consisting of 700 sentences from Section 23 of the Penn Treebank. The Briscoe and Carroll scheme is similar to the original DepBank scheme in many respects, but overall contains less grammatical detail; Briscoe and Carroll describe the differences between the two schemes. We chose this resource for the following reasons: it is publicly available, allowing other researchers to compare against our results; the GRs making up the annotation share some similarities with the predicate-argument dependencies output by the CCG parser; and we can directly compare our parser against a non-CCG parser, namely the RASP parser. And because we are converting the CCG output into the format used by RASP, the CCG parser is not at an unfair advantage. There is also the Susanne GR gold standard, on which the Briscoe and Carroll annotation is based, but we chose not to use this for evaluation. This earlier GR scheme is less like the dependencies output by the CCG parser, and the comparison would be complicated further by the fact that, unlike CCGbank, the Susanne corpus is not based on the Penn Treebank.

The GRs are described in Briscoe, Briscoe and Carroll, and Briscoe, Carroll, and Watson. Table 13 contains the complete list of GRs used in the evaluation, with examples taken from Briscoe (for instance: xcomp, an unsaturated VP complement, as in Kim thought of leaving; ccomp, a saturated clausal complement, as in Kim asked about him playing rugby; and ta, a textual adjunct delimited by punctuation, as in He made the discovery, Kim was the abbot).

The CCG dependencies were transformed into GRs in two stages. The first stage was to create a mapping between the CCG dependencies and the GRs; this involved mapping each argument slot in the 425 lexical categories in the CCG lexicon onto a GR. In the second stage, the GRs created for a particular sentence (by applying the mapping to the parser output) were passed through a Python script designed to correct some of the obvious remaining differences between the CCG and GR representations.

In the process of performing the transformation we encountered a methodological problem: without looking at examples, it was difficult to create the mapping, and impossible to know whether the two representations were converging. Briscoe, Carroll, and Watson split the 700 sentences in DepBank into a test and development set, but the latter only consists of 140 sentences, which we found was not enough to reliably create the transformation. There are some development files in the RASP release which provide examples of the GRs, which we used when possible, but these only cover a subset of the CCG lexical categories. Our solution to this problem was to convert the gold-standard dependencies from CCGbank into GRs and use these to develop the transformation. So we did inspect the annotation in DepBank and compared it to the transformed CCG dependencies, but only the gold-standard CCG dependencies; thus the parser output was never used during this process. We also ensured that the dependency mapping and the post-processing are general to the GRs scheme and not specific to the test set.
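The first stage of the transformation can be pictured as a lookup table keyed on (lexical category, argument slot). The sketch below is a hedged illustration of that idea only: the table entries, the GR triple format, and the function name are assumptions made for the example, not the released mapping file or the C&C code, although the entries are in the spirit of the examples in Table 14 discussed next.

```python
# A rough sketch of stage one of the dependency-to-GRs transformation:
# each (lexical category, argument slot) pair is mapped to a GR label.
# The table contents and output format are illustrative assumptions.

GR_MAP = {
    ("(S[dcl]\\NP)/NP", 1): "ncsubj",   # non-clausal subject
    ("(S[dcl]\\NP)/NP", 2): "dobj",     # direct object
    ("(S\\NP)\\(S\\NP)", 1): "ncmod",   # VP modifier
}

def deps_to_grs(deps):
    """deps: iterable of (category, slot, head_word, arg_word) tuples
    from the parser; returns (gr_label, head, dependent) triples, skipping
    any dependency whose slot has no entry in the mapping."""
    grs = []
    for cat, slot, head, arg in deps:
        label = GR_MAP.get((cat, slot))
        if label is None:
            continue                        # slot ignored or unmapped
        if label == "ncmod":
            grs.append((label, arg, head))  # modifier GRs switch the order
        else:
            grs.append((label, head, arg))
    return grs

deps = [("(S[dcl]\\NP)/NP", 1, "likes", "Kim"),
        ("(S[dcl]\\NP)/NP", 2, "likes", "oranges")]
print(deps_to_grs(deps))
# [('ncsubj', 'likes', 'Kim'), ('dobj', 'likes', 'oranges')]
```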
Table 14 gives some examples of the dependency mapping. Because the number of sentences annotated with GRs is so small, the only other option would have been to guess at various DepBank analyses, which would have made the evaluation even more biased against the CCG parser. One advantage of this approach is that, by comparing the transformed gold-standard CCG dependencies with the gold-standard GRs, we can measure how close the CCG representation is to the GRs. This provides some indication of how difficult it is to perform the transformation, and also provides an upper bound on the accuracy of the parser on DepBank. This method would be useful when converting the output of the Collins parser into an alternative representation: applying the transformation to the gold-standard Penn Treebank trees and comparing with DepBank would provide an upper bound on the performance of the Collins parser, and give some indication of the effectiveness of the transformation.

For many of the CCG dependencies, the mapping into GRs is straightforward. For example, the first two rows of Table 14 show the mapping for the transitive verb category: argument slot 1 is a non-clausal subject, and argument slot 2 is a direct object. In the example Kim likes juicy oranges, likes is associated with the transitive verb category, Kim is the subject, and oranges is the head of the constituent filling the object slot, leading to the corresponding ncsubj and dobj GRs. The third row shows an example of a modifier category, which modifies a verb phrase to the right. Note that in this example the order of the lexical category and filler is switched compared to the previous example, to match the DepBank annotation.

There are a number of reasons why creating the dependency transformation is more difficult than these examples suggest. The first problem is that the mapping from CCG dependencies to GRs is many-to-many. For example, the transitive verb category applies to the copular in sentences like Imperial Corp. is the parent of Imperial Savings & Loan. With the default annotation, the relation between is and parent would be dobj, whereas in DepBank the argument of the copular is analyzed as an xcomp. Table 15 gives some examples of how we attempt to deal with this problem. The constraint in the first example means that, whenever the word associated with the transitive verb category is a form of be, the second argument is xcomp; otherwise the default case applies. There are a number of categories with similar constraints, checking whether the word associated with the category is a form of be. The second type of constraint, shown in the third line of the table, checks the lexical category of the word filling the argument slot. In this example, if the lexical category of the preposition is PP/NP, then the second argument maps to iobj; thus in The loss stems from several factors, the relation between the verb and preposition is iobj. If the preposition has a different lexical category, then the GR is xcomp; thus in The future depends on building cooperation, the relation between the verb and preposition is xcomp. There are a number of CCG dependencies with similar constraints, many of them covering the iobj/xcomp distinction.

The second difficulty in creating the transformation is that not all the GRs are binary relations, whereas the CCG dependencies are all binary. The primary example of this is to-infinitival constructions. For example, in the sentence The company wants to wean itself away from expensive gimmicks, the CCG parser produces two dependencies relating wants, to, and wean, whereas there is only one GR. The
final row of table 15 gives an examplewe implement this constraint by introducing a k variable into the gr template which denotes the argument of the category in the constraint column in the example the current category is 2 which is associated with wants this combines with associated with to and the argument of is weanthe k variable allows us to look beyond the arguments of the current category when creating the grsa further difficulty in creating the transformation is that the head passing conventions differ between depbank and ccgbankby head passing we mean the mechanism which determines the heads of constituents and the mechanism by which words become arguments of longrange dependenciesfor example in the sentence the group said it would consider withholding royalty payments the depbank and ccgbank annotations create a dependency between said and the following clausehowever in depbank the relation is between said and consider whereas in ccgbank the relation is between said and wouldwe fixed this problem by changing the head of would consider to be consider rather than wouldin practice this means changing the annotation of all the relevant lexical categories in the markedup file8 the majority of the categories to which this applies are those creating aux relationsa related difference between the two resources is that there are more subject relations in ccgbank than depbankin the previous example ccgbank has a subject relation between it and consider and also it and would whereas depbank only has the relation between it and considerin practice this means ignoring a number of the subject dependencies output by the ccg parser which is implemented by annotating the relevant lexical categories plus argument slot in the markedup file with an ignore markeranother example where the dependencies differ in the two resources is the treatment of relative pronounsfor example in sen mitchell who had proposed the streamlining the subject of proposed is mitchell in ccgbank but who in depbankagain we implemented this change by fixing the head annotation in the lexical categories which apply to relative pronounsin summary considerable changes were required to the markedup file in order to bring the dependency annotations of ccgbank and depbank closer togetherthe major types of changes have been described here but not all the detailsdespite the considerable changes made to the parser output described in the previous section there were still significant differences between the grs created from the ccg dependencies and the depbank grsto obtain some idea of whether the schemes were converging we performed the following oracle experimentwe took the ccg derivations from ccgbank corresponding to the sentences in depbank and ran the parser over the goldstandard derivations outputting the newly created grs9 treating the depbank grs as a gold standard and comparing these with the ccgbank grs gave precision and recall scores of only 7623 and 7956 respectivelythus given the current mapping the perfect ccgbank parser would achieve an fscore of only 7786 when evaluated against depbankon inspecting the output it was clear that a number of general rules could be applied to bring the schemes closer together which we implemented as a python postprocessing scriptwe now provide a description of some of the major changes to give an indication of the kinds of rules we implementedwe tried to keep the changes as general as possible and not specific to the test set although some rules such as the handling of monetary amounts are 
genre-specific. We decided to include these rules because they are trivial to implement and significantly affect the score, and we felt that without these changes the CCG parser would be unfairly penalized.

The first set of changes deals with coordination. One significant difference between DepBank and CCGbank is the treatment of coordinations as arguments. Consider the example The president and chief executive officer said the loss stems from several factors. In both CCGbank and DepBank there are two conj GRs arising from the coordination. (CCGbank does not contain GRs in this form, although we will continue to talk as though it does; these are the GRs after the CCGbank dependencies have been put through the dependency-to-GRs mapping.) The difference arises in the subject of said: in DepBank the subject is the coordination term and, whereas in CCGbank there are two subjects. We deal with this problem by replacing any pairs of GRs which differ only in their arguments, and where the arguments are coordinated items, with a single GR containing the coordination term as the argument. Two arguments are coordinated if they appear in conj relations with the same coordinating term, where same term is determined by both the word and its sentence position. Another source of conj errors is coordination terms acting as sentential modifiers, with category S/S, often at the beginning of a sentence. These are labeled conj in DepBank, but the GR for S/S is ncmod; so any ncmod whose modifier's lexical category is S/S, and whose POS tag is CC, is changed to conj. Ampersands are also a significant problem, and occur frequently in WSJ text. For example, the CCGbank analysis of Standard & Poor's index assigns the lexical category N/N to both Standard and &, treating them as modifiers of Poor, whereas DepBank treats & as a coordinating term. We fixed this by creating conj GRs between any & and the two words on either side, removing the modifier GR between the two words, and replacing any GRs in which the words on either side of the & are arguments with a single GR in which & is the argument.

The ta relation, which identifies text adjuncts delimited by punctuation, is difficult to assign correctly to the parser output. The simple punctuation rules used by the parser, and derived from CCGbank, do not contain enough information to distinguish between the various cases of ta. Thus the only rule we have implemented, which is somewhat specific to the newspaper genre, is to replace GRs of a particular form with the corresponding ta GR, where say can be any of say, said, or says. This rule applies to only a small subset of the ta cases, but has high enough precision to be worthy of inclusion.

A common source of error is the distinction between iobj and ncmod, which is not surprising given the difficulty that human annotators have in distinguishing arguments and adjuncts. There are many cases where an argument in DepBank is an adjunct in CCGbank, and vice versa. The only change we have made is to turn all ncmod GRs with of as the modifier into iobj GRs; this was found to have high precision and applies to a significant number of cases. There are some dependencies in CCGbank which do not appear in DepBank; examples include any dependencies in which a punctuation mark is one of the arguments, and so we removed these from the output of the parser.
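The second stage, the post-processing script, consists of rules of the kind just described. The sketch below illustrates two of them (the S/S coordination-term rule and the ncmod-with-of rule) plus the punctuation filter; it is not the actual script, and the dictionary representation of a GR, the field names, and the way categories and POS tags are looked up are assumptions made for the example.

```python
# An illustrative sketch of post-processing rules of the kind described
# above (not the released script). GRs are dicts; field names are assumed.

PUNCT = {",", ";", ":", ".", "--"}

def postprocess(grs, lex_cat, pos_tag):
    """grs: list of dicts like {"label": "ncmod", "head": h, "dep": d}.
    lex_cat(word) and pos_tag(word) return that word's category / POS tag."""
    out = []
    for gr in grs:
        label, head, dep = gr["label"], gr["head"], gr["dep"]
        # drop dependencies in which a punctuation mark is an argument
        if head in PUNCT or dep in PUNCT:
            continue
        # sentence-initial coordination terms: ncmod whose modifier has
        # category S/S and POS tag CC is relabeled conj
        if label == "ncmod" and lex_cat(dep) == "S/S" and pos_tag(dep) == "CC":
            out.append({"label": "conj", "head": head, "dep": dep})
        # ncmod with "of" as the modifier becomes iobj
        elif label == "ncmod" and dep.lower() == "of":
            out.append({"label": "iobj", "head": head, "dep": dep})
        else:
            out.append(gr)
    return out

grs = [{"label": "ncmod", "head": "parent", "dep": "of"}]
print(postprocess(grs, lex_cat=lambda w: "PP/NP", pos_tag=lambda w: "IN"))
# [{'label': 'iobj', 'head': 'parent', 'dep': 'of'}]
```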
We have made some attempt to fill the subtype slot for some GRs. The subtype slot specifies additional information about the GR; examples include the value obj in a passive ncsubj, indicating that the subject is an underlying object; the value num in ncmod, indicating a numerical quantity; and prt in ncmod, indicating a verb particle. The passive case is identified as follows: any lexical category which starts S[pss]\NP indicates a passive verb, and we also mark any verbs POS tagged VBN and assigned the lexical category N/N as passive. Both these rules have high precision, but still leave many of the cases in DepBank unidentified. Many of those remaining are POS tagged JJ and assigned the lexical category N/N, but this is also true of many non-passive modifiers, so we did not attempt to extend these rules further. The numerical case is identified using two rules: the num subtype is added if any argument in a GR is assigned the lexical category N/N[num], and if any of the arguments in an ncmod is POS tagged CD. prt is added to an ncmod if the modifiee has a POS tag beginning V and the modifier has POS tag RP.

We are not advocating that any of these post-processing rules should form part of a parser. It would be preferable to have the required information in the treebank from which the grammar is extracted, so that it could be integrated into the parser in a principled way. However, in order that the parser evaluation be as fair and informative as possible, it is important that the parser output conform as closely to the gold standard as possible; thus it is appropriate to use any general transformation rules, as long as they are simple and not specific to the test set, to achieve this.

The final columns of Table 16 show the accuracy of the transformed gold-standard CCGbank dependencies when compared with DepBank: the simple post-processing rules have increased the F-score from 77.86 to 84.76. However, note that this F-score provides an upper bound on the performance of the CCG parser, and that this score is still below the F-scores reported earlier when evaluating the parser output against CCGbank. Section 11.4 contains more discussion of this issue.

The results in Table 16 were obtained by parsing the sentences from CCGbank corresponding to those in the 560-sentence test set used by Briscoe, Carroll, and Watson. We used the CCGbank sentences because these differ in some ways from the original Penn Treebank sentences, and the parser has been trained on CCGbank. Even here we experienced some unexpected difficulties, because some of the tokenization is different between DepBank and CCGbank, and there are some sentences in DepBank which have been significantly shortened compared to the original Penn Treebank sentences. We modified the CCGbank sentences (and the CCGbank analyses, because these were used for the oracle experiments) to be as close to the DepBank sentences as possible. All the results were obtained using the RASP evaluation scripts, with the results for the RASP parser taken from Briscoe, Carroll, and Watson; the results for CCGbank were obtained using the oracle method described previously.

Table 16. Accuracy on DepBank. P, R, and F are labeled precision, recall, and F-score, where F is the balanced harmonic mean of precision and recall, 2PR/(P + R); # GRs is the number of GRs in DepBank.

                     RASP                  CCG parser              CCGbank
                 P      R      F        P      R      F        P      R      F    # GRs
aux            93.33  91.00  92.15    95.03  90.75  92.84    96.47  90.33  93.30     400
conj           72.39  72.27  72.33    79.02  75.97  77.46    83.07  80.27  81.65     595
ta             42.61  51.37  46.58    51.52  11.64  18.99    62.07  12.59  20.93     292
det            87.73  90.48  89.09    95.23  94.97  95.10    97.27  94.09  95.66   1,114
arg mod        79.18  75.47  77.28    81.46  81.76  81.61    86.75  84.19  85.45   8,295
mod            74.43  67.78  70.95    71.30  77.23  74.14    77.83  79.65  78.73   3,908
ncmod          75.72  69.94  72.72    73.36  78.96  76.05    78.88  80.64  79.75   3,550
xmod           53.21  46.63  49.70    42.67  53.93  47.64    56.54  60.67  58.54     178
cmod           45.95  30.36  36.56    51.34  57.14  54.08    64.77  69.09  66.86     168
pmod           30.77  33.33  32.00     0.00   0.00   0.00     0.00   0.00   0.00      12
arg            77.42  76.45  76.94    85.76  80.01  82.78    89.79  82.91  86.21   4,387
subj or dobj   82.36  74.51  78.24    86.08  83.08  84.56    91.01  85.29  88.06   3,127
subj           78.55  66.91  72.27    84.08  75.57  79.60    89.07  78.43  83.41   1,363
ncsubj         79.16  67.06  72.61    83.89  75.78  79.63    88.86  78.51  83.37   1,354
xsubj          33.33  28.57  30.77     0.00   0.00   0.00    50.00  28.57  36.36       7
csubj          12.50  50.00  20.00     0.00   0.00   0.00     0.00   0.00   0.00       2
comp           75.89  79.53  77.67    86.16  81.71  83.88    89.92  84.74  87.25   3,024
obj            79.49  79.42  79.46    86.30  83.08  84.66    90.42  85.52  87.90   2,328
dobj           83.63  79.08  81.29    87.01  88.44  87.71    92.11  90.32  91.21   1,764
obj2           23.08  30.00  26.09    68.42  65.00  66.67    66.67  60.00  63.16      20
iobj           70.77  76.10  73.34    83.22  65.63  73.38    83.59  69.81  76.08     544
clausal        60.98  74.40  67.02    77.67  72.47  74.98    80.35  77.54  78.92     672
xcomp          76.88  77.69  77.28    77.69  74.02  75.81    80.00  78.49  79.24     381
ccomp          46.44  69.42  55.55    77.27  70.10  73.51    80.81  76.31  78.49     291
pcomp          72.73  66.67  69.57     0.00   0.00   0.00     0.00   0.00   0.00      24
macroaverage   62.12  63.77  62.94    65.71  62.29  63.95    71.73  65.85  68.67
microaverage   77.66  74.98  76.29    81.95  80.35  81.14    86.86  82.75  84.76

The CCG parser results are based on automatically assigned POS tags, using the Curran and Clark tagger. For the parser we used the hybrid dependency model and the maximum-recall decoder, because this obtained the highest accuracy on CCGbank, with the same parser and supertagger parameter settings as described in Section 10.2. (The results reported in Clark and Curran differ from those here because Clark and Curran used the normal-form model and Viterbi decoder.) The coverage of the parser on DepBank is 100%. The coverage of the RASP parser is also 100%: 84% of the analyses are complete parses rooted in S, and the rest are obtained using a robustness technique based on fragmentary analyses. The coverage for the oracle experiments is less than 100%, since there are some gold-standard derivations in CCGbank which the parser is unable to follow exactly, because the grammar rules used by the parser are a subset of those in CCGbank. The oracle figures are based only on those sentences for which there is a gold-standard analysis, because we wanted to measure how close the two resources are and provide an approximate upper bound for the parser.

For a GR in the parser output to be correct, it has to match the gold-standard GR exactly, including any subtype slots; however, it is possible for a GR to be incorrect at one level but correct at a subsuming level. For example, if an ncmod GR is incorrectly labeled with xmod, but is otherwise correct, it will be correct for all levels which subsume both ncmod and xmod, for example mod. Thus the scores at the most general level in the GR hierarchy correspond to unlabeled accuracy scores. The micro-averaged scores are calculated by aggregating the counts for all the relations in the hierarchy, whereas the macro-averaged scores are the mean of the individual scores for each relation.

The results show that the performance of the CCG parser is higher than RASP overall, and also higher on the majority of GR types. Relations on which the CCG parser performs particularly well relative to RASP are conj, det, ncmod, cmod, ncsubj, dobj, obj2, and ccomp. The relations for which the CCG parser performs poorly are some of the less frequent relations: ta, pmod, xsubj, csubj, and pcomp; in fact pmod and pcomp are not in the current CCG dependencies-to-GRs mapping. The overall F-score for the CCG parser, 81.14, is only 3.6 points below that for CCGbank, which provides an upper bound for the CCG parser. Briscoe and Carroll give a rough comparison of RASP with the PARC LFG parser on DepBank, obtaining similar results overall, but acknowledging that the results are not strictly comparable because of the different annotation schemes used. We might expect the CCG parser to perform better
than rasp on this data because rasp is not tuned to newspaper text and uses an unlexicalized parsing modelon the other hand the relatively low upper bound for the ccg parser on depbank demonstrates the considerable disadvantage of evaluating on a resource which uses a different annotation scheme to the parserour feeling is that the overall fscore on depbank understates the accuracy of the ccg parser because of the information lost in the translationone aspect of the ccgbank evaluation which is more demanding than the depbank evaluation is the set of labeled dependencies usedin ccgbank there are many more labeled dependencies than grs in depbank because a dependency is defined as a lexical categoryargument slot pairin ccgbank there is a distinction between the direct object of a transitive verb and ditransitive verb for example whereas in depbank these would both be dobjin other words to get a dependency correct in the ccgbank evaluation the lexical categorytypically a subcategorization framehas to be correctin a final experiment we used the grs generated by transforming ccgbank as a gold standard against which we compared the grs from the transformed parser outputthe resulting fscore of 8960 shows the increase obtained from using goldstandard grs generated from ccgbank rather than the ccgbank dependencies themselves another difference between depbank and ccgbank is that depbank has been manually corrected whereas ccgbank including the test sections has been produced semiautomatically from the penn treebankthere are some constructions in ccgbank noun compounds being a prominent examplewhich are often incorrectly analyzed simply because the required information is not in the penn treebankthus the evaluation on ccgbank overstates the accuracy of the parser because it is tuned to produce the output in ccgbank including constructions where the analysis is incorrecta similar comment would apply to other parsers evaluated on and using grammars extracted from the penn treebanka contribution of this section has been to highlight the difficulties associated with crossformalism parser comparisonsnote that the difficulties are not unique to ccg and many would apply to any crossformalism comparison especially with parsers using automatically extracted grammarsparser evaluation has improved on the original parseval measures but the challenge still remains to develop a representation and evaluation suite which can be easily applied to a wide variety of parsers and formalismsone of the key questions currently facing researchers in statistical parsing is how to adapt existing parsers to new domainsthere is some experimental evidence showing that perhaps not surprisingly the performance of parsers trained on the wsj penn treebank drops significantly when the parser is applied to domains outside of newspaper text the difficulty is that developing new treebanks for each of these domains is infeasibledeveloping the techniques to extract a ccg grammar from the penn treebank together with the preprocessing of the penn treebank which was required took a number of years and developing the penn treebank itself also took a number of yearsclark steedman and curran applied the parser described in this article to questions from the trec question answering trackbecause of the small number of questions in the penn treebank the performance of the parser was extremely poor well below that required for a working qa systemthe novel idea in clark steedman and curran was to create new training data from questions but to 
annotate at the lexical category level only rather than annotate with full derivationsthe idea is that because lexical categories contain so much syntactic information adapting just the supertagger to the new domain by training on the new question data may be enough to obtain good parsing performancethis technique assumes that annotation at the lexical category level can be done relatively quickly allowing rapid porting of the supertaggerwe were able to annotate approximately 1 000 questions in around a week which led to an accurate supertagger and combined with the penn treebank parsing model an accurate parser of questionsthere are ways in which this porting technique can be extendedfor example we have developed a method for training the dependency model which requires lexical category data only partial dependency structures are extracted from the lexical category sequences and the training algorithm for the dependency model is extended to deal with partial dataremarkably the accuracy of the dependency model trained on data derived from lexical category sequences alone is only 13 labeled fscore less than the full data modelthis result demonstrates the significant amount of syntactic information encoded in the lexical categoriesfuture work will look at applying this method to biomedical textwe have shown how using automatically assigned pos tags reduces the accuracy of the supertagger and parserin curran clark and vadas we investigate using the multitagging techniques developed for the supertagger at the pos tag levelthe idea is to maintain some pos tag ambiguity for later parts of the parsing process using the tag probabilities to decide which tags to maintainwe were able to reduce the drop clark and curran widecoverage efficient statistical parsing in supertagger accuracy by roughly one halffuture work will also look at maintaing the pos tag ambiguity through to the parsing stagecurrently we do not use the probabilities assigned to the lexical categories by the supertagger as part of the parse selection processthese scores could be incorporated as realvalued features or as auxiliary functions as in johnson and riezler we would also like to investigate using the generative model of hockenmaier and steedman in a similar wayusing a generative models score as a feature in a discriminative framework has been beneficial for reranking approaches because the generative model uses local features similar to those in our loglinear models it could be incorporated into the estimation and decoding processes without the need for rerankingone way of improving the accuracy of a supertagger is to use the parser to provide large amounts of additional training data by taking the lexical categories chosen by the parser as goldstandard training dataif enough unlabeled data is parsed then the large volume can overcome the noise in the data we plan to investigate this idea in the context of our own parsing systemthis article has shown how to estimate a loglinear parsing model for an automatically extracted ccg grammar on a very large scalethe techniques that we have developed including the use of a supertagger to limit the size of the charts and the use of parallel estimation could be applied to loglinear parsing models using other grammar formalismsdespite memory requirements of up to 25 gb we have shown how a parallelized version of the estimation process can limit the estimation time to under three hours resulting in a practical framework for parser developmentone of the problems with modeling approaches 
which require very long estimation times is that it is difficult to test different configurations of the system for example different feature setsit may also not be possible to train or run the system on anything other than short sentences the supertagger is a key component in our parsing systemit reduces the size of the charts considerably compared with naive methods for assigning lexical categories which is crucial for practical discriminative trainingthe tight integration of the supertagger and parser enables highly efficient as well as accurate parsingthe parser is significantly faster than comparable parsers in the nlp literaturethe supertagger we have developed can be applied to other lexicalized grammar formalismsanother contribution of the article is the development of loglinear parsing models for ccgin particular we have shown how to define a ccg parsing model which exploits all derivations including nonstandard derivationsthese nonstandard derivations are an integral part of the formalism and we have answered the question of whether efficent estimation and parsing algorithms can be defined for models which use these derivationswe have also defined a new parsing algorithm for ccg which maximizes expected recall of predicateargument dependenciesthis algorithm when combined with normalform constraints gives the highest parsing accuracy to date on ccgbankwe have also given competitive results on depbank outperforming a nonccg parser despite the considerable difficulties involved in evaluating on a gold standard which uses a different annotation scheme to the parserthere has perhaps been a perception in the nlp community that parsing with ccg is necessarily ineffficient because of ccgs spurious ambiguitywe have demonstrated using stateoftheart statistical models that both accurate and highly efficient parsing is practical with ccglinguistically motivated grammars can now be used for largescale nlp applications12
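The conclusions above stress that the supertagger is what makes both the discriminative training and the fast parsing practical. As a concrete illustration, here is a minimal, self-contained Python sketch of the category-assignment rule described earlier: a word receives every lexical category whose probability is within a factor β of the most probable category for that word. The probability table is invented for illustration and is not output from the actual supertagger.

```python
# A minimal sketch of the supertagger's beta multi-tagging rule: keep every
# category whose probability is within a factor beta of the best category.
# The probabilities below are invented; this is not C&C supertagger output.

def assign_categories(cat_probs, beta):
    """cat_probs: dict mapping a lexical category to its probability for
    one word. Returns the categories within beta of the best category."""
    best = max(cat_probs.values())
    return [c for c, p in cat_probs.items() if p >= beta * best]

word_probs = {"N": 0.55, "N/N": 0.40, "(S[dcl]\\NP)/NP": 0.02}
for beta in (0.075, 0.01):
    print(beta, sorted(assign_categories(word_probs, beta)))
# beta = 0.075 keeps only N and N/N; beta = 0.01 also keeps the verb
# category, so smaller beta values mean more categories and larger charts.
```

In the full system this rule sits inside the adaptive strategy described in Section 10.3: parsing starts at the most restrictive β and relaxes it only when no spanning analysis is found, while training tightens β whenever the packed chart exceeds its size limit.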
J07-4004
Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models. This article describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are full parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement, which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a maximum entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible and also leads to a highly efficient parser. Surprisingly, given CCG's spurious ambiguity, the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG's nonstandard derivations. This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG. From a parsing perspective, the C&C parser has been shown to be competitive with state-of-the-art statistical parsers on a variety of test suites, including those consisting of grammatical relations, Penn Treebank phrase structure trees, and unbounded dependencies.
modeling local coherence an entitybased approach this article proposes a novel framework for representing and measuring local coherence central to this approach is the entitygrid representation of discourse which captures patterns of entity distribution in a text the algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional syntactic and referential information about discourse entities we reconceptualize coherence assessment as a learning task and show that our entitybased representation is wellsuited for rankingbased generation and text classification tasks using the proposed representation we achieve good performance on text ordering summary coherence evaluation and readability assessment this article proposes a novel framework for representing and measuring local coherencecentral to this approach is the entitygrid representation of discourse which captures patterns of entity distribution in a textthe algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional syntactic and referential information about discourse entitieswe reconceptualize coherence assessment as a learning task and show that our entitybased representation is wellsuited for rankingbased generation and text classification tasksusing the proposed representation we achieve good performance on text ordering summary coherence evaluation and readability assessmenta key requirement for any system that produces text is the coherence of its outputnot surprisingly a variety of coherence theories have been developed over the years and their principles have found application in many symbolic text generation systems the ability of these systems to generate high quality text almost indistinguishable from human writing makes the incorporation of coherence theories in robust largescale systems particularly appealingthe task is however challenging considering that most previous efforts have relied on handcrafted rules valid only for limited domains with no guarantee of scalability or portability furthermore coherence constraints are often embedded in complex representations which are hard to implement in a robust applicationthis article focuses on local coherence which captures text relatedness at the level of sentencetosentence transitionslocal coherence is undoubtedly necessary for global coherence and has received considerable attention in computational linguistics argue that local coherence is the primary source of inferencemaking during readingthe key premise of our work is that the distribution of entities in locally coherent texts exhibits certain regularitiesthis assumption is not arbitrarysome of these regularities have been recognized in centering theory and other entitybased theories of discourse the algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences a representation that reflects distributional syntactic and referential information about discourse entitieswe argue that the proposed entitybased representation of discourse allows us to learn the properties of coherent texts from a corpus without recourse to manual annotation or a predefined knowledge basewe demonstrate the usefulness of this representation by testing its predictive power in three applications text ordering automatic evaluation of summary coherence and readability assessmentwe formulate the first two problemstext ordering and summary evaluationas ranking 
problems and present an efficiently learnable model that ranks alternative renderings of the same information based on their degree of local coherencesuch a mechanism is particularly appropriate for generation and summarization systems as they can produce multiple text realizations of the same underlying content either by varying parameter values or by relaxing constraints that control the generation processa system equipped with a ranking mechanism could compare the quality of the candidate outputs in much the same way speech recognizers employ language models at the sentence levelin the textordering task our algorithm has to select a maximally coherent sentence order from a set of candidate permutationsin the summary evaluation task we compare the rankings produced by the model against human coherence judgments elicited for automatically generated summariesin both experiments our method yields improvements over stateoftheart modelswe also show the benefits of the entitybased representation in a readability assessment task where the goal is to predict the comprehension difficulty of a given textin contrast to existing systems which focus on intrasentential features we explore the contribution of discourselevel features to this taskby incorporating coherence features stemming from the proposed entitybased representation we improve the performance of a stateoftheart readability assessment system in the following section we provide an overview of entitybased theories of local coherence and outline previous work on its computational treatmentthen we introduce our entitybased representation and define its linguistic propertiesin the subsequent sections we present our three evaluation tasks and report the results of our experimentsdiscussion of the results concludes the articleour approach is inspired by entitybased theories of local coherence and is wellsuited for developing a coherence metric in the context of a rankingbased text generation systemwe first summarize entitybased theories of discourse and overview previous attempts for translating their underlying principles into computational coherence modelsnext we describe ranking approaches to natural language generation and focus on coherence metrics used in current text plannerslinguistic modelingentitybased accounts of local coherence have a long tradition within the linguistic and cognitive science literature a unifying assumption underlying different approaches is that discourse coherence is achieved in view of the way discourse entities are introduced and discussedthis observation is commonly formalized by devising constraints on the linguistic realization and distribution of discourse entities in coherent textsat any point in the discourse some entities are considered more salient than others and consequently are expected to exhibit different propertiesin centering theory salience concerns how entities are realized in an utterance in other theories salience is defined in terms of topicality predictability and cognitive accessibility more refined accounts expand the notion of salience from a binary distinction to a scalar one examples include princes familiarity scale and givons and ariels givennesscontinuumthe salience status of an entity is often reflected in its grammatical function and the linguistic form of its subsequent mentionssalient entities are more likely to appear in prominent syntactic positions and to be introduced in a main clausethe linguistic realization of subsequent mentionsin particular pronominalizationis so 
tightly linked to salience that in some theories it provides the sole basis for defining a salience hierarchythe hypothesis is that the degree of underspecification in a referring expression indicates the topical status of its antecedent in centering theory this phenomenon is captured in the pronoun rule and givons scale of topicality and ariels accessibility marking scale propose a graded hierarchy of underspecification that ranges from zero anaphora to full noun phrases and includes stressed and unstressed pronouns demonstratives with modifiers and definite descriptionsentitybased theories capture coherence by characterizing the distribution of entities across discourse utterances distinguishing between salient entities and the restthe intuition here is that texts about the same discourse entity are perceived to be more coherent than texts fraught with abrupt switches from one topic to the nextthe patterned distribution of discourse entities is a natural consequence of topic continuity observed in a coherent textcentering theory formalizes fluctuations in topic continuity in terms of transitions between adjacent utterancesthe transitions are ranked that is texts demonstrating certain types of transitions are deemed more coherent than texts where such transitions are absent or infrequentfor example continue transitions require that two utterances have at least one entity in common and are preferred over transitions that repeatedly shift from one entity to the othergivons and hoeys accounts of discourse continuity complement local measurements by considering global characteristics of entity distribution such as the lifetime of an entity in discourse and the referential distance between subsequent mentionscomputational modelingan important practical question is how to translate principles of these linguistic theories into a robust coherence metrica great deal of research has been devoted to this issue primarily in centering theory such translation is challenging in several respects one has to determine ways of combining the effects of various constraints and to instantiate parameters of the theory that are often left underspecifiedpoesio et al note that even for fundamental concepts of centering theory such as utterance realization and ranking multipleand often contradictoryinterpretations have been developed over the years because in the original theory these concepts are not explicitly fleshed outfor instance in some centering papers entities are ranked with respect to their grammatical function and in others with respect to their position in princes givenness hierarchy or their thematic role as a result two instantiations of the same theory make different predictions for the same inputpoesio et al explore alternative specifications proposed in the literature and demonstrate that the predictive power of the theory is highly sensitive to its parameter definitionsa common methodology for translating entitybased theories into computational models is to evaluate alternative specifications on manually annotated corporasome studies aim to find an instantiation of parameters that is most consistent with observable data other studies adopt a specific instantiation with the goal of improving the performance of a metric on a taskfor instance miltsakaki and kukich annotate a corpus of student essays with entity transition information and show that the distribution of transitions correlates with human gradesanalogously hasler investigates whether centering theory can be used in evaluating the 
readability of automatic summaries by annotating human and machine generated extracts with entity transition informationthe present work differs from these approaches in goal and methodologyalthough our work builds upon existing linguistic theories we do not aim to directly implement or refine any of them in particularwe provide our model with sources of knowledge identified as essential by these theories and leave it to the inference procedure to determine the parameter values and an optimal way to combine themfrom a design viewpoint we emphasize automatic computation for both the underlying discourse representation and the inference procedurethus our work is complementary to computational models developed on manually annotated data automatic albeit noisy feature extraction allows us to perform a large scale evaluation of differently instantiated coherence models across genres and applicationsranking approaches have enjoyed an increasing popularity at all stages in the generation pipeline ranging from text planning to surface realization in this framework an underlying system produces a potentially large set of candidate outputs with respect to various text generation rules encoded as hard constraintsnot all of the resulting alternatives will correspond to wellformed texts and of those which may be judged acceptable some will be preferable to othersthe candidate generation phase is followed by an assessment phase in which the candidates are ranked based on a set of desirable properties encoded in a ranking functionthe topranked candidate is selected for presentationa twostage generateandrank architecture circumvents the complexity of traditional generation systems where numerous often conflicting constraints have to be encoded during development in order to produce a single highquality outputbecause the focus of our work is on text coherence we discuss here ranking approaches applied to text planning the goal of text planning is to determine the content of a text by selecting a set of informationbearing units and arranging them into a structure that yields wellformed outputdepending on the system text plans are represented as discourse trees or linear sequences of propositions candidate text structures may differ in terms of the selected propositions the sequence in which facts are presented the topology of the tree or the order in which entities are introduceda set of plausible candidates can be created via stochastic search or by a symbolic text planner following different textformation rules the best candidate is chosen using an evaluation or ranking function often encoding coherence constraintsalthough the type and complexity of constraints vary greatly across systems they are commonly inspired by rhetorical structure theory or entitybased constraints similar to the ones captured by our methodfor instance the ranking function used by mellish et al gives preference to plans where consecutive facts mention the same entities and is sensitive to the syntactic environment in which the entity is first introduced karamanis finds that a ranking function based solely on the principle of continuity achieves competitive performance against more sophisticated alternatives when applied to ordering short descriptions of museum artifacts1 in other applications the ranking function is more complex integrating rules from centering theory along with stylistic constraints a common feature of current implementations is that the specification of the ranking functionfeature selection and weightingis 
performed manually based on the intuition of the system developerhowever even in a limited domain this task has proven difficultmellish et al note the problem is far too complex and our knowledge of the issues involved so meager that only a token gesture can be made at this point moreover these ranking functions operate over semantically rich input representations that cannot be created automatically without extensive knowledge engineeringthe need for manual coding impairs the portability of existing methods for coherence ranking to new applications most notably to texttotext generation applications such as summarizationin the next section we present a method for coherence assessment that overcomes these limitations we introduce an entitybased representation of discourse that is automatically computed from raw text we argue that the proposed representation reveals entity transition patterns characteristic of coherent textsthe latter can be easily translated into a large feature space which lends itself naturally to the effective learning of a ranking function without explicit manual involvementin this section we describe our entitybased representation of discoursewe explain how it is computed and how entity transition patterns are extractedwe also discuss how these patterns can be encoded as feature vectors appropriate for performing coherencerelated ranking and classification taskseach text is represented by an entity grid a twodimensional array that captures the distribution of discourse entities across text sentenceswe follow miltsakaki and kukich in assuming that our unit of analysis is the traditional sentence the rows of the grid correspond to sentences and the columns correspond to discourse entitiesby discourse entity we mean a class of coreferent noun phrases for each occurrence of a discourse entity in the text the corresponding grid cell contains information about its presence or absence in a sequence of sentencesin addition for entities present in a given sentence grid cells contain information about their syntactic rolesuch information can be expressed in many ways because grammatical relations figure prominently in entitybased theories of local coherence they serve as a logical point of departureeach grid cell thus corresponds to a string from a set of categories reflecting whether the entity in question is a subject object or neither entities absent from a sentence are signaled by gaps grammatical role information can be extracted from the output of a broadcoverage dependency parser or any stateofthe art statistical parser we discuss how this information was computed for our experiments in section 33table 1 illustrates a fragment of an entity grid constructed for the text in table 2because the text contains six sentences the grid columns are of length sixconsider for instance the grid column for the entity trial o xit records that trial is present in sentences 1 and 6 but is absent from the rest of the sentencesalso note that the grid in table 1 takes coreference resolution into accounteven though the same entity appears in different linguistic forms for example microsoft corp microsoft and the company it is mapped to a single entry in the grid a fragment of the entity gridnoun phrases are represented by their head nounsgrid cells correspond to grammatical roles subjects objects or neither when a noun is attested more than once with a different grammatical role in the same sentence we default to the role with the highest grammatical ranking subjects are ranked higher than 
objects which in turn are ranked higher than the restfor example the entity microsoft is mentioned twice in sentence 1 with the grammatical roles x and s but is represented only by s in the grid a fundamental assumption underlying our approach is that the distribution of entities in coherent texts exhibits certain regularities reflected in grid topologysome of these regularities are formalized in centering theory as constraints on transitions of the local focus in adjacent sentencesgrids of coherent texts are likely to have some dense columns and many sparse columns which will consist mostly of gaps one would further expect that entities corresponding to dense columns are more often subjects or objectsthese characteristics will be less pronounced in lowcoherence textsinspired by centering theory our analysis revolves around patterns of local entity transitionsa local entity transition is a sequence s o x n that represents entity occurrences and their syntactic roles in n adjacent sentenceslocal transitions can be easily obtained from a grid as continuous subsequences of each columneach transition will have a certain probability in a given gridfor instance the probability of the transition s in the grid from table 1 is 008 each text can thus be viewed as a distribution defined over transition typeswe can now go one step further and represent each text by a fixed set of transition sequences using a standard feature vector notationeach grid rendering j of a document di corresponds to a feature vector φ p2 pm where m is the number of all predefined entity transitions and pt the probability of transition t in grid xijthis feature vector representation is usefully amenable to machine learning algorithms furthermore it allows the consideration of large numbers of transitions which could potentially uncover novel entity distribution patterns relevant for coherence assessment or other coherencerelated tasksnote that considerable latitude is available when specifying the transition types to be included in a feature vectorthese can be all transitions of a given length or the most frequent transitions within a document collectionan example of one of the central research issues in developing entitybased models of coherence is determining what sources of linguistic knowledge are essential for accurate prediction and how to encode them succinctly in a discourse representationprevious approaches tend to agree on the features of entity distribution related to local coherencethe disagreement lies in the way these features are modeledour study of alternative encodings is not a mere duplication of previous efforts that focus on linguistic aspects of parameterizationbecause we are interested in an automatically constructed model we have to take into account computational and learning issues when considering alternative representationstherefore our exploration of the parameter space is guided by three considerations the linguistic importance of a parameter the accuracy of its automatic computation and the size of the resulting feature spacefrom the linguistic side we focus on properties of entity distribution that are tightly linked to local coherence and at the same time allow for multiple interpretations during the encoding processcomputational considerations prevent us from considering discourse representations that cannot be computed reliably by existing toolsfor instance we could not experiment with the granularity of an utterance sentence versus clausebecause available clause separators introduce 
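For concreteness, the transition-based encoding just described can be sketched as follows. This is a minimal illustration rather than the implementation used in our experiments: it assumes a grid is stored as a mapping from each entity to its sequence of per-sentence categories, and it computes the probability of every length-n transition (e.g., a transition such as [S -] occurring with probability 0.08 in the grid of Table 1) together with the fixed-length feature vector. The function names, the toy grid, and the default n = 2 are illustrative assumptions.

```python
from itertools import product

CATEGORIES = ("S", "O", "X", "-")

def column_transitions(column, n=2):
    """All length-n contiguous subsequences of one grid column
    (one entity's history across sentences)."""
    return [tuple(column[i:i + n]) for i in range(len(column) - n + 1)]

def transition_probabilities(grid, n=2):
    """Relative frequency of every possible length-n transition,
    computed over all columns of the grid."""
    counts = {t: 0 for t in product(CATEGORIES, repeat=n)}
    total = 0
    for column in grid.values():
        for t in column_transitions(column, n):
            counts[t] += 1
            total += 1
    return {t: (c / total if total else 0.0) for t, c in counts.items()}

def feature_vector(grid, n=2):
    """Fixed-length representation: one probability per predefined
    transition type, listed in a deterministic order."""
    probs = transition_probabilities(grid, n)
    return [probs[t] for t in sorted(probs)]

# Toy grid (entity -> categories per sentence); the "trial" column follows
# the Table 1 example, the second row is purely illustrative.
toy_grid = {"trial": ["O", "-", "-", "-", "-", "X"],
            "microsoft": ["S", "O", "S", "-", "-", "-"]}
print(feature_vector(toy_grid)[:4])
```

Because all 4^n transition types are enumerated up front, the resulting vectors have the same length and ordering for every document, which is what makes them directly comparable in the ranking experiments described below.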
substantial noise into a grid constructionfinally we exclude representations that will explode the size of the feature space thereby increasing the amount of data required for training the modelentity extractionthe accurate computation of entity classes is key to computing meaningful entity gridsin previous implementations of entitybased models classes of coreferent nouns have been extracted manually but this is not an option for our modelan obvious solution for identifying entity classes is to employ an automatic coreference resolution tool that determines which noun phrases refer to the same entity in a documentcurrent approaches recast coreference resolution as a classification taska pair of nps is classified as coreferring or not based on constraints that are learned from an annotated corpusa separate clustering mechanism then coordinates the possibly contradictory pairwise classifications and constructs a partition on the set of npsin our experiments we employ ng and cardies coreference resolution systemthe system decides whether two nps are coreferent by exploiting a wealth of lexical grammatical semantic and positional featuresit is trained on the muc data sets and yields stateoftheart performance example of a featurevector document representation using all transitions of length two given syntactic categories s o x and barzilay and lapata modeling local coherence although machine learning approaches to coreference resolution have been reasonably successfulstateoftheart coreference tools today reach an fmeasure2 of 70 when trained on newspaper textsit is unrealistic to assume that such tools will be readily available for different domains and languageswe therefore consider an additional approach to entity extraction where entity classes are constructed simply by clustering nouns on the basis of their identityin other words each noun in a text corresponds to a different entity in a grid and two nouns are considered coreferent only if they are identicalunder this view microsoft corp from table 2 corresponds to two entities microsoft and corp which are in turn distinct from the companythis approach is only a rough approximation to fully fledged coreference resolution but it is simple from an implementational perspective and produces consistent results across domains and languagesgrammatical functionseveral entitybased approaches assert that grammatical function is indicative of an entitys prominence in discourse most theories discriminate between subject object and the remaining grammatical roles subjects are ranked higher than objects and these are ranked higher than other grammatical functionsin our framework we can easily assess the impact of syntactic knowledge by modifying how transitions are represented in the entity gridin syntactically aware grids transitions are expressed by four categories s o x and whereas in simplified grids we only record whether an entity is present or absent in a sentencewe employ a robust statistical parser to determine the constituent structure for each sentence from which subjects objects and relations other than subject or object are identifiedthe phrasestructure output of collinss parser is transformed into a dependency tree from which grammatical relations are extractedpassive verbs are recognized using a small set of patterns and the underlying deep grammatical role for arguments involved in the passive construction is entered in the grid for more details on the grammatical relations extraction component we refer the interested reader to barzilay 
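The grid construction itself can also be made concrete. The sketch below implements only the simpler, identity-based option for entity extraction described above (each distinct head noun is its own entity), not the full coreference-based pipeline, and it assumes that a parser has already supplied a (head noun, dependency label) pair for each noun phrase; the Stanford-style label names, the role-precedence table, and the function names are assumptions of the sketch rather than part of the original system.

```python
from collections import defaultdict

# Precedence used when an entity occurs more than once in a sentence:
# subjects outrank objects, which outrank everything else.
PRECEDENCE = {"S": 3, "O": 2, "X": 1, "-": 0}

def grid_category(dep_label):
    """Map a dependency label to S, O, or X. The label names are
    assumptions (Stanford-style); adapt them to the parser in use.
    Passive subjects are mapped to O, mirroring the use of deep
    grammatical roles for passive constructions described above."""
    if dep_label in ("nsubj", "csubj"):
        return "S"
    if dep_label in ("nsubjpass", "dobj", "obj", "iobj"):
        return "O"
    return "X"

def build_entity_grid(parsed_sentences):
    """Identity-based entity grid: every distinct head noun is an entity,
    and two mentions corefer only if their lower-cased heads match.
    `parsed_sentences` is a list of sentences, each a list of
    (head_noun, dependency_label) pairs for its noun phrases."""
    n = len(parsed_sentences)
    grid = defaultdict(lambda: ["-"] * n)
    for i, sentence in enumerate(parsed_sentences):
        for head, dep in sentence:
            entity = head.lower()
            cat = grid_category(dep)
            if PRECEDENCE[cat] > PRECEDENCE[grid[entity][i]]:
                grid[entity][i] = cat   # keep the highest-ranked role
    return dict(grid)

# Example: an entity mentioned as subject and in another role in the same
# sentence collapses to S, as in the Microsoft example above.
sents = [[("Microsoft", "nsubj"), ("Microsoft", "prep_of"), ("trial", "dobj")],
         [("company", "nsubj")]]
print(build_entity_grid(sents))
```

Swapping in a coreference resolver only changes how the `entity` key is computed (a cluster identifier instead of the head noun); the rest of the construction, including the role-precedence rule, stays the same.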
saliencecentering and other discourse theories conjecture that the way an entity is introduced and mentioned depends on its global role in a given discoursewe evaluate the impact of salience information by considering two types of models the first model treats all entities uniformly whereas the second one discriminates between transitions of salient entities and the restwe identify salient entities based on their frequency3 following the widely accepted view that frequency of occurrence correlates with discourse prominence to implement a saliencebased model we modify our feature generation procedure by computing transition probabilities for each salience group separately and then combining them into a single feature vectorfor n transitions with k salience classes the feature space will be of size n k while we can easily build a model with multiple salience classes we opt for a binary distinction this is more in line with theoretical accounts of salience and results in a moderate feature space for which reliable parameter estimation is possibleconsidering a large number of salience classes would unavoidably increase the number of featuresparameter estimation in such a space requires a large sample of training examples that is unavailable for most domains and applicationsdifferent classes of models can be defined along the linguistic dimensions just discussedour experiments will consider several models with varying degrees of linguistic complexity while attempting to strike a balance between expressivity of representation and ease of computationin the following sections we evaluate their performance on three tasks sentence ordering summary coherence rating and readability assessmentequipped with the feature vector representation introduced herein we can view coherence assessment as a machine learning problemwhen considering text generation applications it is desirable to rank rather than classify instances there is often no single coherent rendering of a given text but many different possibilities that can be partially orderedit is therefore not surprising that systems often employ scoring functions to select the most coherent output among alternative renderings in this article we argue that encoding texts as entity transition sequences constitutes an appropriate feature set for learning such a ranking function we present two taskbased experiments that put this hypothesis to the test information ordering and summary coherence rating both tasks can be naturally formulated as ranking problems the learner takes as input a set of alternative renderings of the same document and ranks them based on their degree of local coherenceexamples of such renderings are a set of different sentence orderings of the same text and a set of summaries produced by different systems for the same documentnote that in both ranking experiments we assume that the algorithm is provided with a limited number of alternativesin practice the space of candidates can be vast and finding the optimal candidate may require pairing our ranking algorithm with a decoder similar to the ones used in machine translation although the majority of our experiments fall within the generateandrank framework previously sketched nothing prevents the use of our feature vector representation for conventional classification taskswe offer an illustration in experiment 3 where features extracted from entity grids are used to enhance the performance of a readability assessment systemhere the learner takes as input a set of documents labeled with 
discrete classes and learns to make predictions for unseen instances.

Text structuring algorithms are commonly evaluated by their performance at information ordering. The task concerns determining a sequence in which to present a preselected set of information-bearing items; this is an essential step in concept-to-text generation, multidocument summarization, and other text-synthesis problems. The information-bearing items can be database entries, propositions, or sentences. In sentence ordering, a document is viewed as a bag of sentences, and the algorithm's task is to find the ordering which maximizes coherence according to some criterion. As explained previously, we use our coherence model to rank alternative sentence orderings instead of trying to find an optimal ordering. We do not assume that local coherence is sufficient to uniquely determine a maximally coherent ordering; other constraints clearly play a role here. It is nevertheless a key property of well-formed text, and a model which takes it into account should be able to discriminate coherent from incoherent texts. In our sentence-ordering task we generate random permutations of a test document and measure how often a permutation is ranked higher than the original document. A non-deficient model should prefer the original text more frequently than its permutations. We begin by explaining how a ranking function can be learned for the sentence ordering task. Next, we give details regarding the corpus used for our experiments, describe the methods used for comparison with our approach, and note the evaluation metric employed for assessing model performance. Our results are presented in Section 4.3.

Our training set consists of ordered pairs of alternative renderings (xij, xik) of the same document di, where xij exhibits a higher degree of coherence than xik; without loss of generality, we assume j < k. The goal of the training procedure is to find a parameter vector w that yields a ranking score function w · Φ(xij) which minimizes the number of violations of pairwise rankings provided in the training set, where (xij, xik) ∈ R if xij is ranked higher than xik in the optimal ranking R, and Φ(xij) and Φ(xik) are mappings onto features representing the coherence properties of renderings xij and xik. In our case, the features correspond to the entity transition probabilities introduced in Section 3.2. Thus, the ideal ranking function represented by the weight vector w would satisfy the condition

    w · (Φ(xij) − Φ(xik)) > 0   for all pairs (xij, xik) ∈ R.

The problem is typically treated as a Support Vector Machine constraint optimization problem and can be solved using the search technique described in Joachims; this approach has been shown to be highly effective in various tasks ranging from collaborative filtering to parsing. Other discriminative formulations of the ranking problem are possible; however, we leave this to future work.

Table 4
The size of the training and test instances (pairwise rankings) for the Earthquakes and Accidents corpora.

              Training   Testing
Earthquakes    1,896      2,056
Accidents      2,095      2,087

Once the ranking function is learned, unseen renderings of document di can be ranked simply by computing the values w · Φ(xij) and w · Φ(xik) and sorting them accordingly. Here, w is the optimized parameter vector resulting from training.

Data. To acquire a large collection for training and testing, we create synthetic data wherein the candidate set consists of a source document and permutations of its sentences. This framework for data acquisition enables large-scale automatic evaluation and is widely used in assessing ordering algorithms.
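To make the data-generation step concrete, the sketch below produces the permutation-based pairwise instances just described. It is a minimal illustration rather than the pipeline used in the experiments: the function names, the seeding scheme, and the assumption that `feature_fn` returns the entity-transition vector of Section 3.2 are illustrative.

```python
import random

def sentence_permutations(sentences, n_perms, seed=0, max_tries=1000):
    """Up to n_perms distinct random permutations of a document's
    sentences, excluding the original order."""
    rng = random.Random(seed)
    seen = {tuple(sentences)}
    perms, tries = [], 0
    while len(perms) < n_perms and tries < max_tries:
        tries += 1
        cand = sentences[:]
        rng.shuffle(cand)
        if tuple(cand) not in seen:
            seen.add(tuple(cand))
            perms.append(cand)
    return perms

def ordering_pairs(documents, feature_fn, n_perms=20):
    """Pairwise training instances for the ordering task: for every source
    document, the original order is treated as more coherent than each of
    its random permutations. `documents` is a list of (doc_id, sentences)
    tuples; `feature_fn` maps a sentence list to a feature vector."""
    for doc_id, sentences in documents:
        preferred = feature_fn(sentences)
        for perm in sentence_permutations(sentences, n_perms,
                                          seed=hash(doc_id) % (2 ** 32)):
            yield doc_id, preferred, feature_fn(perm)
```

With 100 source articles and up to 20 permutations each, this procedure yields on the order of the 2,000 pairwise rankings per corpus reported in Table 4; each pair can then be handed to SVMlight or to an equivalent pairwise ranker for training. The underlying assumption is that the original sentence order in the source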
document must be coherent and so we should prefer models that rank it higher than other permutationsbecause we do not know the relative quality of different permutations our corpus includes only pairwise rankings that comprise the original document and one of its permutationsgiven k original documents each with n randomly generated permutations we obtain k n annotated pairwise rankings for training and testingusing the technique described herein we collected data4 in two different genres newspaper articles and accident reports written by government officialsthe first collection consists of associated press articles from the north american news corpus on the topic of earthquakes the second includes narratives from the national transportation safety boards aviation accident database both corpora have documents of comparable lengththe average number of sentences is 104 and 115 respectivelyfor each set we used 100 source articles with up to 20 randomly generated permutations for training5 a similar method was used to obtain the test datatable 4 shows the size of the training and test corpora used in our experimentswe held out 10 documents from the training data for development purposesfeatures and parameter settingsin order to investigate the contribution of linguistic knowledge on model performance we experimented with a variety of grid representations resulting in different parameterizations of the feature space from which our model is learnedwe focused on three sources of linguistic knowledgesyntax coreference resolution and saliencewhich play a prominent role in entitybased analyses of disbarzilay and lapata modeling local coherence course coherence an additional motivation for our study was to explore the tradeoff between robustness and richness of linguistic annotationsnlp tools are typically trained on humanauthored texts and may deteriorate in performance when applied to automatically generated texts with coherence violationswe thus compared a linguistically rich model against models that use more impoverished representationsmore concretely our full model uses coreference resolution denotes entity transition sequences via grammatical roles and differentiates between salient and nonsalient entitiesour lessexpressive models use only a subset of these linguistic features during the grid construction processwe evaluated the effect of syntactic knowledge by eliminating the identification of grammatical relations and recording solely whether an entity is present or absent in a sentencethis process created a class of four models of the form coreferencesyntaxsaliencethe effect of fully fledged coreference resolution was assessed by creating models where entity classes were constructed simply by clustering nouns on the basis of their identity finally the contribution of salience was measured by comparing the full model which accounts separately for patterns of salient and nonsalient entities against models that do not attempt to discriminate between them we would like to note that in this experiment we apply a coreference resolution tool to the original text and then generate permutations for the pairwise ranking taskan alternative design is to apply coreference resolution to permuted textsbecause existing methods for coreference resolution take into consideration the order of noun phrases in a text the accuracy of these tools on permuted sentence sequences is close to randomtherefore we opt to resolve coreference within the original textalthough this design has an oracle feel to it it is not 
uncommon in practical applicationsfor instance in text generation systems content planners often operate over fully specified semantic representations and can thus take advantage of coreference information during sentence orderingbesides variations in the underlying linguistic representation our model is also specified by two free parameters the frequency threshold used to identify salient entities and the length of the transition sequencethese parameters were tuned separately for each data set on the corresponding heldout development setoptimal saliencebased models were obtained for entities with frequency 2the optimal transition length was 36 in our ordering experiments we used joachimss svmlight package for training and testing with all parameters set to their default valuescomparison with stateoftheart methodswe compared the performance of our algorithm against two stateoftheart models proposed by foltz kintsch and landauer and barzilay and lee these models rely largely on lexical information for assessing document coherence contrary to our models which are in essence unlexicalizedrecall from section 3 that our approach captures local coherence by modeling patterns of entity distribution in discourse without taking note of their lexical instantiationsin the following we briefly describe the lexicalized models we employed in our comparative study and motivate their selectionfoltz kintsch and landauer model measures coherence as a function of semantic relatedness between adjacent sentencesthe underlying intuition here is that coherent texts will contain a high number of semantically related wordssemantic relatedness is computed automatically using latent semantic analysis from raw text without employing syntactic or other annotationsin this framework a words meaning is captured in a multidimensional space by a vector representing its cooccurrence with neighboring wordscooccurrence information is collected in a frequency matrix where each row corresponds to a unique word and each column represents a given linguistic context foltz kintsch and landauers model use singular value decomposition to reduce the dimensionality of the spacethe transformation renders sparse matrices more informative and can be thought of as a means of uncovering latent structure in distributional datathe meaning of a sentence is next represented as a vector by taking the mean of the vectors of its wordsthe similarity between two sentences is determined by measuring the cosine of their means where µ si eusi you and you is the vector for word youan overall text coherence measure can be easily obtained by averaging the cosines for all pairs of adjacent sentences si and si1 this model is a good point of comparison for several reasons it is fully automatic and has relatively few parameters it correlates reliably with human judgments and has been used to analyze discourse structure and it models an aspect of local coherence which is orthogonal to oursthe lsa model is lexicalized coherence amounts to quantifying the degree of semantic similarity between sentencesin contrast our model does not incorporate any notion of similarity coherence is encoded in terms of transition sequences that are documentspecific rather than sentencespecificour implementation of the lsa model followed closely foltz kintsch and landauer we constructed vectorbased representations for individual words from a lemmatized version of the north american news corpus7 using a termdocument matrixwe used svd to reduce the semantic space to 100 dimensions 
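In this model, each sentence is represented by the mean of its word vectors and the coherence of a text is the average cosine similarity of its adjacent sentence pairs. The following sketch approximates that procedure with scikit-learn; it is a stand-in for illustration only, since the actual space was built from a large lemmatized news corpus, and the library choice, dimensionality handling, and function names are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

def build_word_space(documents, dims=100):
    """Term vectors from a term-document count matrix reduced with
    truncated SVD (an approximation of the LSA space described above)."""
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(documents)               # docs x terms
    svd = TruncatedSVD(n_components=min(dims, X.shape[0] - 1),
                       random_state=0)
    term_vectors = svd.fit_transform(X.T)                 # terms x dims
    return {w: term_vectors[i] for w, i in vectorizer.vocabulary_.items()}

def sentence_vector(tokens, space):
    """Mean of the word vectors of a sentence; tokens are assumed
    lower-cased (CountVectorizer's default)."""
    vecs = [space[w] for w in tokens if w in space]
    return np.mean(vecs, axis=0) if vecs else None

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def lsa_coherence(sentences, space):
    """Document coherence: mean cosine similarity of adjacent
    sentence vectors."""
    vecs = [sentence_vector(s, space) for s in sentences]
    sims = [cosine(a, b) for a, b in zip(vecs, vecs[1:])
            if a is not None and b is not None]
    return sum(sims) / len(sims) if sims else 0.0
```

Ranking a document against its permutations under this model amounts to comparing their `lsa_coherence` scores, with ties broken randomly as noted below.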
obtaining thus a space similar to lsawe estimated the coherence of a document using equations and a ranking can be trivially inferred by comparing the 7 our selection of this corpus was motivated by two factors the corpus is large enough to yield a reliable semantic space and it consists of news stories and is therefore similar in style vocabulary and content to most of the corpora employed in our coherence experimentsbarzilay and lapata modeling local coherence coherence score assigned to the original document against each of its permutationsties are resolved randomlyboth lsa and our entitygrid model are localthey model sentencetosentence transitions without being aware of global document structurein contrast the content models developed by barzilay and lee learn to represent more global text properties by capturing topics and the order in which these topics appear in texts from the same domainfor instance a typical earthquake newspaper report contains information about the quakes epicenter how much it measured the time it was felt and whether there were any victims or damageby encoding constraints on the ordering of these topics content models have a pronounced advantage in modeling document structure because they can learn to represent how documents begin and end but also how the discourse shifts from one topic to the nextlike lsa the content models are lexicalized however unlike lsa they are domainspecific and would expectedly yield inferior performance on outofdomain textsbarzilay and lee implemented content models using an hmm wherein states correspond to distinct topics and state transitions represent the probability of changing from one topic to another thereby capturing possible topicpresentation orderings within a domaintopics refer to text spans of varying granularity and lengthbarzilay and lee used sentences in their experiments but clauses or paragraphs would also be possiblebarzilay and lee employed their content models to find a highprobability ordering for a document whose sentences had been randomly shuffledhere we use content models for the simpler coherence ranking taskgiven two text permutations we estimate their likelihood according to their hmm model and select the text with the highest probabilitybecause the two candidates contain the same set of sentences the assumption is that a more probable text corresponds to an ordering that is more typical for the domain of interestin our experiments we built two content models one for the accidents corpus and one for the earthquake corpusalthough these models are trained in an unsupervised fashion a number of parameters related to the model topology affect their performancethese parameters were tuned on the development set and chosen so as to optimize the models performance on the pairwise ranking taskevaluation metricgiven a set of pairwise rankings we measure accuracy as the ratio of correct predictions made by the model over the size of the test setin this setup random prediction results in an accuracy of 50impact of linguistic representationwe first investigate how different types of linguistic knowledge influence our models performancetable 5 shows the accuracy on the ordering task when the model is trained on different grid representationsas can be seen in both domains the full model coreferencesyntaxsalience significantly outperforms a linguistically naive model which simply records the presence of entities in discourse moreover we observe that linguistically impoverished models consistently perform worse than their 
linguistically elaborate counterpartswe assess whether differences in accuracy are statistically significant using a fisher sign testspecifically we compare the full model against each of the less expressive models let us first discuss in more detail how the contribution of different knowledge sources varies across domainson the earthquakes corpus every model that does not use coreference information performs significantly worse than models augmented with coreference this effect is less pronounced on the accidents corpus especially for model coreferencesyntaxsalience whose accuracy drops only by 05 the same models performance decreases by 42 on the earthquakes corpusthis variation can be explained by differences in entity realization between the two domainsin particular the two corpora vary in the amount of coreference they employ texts from the earthquakes corpus contain many examples of referring expressions that our simple identitybased approach cannot possibly resolveconsider for instance the text in table 6here the expressions the same area the remote region and site all refer to menglian countyin comparison the text from the accidents corpus contains fewer referring expressions in fact entities are often repeated verbatim across several sentences and therefore could be straightforwardly resolved with a shallow approach the omission of syntactic information causes a drop in accuracy for models applied to the accidents corpusthis effect is less noticeable on the earthquakes corpus we explain this variation by the substantial difference in the typetoken ratio between the two domains121 for earthquakes versus 50 for accidentsthe low typetoken ratio for accidents means that most sentences in a text have some words in commonfor example the entities pilot airplane and airport appear in multiple sentences in the text from table 6because there is so much repetition in this domain the syntaxfree grids will be relatively similar for both coherent and incoherent texts in fact inspection of the grids from the accidents corpus reveals that they have many sequences of the form x x x x x x x and x x in common two texts from the earthquakes and accidents corpusone entity class for each document is shown to demonstrate the difference in referring expressions used in the two corpora and deteriorating rapidlywitnesses near pine mountain stated that the visibility at the time of the accident was about 14 mile in hazefog whereas such sequences are more common in coherent earthquakes documents and more sparse in their permutationsthis indicates that syntaxfree analysis can sufficiently discriminate coherent from incoherent texts in the earthquakes domain while a more refined representation of entity transition types is required for the accidents domainthe contribution of salience is less pronounced in both domainsthe difference in performance between the full model and its salienceagnostic counterpart is not statistically significantsaliencebased models do deliver some benefits for linguistically impoverished modelsfor instance coreferencesyntaxsalience improves over coreferencesyntaxsalience on the earthquakes corpuswe hypothesize that the small contribution of salience is related to the way it is currently representedaddition of this knowledge source to our grid representation doubles the number of features that serve as input to the learning algorithmin other words salienceaware models need to learn twice as many parameters as saliencefree models while having access to the same amount of training 
dataachieving any improvement in these conditions is challengingcomparison with stateoftheart methodswe next discuss the performance of the hmmbased content models and lsa in comparison to our model first note that the entitygrid model significantly outperforms lsa on both domains in contrast to our model lsa is neither entitybased nor unlexicalized it measures the degree of semantic overlap across successive sentences without handling discourse entities in a special way we attribute our models superior performance despite the lack of lexicalization to three factors the use of more elaborate linguistic knowledge a more holistic representation of coherence and exposure to domain relevant texts our semantic space was created from a large news corpus covering a wide variety of topics and writing stylesthis is necessary for constructing robust vector representations that are not extremely sparsewe thus expect the grid models to be more sensitive to the discourse conventions of the trainingtest datathe accuracy of the hmmbased content modes is comparable to the grid model on the earthquakes corpus but is significantly lower on the accidents texts although the grid model yields similar performance on the two domains content models exhibit high variabilitythese results are not surprisingthe analysis presented in barzilay and lee shows that the earthquakes texts are quite formulaic in their structure following the editorial style of the associated pressin contrast the accidents texts are more challenging for content modelsreports in this set do not undergo centralized editing and therefore exhibit more variability in lexical choice and stylethe lsa model also significantly outperforms the content model on the earthquakes domain being a local model lsa is less sensitive to the way documents are structured and is therefore more likely to deliver consistent performance across domainsthe comparison in table 5 covers a broad spectrum of coherence modelsat one end of the spectrum is lsa a lexicalized model of local discourse coherence which is fairly robust and domain independentin the middle of the spectrum lies our entitygrid model which is unlexicalized but linguistically informed and goes beyond simple sentencetosentence transitions without however fully modeling global discourse structureat the other end of the spectrum are the hmmbased content models which are both global and lexicalizedour results indicate that these models are complementary and that their combination could yield improved resultsfor example we could lexicalize our entity grids or supply the content models with local information either in the style of lsa or as entity transitionshowever we leave this to future worktraining requirementswe now examine in more detail the training requirements for the entitygrid modelsalthough for our ordering experiments we obtained training data cheaply this will not generally be the case and some effort will have to be invested in collecting appropriate data with coherence ratingswe thus address two questions how much training data is required for achieving satisfactory performance how domain sensitive are the entitygrid modelsin other words does their performance degrade gracefully when applied to outofdomain textsfigure 1 shows learning curves for the best performing model on the earthquakes and accidents corporawe observe that the amount of data required depends on the domain at handthe accidents texts are more repetitive and therefore less training data is required to achieve good 
performancethe learning curves for the entitybased model coreferencesyntaxsalience on the earthquakes and accidents corpora learning curve is steeper for the earthquakes documentsirrespective of the domain differences the model reaches good accuracies when half of the data set is used this is encouraging because for some applications large amounts of training data may be not readily availabletable 7 illustrates the accuracy of the best performing model coreference syntaxsalience when trained on the earthquakes corpus and tested on accidents texts and reversely when trained on the accident corpus and tested on earthquakes documentswe also illustrate how this model performs when trained and tested on a data set that contains texts from both domainsfor the latter experiment the training data set was created by randomly sampling 50 earthquakes and 50 accidents documentsas can be seen from table 7 the models performance degrades considerably when tested on outofdomain textson the positive side the models outofdomain performance is better than chance furthermore once the model is trained on data representative of both domains it performs almost as well as a model which has been trained exclusively on indomain texts to put these results into context we also considered the crossdomain performance of the content modelsas table 7 shows the decrease in performance is more dramatic for the content modelsin fact the model trained on the earthquakes domain plummets below the random baseline when applied to the accidents domainthese results are expected for content modelsthe two domains have little overlap in topics and do not share structural constraintsnote that the lsa model is not sensitive to crossdomain issuesthe semantic space is constructed over many different domains without taking into account style or writing conventionsthe crosstraining performance of the entitybased models is somewhat puzzling these models are not lexicalized and one would expect that valid entity transitions are preserved across domainsalthough transition types are not domainspecific their distribution could vary from one domain to anotherto give a simple example some domains will have more entities than others in other words entity transitions capture not only text coherence properties but also reflect stylistic and genrespecific discourse propertiesthis hypothesis is indirectly confirmed by the observed differences in the contribution of various linguistic features across the two domains discussed abovecrossdomain differences in the distribution and occurrence of entities have been also observed in other empirical studies of local coherencefor instance poesio et al show differences in transition types between instructional texts and descriptions of museum textsin section 6 we show that features derived from the entity grid help determine the readability level for a given text thereby verifying more directly the hypothesis that the grid representation captures stylistic discourse factorsthe results presented so far suggest that adapting the proposed model to a new domain would involve some effort in collecting representative texts with associated coherence ratingsthankfully the entity grids are constructed in a fully automatic fashion without requiring manual annotationthis contrasts with traditional implementations of centering theory that operate over linguistically richer representations that are typically handcodedwe further test the ability of our method to assess coherence by comparing model induced rankings against 
rankings elicited by human judgesadmittedly the synthetic data used in the ordering task only partially approximates coherence violations that human readers encounter in machine generated textsa representative example of such texts are automatically generated summaries which often contain sentences taken out of context and thus display problems with respect to local coherence a model that exhibits high agreement with human judges not only accurately captures the coherence properties of the summaries in question but ultimately holds promise for the automatic evaluation of machinegenerated textsexisting automatic evaluation measures such as bleu and rouge are not designed for the coherence assessment task because they focus on content similarity between system output and reference textsbarzilay and lapata modeling local coherence summary coherence rating can be also formulated as a ranking learning taskwe are assuming that the learner has access to several summaries corresponding to the same document or document clustersuch summaries can be produced by several systems that operate over identical inputs or by a single system similarly to the sentence ordering task our training data includes pairs of summaries of the same document di where xij is more coherent than xikan optimal learner should return a ranking r that orders the summaries according to their coherenceas in experiment 1 we adopt an optimization approach and follow the training regime put forward by joachims dataour evaluation was based on materials from the document understanding conference which include multidocument summaries produced by human writers and by automatic summarization systemsin order to learn a ranking we require a set of summaries each of which has been rated in terms of coherenceone stumbling block to performing this kind of evaluation is the coherence ratings themselves which are not routinely provided by duc summary evaluatorsin duc 2003 the quality of automatically generated summaries was assessed along several dimensions ranging from grammatically to content selection fluency and readabilitycoherence was indirectly evaluated by noting the number of sentences indicating an awkward time sequence suggesting a wrong becauseeffect relationship or being semantically incongruent with their neighboring sentences8 unfortunately the observed coherence violations were not finegrained enough to be of use in our rating experimentsin the majority of cases duc evaluators noted either 0 or 1 violations however without judging the coherence of the summary as a whole we cannot know whether a single violation disrupts coherence severely or notwe therefore obtained judgments for automatically generated summaries from human subjects9 we randomly selected 16 input document clusters and five systems that had produced summaries for these sets along with reference summaries composed by humanscoherence ratings were collected during an elicitation study by 177 unpaid volunteers all native speakers of englishthe study was conducted remotely over the internetparticipants first saw a set of instructions that explained the task and defined the notion of coherence using multiple examplesthe summaries were randomized in lists following a latin square design ensuring that no two summaries in a given list were generated from the same document clusterparticipants were asked to use a sevenpointscale to rate how coherent the summaries were without having seen the source textsthe ratings given by our subjects were averaged to provide a rating 
between 1 and 7 for each summary. The reliability of the collected judgments is crucial for our analysis; we therefore performed several tests to validate the quality of the annotations. First, we measured how well humans agree in their coherence assessment. We employed leave-one-out resampling by correlating the data obtained from each participant with the mean coherence ratings obtained from all other participants. The inter-subject agreement was r = .768. Second, we examined the effect of different types of summaries. An ANOVA revealed a reliable effect of summary type (F = 20.38, p < .01), indicating that human summaries are perceived as significantly more coherent than system-generated ones. Finally, we also compared the elicited ratings against the DUC evaluations using correlation analysis. The human judgments were discretized to two classes using entropy-based discretization; we found a moderate correlation between the human ratings and DUC coherence violations. This is expected, given that DUC evaluators were using a different scale and were not explicitly assessing summary coherence.

The summaries used in our rating elicitation study form the basis of a corpus used for the development of our entity-based coherence models. To increase the size of our training and test sets, we augmented the materials used in the elicitation study with additional DUC summaries generated by humans for the same input sets. We assumed that these summaries were maximally coherent; as mentioned previously, our participants tend to rate human-authored summaries higher than machine-generated ones. To ensure that we do not tune a model to a particular system, we used the output summaries of distinct systems for training and testing. Our set of training materials contained 6 × 16 summaries, yielding (6 choose 2) × 16 = 240 pairwise rankings. Because human summaries often have identical scores, we eliminated pairs of such summaries from the training set; consequently, the resulting training corpus consisted of 144 pairwise rankings. In a similar fashion, we obtained 80 pairwise rankings for the test set. Six documents from the training data were used as a development set.

Features, parameter settings, and training requirements. We examine the influence of linguistic knowledge on model performance by comparing models with varying degrees of linguistic complexity. To be able to assess the performance of our models across tasks, we experimented with the same model types introduced in the previous experiment. We also investigate the training requirements for these models on the summary coherence task. Experiment 1 differs from the present study in the way coreference information was obtained: in Experiment 1, a coreference resolution tool was applied to human-written texts, which are grammatical and coherent, whereas here we apply a coreference tool to automatically generated summaries. Because many summaries in our corpus are fraught with coherence violations, the performance of a coreference resolution tool is likely to drop. Unfortunately, resolving coreference in the input documents would require a multi-document coreference tool, which is currently unavailable to us. As in Experiment 1, the frequency threshold and the length of the transition sequence were optimized on the development set. Optimal salience-based models were obtained for entities with frequency ≥ 2; the optimal transition length was 2. All models were trained and tested using SVMlight.
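The conversion of averaged ratings into training pairs, and the pairwise ranking objective itself, can be sketched as follows. The experiments use SVMlight's preference-ranking training; purely for illustration, the code below instead reduces pairwise ranking to binary classification over difference vectors with scikit-learn's LinearSVC, so that the learned weight vector w prefers w · (Φ(better) − Φ(worse)) > 0 on separable training pairs. The data layout, function names, and the tie-skipping rule are assumptions of the sketch.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def ranking_pairs(summary_sets):
    """summary_sets: one inner list per input document cluster, each a list
    of (feature_vector, mean_rating) tuples. Yields (better, worse) feature
    vectors; pairs with identical ratings are skipped, as in the text."""
    for summaries in summary_sets:
        for (phi_a, r_a), (phi_b, r_b) in combinations(summaries, 2):
            if r_a == r_b:
                continue
            if r_a > r_b:
                yield np.asarray(phi_a), np.asarray(phi_b)
            else:
                yield np.asarray(phi_b), np.asarray(phi_a)

def train_linear_ranker(pairs):
    """Pairwise ranking as binary classification on difference vectors."""
    X, y = [], []
    for better, worse in pairs:
        X.append(better - worse); y.append(1)
        X.append(worse - better); y.append(-1)
    clf = LinearSVC(fit_intercept=False)
    clf.fit(np.array(X), np.array(y))
    return clf.coef_.ravel()          # the weight vector w

def rank(candidates, w):
    """Sort candidate feature vectors by their ranking score w · phi."""
    return sorted(candidates, key=lambda phi: float(np.dot(w, phi)),
                  reverse=True)
```

Ranking unseen summaries then amounts to sorting them by w · Φ, exactly as in the ordering experiment.

Comparison with state-of-the-art methods. Our results were compared to the LSA model introduced in Experiment 1. Unfortunately, we could not employ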
barzilay and lees content models for the summary ranking taskbeing domaindependent these models require access to domain representative texts for trainingour summary corpus however contains texts from multiple domains and does not provide an appropriate sample for reliably training content modelsimpact of linguistic representationour results are summarized in table 8similarly to the sentence ordering task we observe that the linguistically impoverished model coreferencesyntaxsalience exhibits decreased accuracy when compared against models that operate over more sophisticated representationshowever the contribution of individual knowledge sources differs in this taskfor instance coreference resolution improved model performance in ordering but it causes a decrease in accuracy in summary evaluation this drop in performance can be attributed to two factors both related to the fact that our summary corpus contains many machinegenerated textsfirst an automatic coreference resolution tool will be expected to be less accurate on our corpus because it was trained on wellformed humanauthored textssecond automatic summarization systems do not use anaphoric expressions as often as humans dotherefore a simple entity clustering method is more suitable for automatic summariesboth salience and syntactic information contribute to the accuracy of the ranking modelthe impact of each of these knowledge sources in isolation is not dramatic dropping either of them yields some decrease in accuracy but the difference is not statistically significanthowever eliminating both salience and syntactic information significantly decreases performance figure 2 shows the learning curve for our best model coreferencesyntax saliencealthough the model performs poorly when trained on a small fraction of the data it stabilizes relatively fast and does not improve after learning curve for the entitybased model coreferencesyntaxsalience applied to the summary ranking task a certain pointthese results suggest that further improvements to summary ranking are unlikely to come from adding more annotated datacomparison with the stateoftheartas in experiment 1 we compared the best performing grid model against lsa the former model significantly outperforms the latter by a wide marginlsa is perhaps at a disadvantage here because it has been exposed only to humanauthored textsmachinegenerated summaries are markedly distinct from human texts even when these are incoherent for example manual inspection of our summary corpus revealed that lowquality summaries often contain repetitive informationin such cases simply knowing about high crosssentential overlap is not sufficient to distinguish a repetitive summary from a wellformed onefurthermore note that in contrast to the documents in experiment 1 the summaries being ranked here differ in lexical choicesome are written by humans whereas others have been produced by systems following different summarization paradigms this means that lsa may consider a summary coherent simply because its vocabulary is familiar analogously a summary with a large number of outofvocabulary lexical items will be given low similarity scores irrespective of whether it is coherent or notthis is not uncommon in summaries with many proper namesthese often do not overlap with the proper names found in the north american news corpus used for training the lsa modellexical differences exert much less influence on the entitygrid model which abstracts away from alternative verbalizations of the same content and captures 
coherence solely on the basis of grid topologyso far our experiments have explored the potential of the proposed discourse representation for coherence modelingwe have presented several classes of grid models barzilay and lapata modeling local coherence achieving good performance in discerning coherent from incoherent textsour experiments also reveal a surprising property of grid models even though these models are not lexicalized they are domain and styledependentin this section we investigate in detail this feature of grid modelshere we move away from the coherence rating task and put the entitygrid representation further to the test by examining whether it can be usefully employed in style classificationspecifically we embed our entity grids into a system that assesses document readabilitythe term describes the ease with which a document can be read and understoodthe quantitative measurement of readability has attracted considerable interest and debate over the last 70 years and has recently benefited from the use of nlp technology a number of readability formulas have been developed with the primary aim of assessing whether texts or books are suitable for students at particular grade levels or agesmany readability methods focus on simple approximations of semantic factors concerning the words used and syntactic factors concerning the length or structure of sentences despite their widespread applicability in education and technical writing readability formulas are often criticized for being too simplistic they systematically ignore many important factors that affect readability such as discourse coherence and cohesion layout and formatting use of illustrations the nature of the topic the characteristics of the readers and so forthschwarm and ostendorf developed a method for assessing readability which addresses some of the shortcomings of previous approachesby recasting readability assessment as a classification task they are able to combine several knowledge sources ranging from traditional reading level measures to statistical language models and syntactic analysisevaluation results show that their system outperforms two commonly used reading level measures in the following we build on their approach and examine whether the entitygrid representation introduced in this article contributes to the readability assessment taskthe incorporation of coherencebased information in the measurement of text readability is to our knowledge novelwe follow schwarm and ostendorf in treating readability assessment as a classification taskthe unit of classification is a single article and the learners task is to predict whether it is easy or difficult to reada variety of machine learning techniques are amenable to this problembecause our goal was to replicate schwarm and ostendorfs system as closely as possible we followed their choice of support vector machines for our classification experimentsour training sample therefore consisted of n documents such that xi e ryi e 11 where xi is a feature vector for the ith document in the training sample and yi its class labelin the basic svm framework we try to separate the positive and negative instances by a hyperplanethis means that there is a weight also spelled valletta seaport and capital of malta on the northeast coast of the islandthe nucleus of the city is built on the promontory of mount sceberras that runs like a tongue into the middle of a bay which it thus divides into two harbours grand harbour to the east and marsamxett harbour to the westbuilt 
after the great siege of malta in 1565 which checked the advance of ottoman power in southern europe it was named after jean parisot de la valette grand master of the order of hospitallers and became the maltese capital in 1570the hospitallers were driven out by the french in 1798 and a maltese revolt against the french garrison led to vallettas seizure by the british in 1800a port city valletta is the capital of the island country of malta in the mediterranean seavalletta is located on the eastern coast of the largest island which is also named maltavalletta lies on a peninsulaa land mass surrounded by water on three sidesit borders marsamxett harbor to the north and grand harbor to the souththe eastern end of the city juts out into the mediterraneanvalletta was planned in the 16th century by the italian architect francesco laparellito make traveling through valletta easier laparelli designed the city in a grid pattern with straight streets that crossed each other and ran the entire width and length of the townvalletta was one of the first towns to be laid out in this way vector w and a threshold b so that all positive training examples are on one side of the hyperplane while all negative ones lie on the other sidethis is equivalent to requiring finding the optimal hyperplane is an optimization problem which can be solved efficiently using the procedure described in vapnik svms have been widely used for many nlp tasks ranging from text classification to syntactic chunking and shallow semantic parsing datafor our experiments we used a corpus collected by barzilay and elhadad from the encyclopedia britannica and britannica elementarythe latter is a new version targeted at childrenthe corpus contains 107 articles from the full version of the encyclopedia and their corresponding simplified articles from britannica elementary although these texts are not explicitly annotated with grade levels they still represent two broad readability categories namely easy and difficult11 examples of these two categories are given in table 9barzilay and lapata modeling local coherence features and parameter settingswe created two system versions the first one used solely schwarm and ostendorf features12 the second one employed a richer feature spacewe added the entitybased representation proposed here to their original feature setwe will briefly describe the readabilityrelated features used in our systems and direct the interested reader to schwarm and ostendorf for a more detailed discussionschwarm and ostendorf use three broad classes of features syntactic semantic and their combinationtheir syntactic features are average sentence length and features extracted from parse trees computed using charniaks parserthe latter include average parse tree height average number of nps average number of vps and average number of subordinate clauses we computed average sentence length by measuring the number of tokens per sentencetheir semantic features include the average number of syllables per word and language model perplexity scoresa unigram bigram and trigram model was estimated for each class and perplexity scores were used to assess their performance on test datafollowing schwarm and ostendorf we used information gain to select words that were good class discriminantsall remaining words were replaced by their parts of speechthe vocabulary thus consisted of 300 words with high information gain and 36 penn treebank partofspeech tagsthe language models were estimated using maximum likelihood estimation and smoothed 
with wittenbell discountingthe language models described in this article were all built using the cmu statistical language modeling toolkit our perplexity scores were six in total finally the fleschkincaid grade level score was included as a feature that captures both syntactic and semantic text propertiesthe fleschkincaid formula estimates readability as a combination of the the average number of syllables per word and the average number of words per sentence we also enriched schwarm and ostendorfs feature space with coherencebased featureseach document was represented as a feature vector using the entity transition notation introduced in section 3we experimented with two models that yielded good performances in our previous experiments coreferencesyntaxsalience and coreferencesyntaxsalience the transition length was 2 and entities were considered salient if they occurred 2 timesas in our previous experiments we compared the entitybased representation against lsathe latter is a measure of the semantic relatedness across pairs of sentenceswe could not apply the hmmbased content models to the readability data setthe encyclopedia lemmas are written by different authors and consequently vary considerably in structure and vocabulary choicerecall that these models are suitable for more restricted domains and texts that are more formulaic in naturethe different systems were trained and tested on the britannica corpus using fivefold crossvalidation13 the language models were created anew for every fold using the documents in the training datawe use joachims svmlight package for training and testing with all parameters set to their default valuesevaluation metricwe measure classification accuracy we report accuracy averaged over foldsa chance baseline yields an accuracy of 50our training and test sets have the same number of documents for the two readability categoriestable 10 summarizes our results on the readability assessment taskwe first compared schwarm and ostendorfs system against a system that incorporates entitybased coherence features as can be seen the systems accuracy significantly increases by 10 when the full feature set is included entitygrid features that do not incorporate coreference information perform numerically better however the difference is not statistically significantthe superior performance of the coreferencesyntaxsalience feature set is not entirely unexpectedinspection of our corpus revealed that easy and difficult texts differ in their distribution of pronouns and coreference chains in generaleasy texts tend to employ less coreference and the use of personal pronouns is relatively sparseto give a concrete example the pronoun they is attested 173 times in the difficult corpus and only 73 in the easy corpusthis observation suggests that coreference information is a good indicator of the level of reading difficulty and explains why its omission from the entitybased feature space yields inferior performancebarzilay and lapata modeling local coherence furthermore note that discourselevel information is absent from schwarm and ostendorfs original modelthe latter employs a large number of lexical and syntactic features which capture sentential differences among documentsour entitybased representation supplements their feature space with information spanning two or more successive sentenceswe thus are able to model stylistic differences in readability that go beyond syntax and lexical choicebesides coreference our feature representation captures important information about 
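For reference, the standard grade-level variant of the Flesch-Kincaid formula combines words-per-sentence and syllables-per-word as 0.39 · (words/sentence) + 11.8 · (syllables/word) − 15.59. The sketch below computes it with a crude vowel-group syllable counter and shows one way such surface cues can be concatenated with entity-transition probabilities; parse-based and perplexity features are omitted for brevity, and all names are illustrative assumptions rather than the system's actual code.

```python
import re

VOWEL_GROUPS = re.compile(r"[aeiouy]+", re.IGNORECASE)

def count_syllables(word):
    """Crude syllable count: number of vowel groups (a common approximation)."""
    return max(1, len(VOWEL_GROUPS.findall(word)))

def flesch_kincaid_grade(sentences):
    """Standard Flesch-Kincaid grade-level formula:
    0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59."""
    words = [w for s in sentences for w in s]
    n_sents, n_words = len(sentences), len(words)
    n_syll = sum(count_syllables(w) for w in words)
    return 0.39 * n_words / n_sents + 11.8 * n_syll / n_words - 15.59

def readability_features(sentences, grid_features):
    """Concatenate shallow readability cues with entity-grid transition
    probabilities so a single classifier can use both."""
    words = [w for s in sentences for w in s]
    base = [
        len(words) / len(sentences),                          # avg sentence length
        sum(count_syllables(w) for w in words) / len(words),  # avg syllables/word
        flesch_kincaid_grade(sentences),
    ]
    return base + list(grid_features)
```

The concatenated vectors can then be passed to an SVM classifier, mirroring the setup in which entity-transition features supplement Schwarm and Ostendorf's surface features.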
the presence and distribution of entities in discoursefor example difficult texts tend to have twice as many entities as easy onesconsequently easy and difficult texts are represented by entity transition sequences with different probabilities interestingly when coherence is quantified using lsa we observe no improvement to the classification taskthe lsa scores capture lexical or semantic text properties similar to those expressed by the flesch kincaid index and the perplexity scores it is therefore not surprising that their inclusion in the feature set does not increase performancewe also evaluated the training requirements for the readability system described hereinfigure 3 shows the learning curve for schwarm and ostendorfs model enhanced with the coreferencesyntaxsalience feature space and on its ownas can be seen both models perform relatively well when trained on small data sets and reach peak accuracy with half of the training datathe inclusion of discoursebased features consistently increases accuracy irrespective of the amount of training data availablefigure 3 thus suggests that better feature engineering is likely to bring further performance improvements on the readability taskour results indicate that the entitybased text representation introduced here captures aspects of text readability and can be successfully incorporated into a practical systemcoherence is by no means the sole predictor of readabilityin fact on its own it performs poorly on this task as demonstrated when using either lsa or the entitybased feature space without schwarm and ostendorfs features rather we claim that coherence is one among many factors contributing to text readability and that our entitygrid representation is wellsuited for text classification tasks such as reading level assessmentin this article we proposed a novel framework for representing and measuring text coherencecentral to this framework is the entitygrid representation of discourse which we argue captures important patterns of sentence transitionswe reconceptualize coherence assessment as a learning task and show that our entitybased representation is wellsuited for rankingbased generation and text classification tasksusing the proposed representation we achieve good performance on text ordering summary coherence evaluation and readability assessmentthe entity grid is a flexible yet computationally tractable representationwe investigated three important parameters for grid construction the computation of coreferring entity classes the inclusion of syntactic knowledge and the influence of salienceall these knowledge sources figure prominently in theories of discourse and are considered important in determining coherenceour results empirically validate the importance of salience and syntactic information for coherencebased modelsthe combination of both knowledge sources yields models with consistently good performance for all our tasksthe benefits of full coreference resolution are less uniformthis is partly due to mismatches between training and testing conditionsthe system we employ was trained on humanauthored newspaper textsthe corpora we used in our sentence ordering and readability assessment experiments are somewhat similar whereas our summary coherence rating experiment employed machine generated textsit is therefore not surprising that coreference resolution delivers performance gains on the first two tasks but not on the latter our results further show that in lieu of an automatic coreference resolution system entity classes 
can be approximated simply by string matchingthe latter is a good indicator of nominal coreference it is often included as a feature in machine learning approaches to coreference resolution and is relatively robust it is important to note that although inspired by entitybased theories of discourse coherence our approach is not a direct implementation of any theory in particularrather we sacrifice linguistic faithfulness for automatic computation and breadth of coveragedespite approximations and unavoidable errors our results indicate that entity grids are a useful representational framework across tasks and text genresin agreement with poesio et al we find that pronominalization is a good indicator of document coherencewe also find that coherent texts are characterized by transitions with particular properties which do not hold for all discoursescontrary to centering theory we remain agnostic to the type of transitions that our models capture we simply record whether an entity is mentioned in the discourse and in what grammatical roleour experiments quantitatively measured the predictive power of various linguistic features for several coherencerelated taskscrucially we find that our models are sensitive to the domain at hand and the type of texts under consideration this is an unavoidable consequence of the grid representation which is entityspecificdifferences in entity distribution indicate not only differences in coherence but also in writing conventions and stylesimilar observations have been made in other work which is closer in spirit to centerings claims barzilay and lapata modeling local coherence an important future direction lies in augmenting our entitybased representation with more finegrained lexicosemantic knowledgeone way to achieve this goal is to cluster entities based on their semantic relatedness thereby creating a grid representation over lexical chains an entirely different approach is to develop fully lexicalized models akin to traditional language modelscache language models seem particularly promising in this contextthe granularity of syntactic information is another topic that warrants further investigationso far we have only considered the contribution of core grammatical relations to the grid constructionexpanding our grammatical categories to modifiers and adjuncts may provide additional information in particular when considering machine generated textswe also plan to investigate whether the proposed discourse representation and modeling approaches generalize across different languagesfor instance the identification and extraction of entities poses additional challenges in grid construction for chinese where word boundaries are not denoted orthographically similar challenges arise in german a language with a large number of inflected forms and productive derivational processes not indicated by orthographyin the discourse literature entitybased theories are primarily applied at the level of local coherence while relational models such as rhetorical structure theory are used to model the global structure of discoursewe plan to investigate how to combine the two for improved prediction on both local and global levels with the ultimate goal of handling longer textsthe authors acknowledge the support of the national science foundation and epsrc we are grateful to claire cardie and vincent ng for providing us the results of their coreference system on our datathanks to eli barzilay eugene webber and three anonymous reviewers for helpful comments and suggestionsany 
opinions findings and conclusions or recommendations expressed herein are those of the authors and do not necessarily reflect the views of the national science foundation or epsrc
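To make the readability experiment described above more concrete, the following is a minimal, self-contained sketch of the classification pipeline: surface readability features (average sentence length, average syllables per word, and the Flesch-Kincaid grade) are concatenated with entity-grid transition probabilities and fed to a linear SVM evaluated by cross-validation. The sketch is illustrative only: scikit-learn's LinearSVC stands in for the SVMlight package used in the article, the syllable counter is a crude heuristic, the 16-dimensional transition vector is a random placeholder for features computed from the entity grid, and all function names and toy documents are ours rather than part of the authors' implementation. Parse-tree statistics and language-model perplexity features, which the full system also uses, are omitted here.

```python
import re
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def count_syllables(word):
    # Crude vowel-group heuristic; a real system would use a pronunciation lexicon.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_features(sentences):
    """sentences: list of token lists for one document."""
    words = [w for sent in sentences for w in sent]
    avg_sent_len = len(words) / len(sentences)
    avg_syll = sum(count_syllables(w) for w in words) / len(words)
    # Flesch-Kincaid grade level combines average sentence length
    # and average syllables per word.
    fk_grade = 0.39 * avg_sent_len + 11.8 * avg_syll - 15.59
    return [avg_sent_len, avg_syll, fk_grade]

def document_vector(sentences, grid_features):
    # grid_features: the document's entity-transition probabilities,
    # computed elsewhere from the entity grid.
    return np.array(readability_features(sentences) + list(grid_features))

# Toy stand-ins for easy/difficult documents; real experiments would use the
# Britannica and Britannica Elementary articles.
easy_doc = [["valletta", "is", "a", "port", "city"],
            ["it", "lies", "on", "a", "peninsula"]]
hard_doc = [["the", "city", "was", "subsequently", "fortified", "against",
             "a", "protracted", "and", "devastating", "siege"]]
rng = np.random.default_rng(0)
docs = [(easy_doc, rng.random(16)), (hard_doc, rng.random(16))] * 3
labels = np.array([0, 1] * 3)  # 0 = easy, 1 = difficult
X = np.vstack([document_vector(s, g) for s, g in docs])
scores = cross_val_score(LinearSVC(), X, labels, cv=2)
print("mean accuracy:", scores.mean())
```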
J08-1001
modeling local coherence an entitybased approachthis article proposes a novel framework for representing and measuring local coherencecentral to this approach is the entitygrid representation of discourse which captures patterns of entity distribution in a textthe algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional syntactic and referential information about discourse entitieswe reconceptualize coherence assessment as a learning task and show that our entitybased representation is wellsuited for rankingbased generation and text classification tasksusing the proposed representation we achieve good performance on text ordering summary coherence evaluation and readability assessmentan entity grid is constructed for each document and is represented as a matrix in which each row represents a sentence and each column represents an entitywe experiment on two datasets news articles on the topic of earthquakes and narratives on the topic of aviation accidents
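As a concrete illustration of the entity-grid representation summarized above, the sketch below (hypothetical code, not the authors' implementation) builds the sentence-by-entity grid from per-sentence grammatical-role annotations and converts it into the distribution over length-2 role transitions that serves as a document's feature vector. The role inventory S (subject), O (object), X (other), and - (absent) and the transition length of 2 follow the description in the article; coreference resolution and the salience split are omitted for brevity.

```python
# Minimal entity-grid sketch: rows are sentences, columns are entities, and
# each cell holds the grammatical role of the entity in that sentence.
from collections import Counter
from itertools import product

ROLES = ["S", "O", "X", "-"]

def build_grid(sentences, entities):
    """sentences: list of {entity_name: role} dicts, one per sentence."""
    return [[sent.get(e, "-") for e in entities] for sent in sentences]

def transition_features(grid, length=2):
    """Distribution of role transitions of the given length over all entity columns."""
    transitions = Counter()
    total = 0
    n_sents = len(grid)
    n_ents = len(grid[0]) if grid else 0
    for col in range(n_ents):
        column = [grid[row][col] for row in range(n_sents)]
        for i in range(n_sents - length + 1):
            transitions[tuple(column[i:i + length])] += 1
            total += 1
    # Fixed-order feature vector over all possible transitions, e.g. (S, S), (S, O), ...
    all_transitions = list(product(ROLES, repeat=length))
    return [transitions[t] / total if total else 0.0 for t in all_transitions]

# Toy document: three sentences mentioning two entities.
sentences = [
    {"valletta": "S"},
    {"valletta": "S", "harbor": "O"},
    {"harbor": "X"},
]
entities = ["valletta", "harbor"]
grid = build_grid(sentences, entities)
print(grid)                       # [['S', '-'], ['S', 'O'], ['-', 'X']]
print(transition_features(grid))  # 16-dimensional vector of length-2 transition probabilities
```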
feature forest models for probabilistic hpsg parsing probabilistic modeling of lexicalized grammars is difficult because these grammars exploit complicated data structures such as typed feature structures this prevents us from applying common methods of probabilistic modeling in which a complete structure is divided into substructures under the assumption of statistical independence among substructures for example partofspeech tagging of a sentence is decomposed into tagging of each word and cfg parsing is split into applications of cfg rules these methods have relied on the structure of the target problem namely lattices or trees and cannot be applied to graph structures including typed feature structures this article proposes the feature forest model as a solution to the problem of probabilistic modeling of complex data structures including typed feature structures the feature forest model provides a method for probabilistic modeling without the independence assumption when probabilistic events are represented with feature forests feature forests are generic data structures that represent ambiguous trees in a packed forest structure feature forest models are maximum entropy models defined over feature forests a dynamic programming algorithm is proposed for maximum entropy estimation without unpacking feature forests thus probabilistic modeling of any data structures is possible when they are represented by feature forests this article also describes methods for representing hpsg syntactic structures and predicateargument structures with feature forests hence we describe a complete strategy for developing probabilistic models for hpsg parsing the effectiveness of the proposed methods is empirically evaluated through parsing experiments on the penn treebank and the promise of applicability to parsing of realworld sentences is discussed probabilistic modeling of lexicalized grammars is difficult because these grammars exploit complicated data structures such as typed feature structuresthis prevents us from applying common methods of probabilistic modeling in which a complete structure is divided into substructures under the assumption of statistical independence among substructuresfor example partofspeech tagging of a sentence is decomposed into tagging of each word and cfg parsing is split into applications of cfg rulesthese methods have relied on the structure of the target problem namely lattices or trees and cannot be applied to graph structures including typed feature structuresthis article proposes the feature forest model as a solution to the problem of probabilistic modeling of complex data structures including typed feature structuresthe feature forest model provides a method for probabilistic modeling without the independence assumption when probabilistic events are represented with feature forestsfeature forests are generic data structures that represent ambiguous trees in a packed forest structurefeature forest models are maximum entropy models defined over feature forestsa dynamic programming algorithm is proposed for maximum entropy estimation without unpacking feature foreststhus probabilistic modeling of any data structures is possible when they are represented by feature foreststhis article also describes methods for representing hpsg syntactic structures and predicateargument structures with feature forestshence we describe a complete strategy for developing probabilistic models for hpsg parsingthe effectiveness of the proposed methods is empirically evaluated through 
parsing experiments on the penn treebank and the promise of applicability to parsing of realworld sentences is discussedfollowing the successful development of widecoverage lexicalized grammars statistical modeling of these grammars is attracting considerable attentionthis is because natural language processing applications usually require disambiguated or ranked parse results and statistical modeling of syntacticsemantic preference is one of the most promising methods for disambiguationthe focus of this article is the problem of probabilistic modeling of widecoverage hpsg parsingalthough previous studies have proposed maximum entropy models of hpsgstyle parse trees the straightforward application of maximum entropy models to widecoverage hpsg parsing is infeasible because estimation of maximum entropy models is computationally expensive especially when targeting widecoverage parsingin general complete structures such as transition sequences in markov models and parse trees have an exponential number of ambiguitiesthis causes an exponential explosion when estimating the parameters of maximum entropy modelswe therefore require solutions to make model estimation tractablethis article first proposes feature forest models which are a general solution to the problem of maximum entropy modeling of tree structures our algorithm avoids exponential explosion by representing probabilistic events with feature forests which are packed representations of tree structureswhen complete structures are represented with feature forests of a tractable size the parameters of maximum entropy models are efficiently estimated without unpacking the feature foreststhis is due to dynamic programming similar to the algorithm for computing insideoutside probabilities in pcfg parsingthe latter half of this article is on the application of feature forest models to disambiguation in widecoverage hpsg parsingwe describe methods for representing hpsg parse trees and predicateargument structures using feature forests together with the parameter estimation algorithm for feature forest models these methods constitute a complete procedure for the probabilistic modeling of widecoverage hpsg parsingthe methods we propose here were applied to an english hpsg parser enju we report on an extensive evaluation of the parser through parsing experiments on the wall street journal portion of the penn treebank the content of this article is an extended version of our earlier work reported in miyao and tsujii and miyao ninomiya and tsujii the major contribution of this article is a strict mathematical definition of the feature forest model and the parameter estimation algorithm which are substantially refined and extended from miyao and tsujii another contribution is that this article thoroughly discusses the relationships between the feature forest model and its application to hpsg parsingwe also provide an extensive empirical evaluation of the resulting hpsg parsing approach using realworld textsection 2 discusses a problem of conventional probabilistic models for lexicalized grammarssection 3 proposes feature forest models for solving this problemsection 4 describes the application of feature forest models to probabilistic hpsg parsingsection 5 presents an empirical evaluation of probabilistic hpsg parsing and section 6 introduces research related to our proposalssection 7 concludesmaximum entropy models are now becoming the de facto standard approach for disambiguation models for lexicalized or feature structure grammars previous 
studies on probabilistic models for hpsg have also adopted loglinear modelsthis is because these grammar formalisms exploit feature structures to represent linguistic constraintssuch constraints are known to introduce inconsistencies in probabilistic models estimated using simple relative frequency as discussed in abney the maximum entropy model is a reasonable choice for credible probabilistic modelsit also allows various overlapping features to be incorporated and we can expect higher accuracy in disambiguationa maximum entropy model gives a probabilistic distribution that maximizes the likelihood of training data under given feature functionsgiven training data $E = \{\langle x, y \rangle\}$ a maximum entropy model gives conditional probability $p(y|x)$ as followsdefinition 1 a maximum entropy model is defined as the solution of the following optimization problem: find the parameters $\lambda$ that maximize $\sum_{\langle x, y \rangle \in E} \tilde{p}(x, y) \log p_\lambda(y|x)$ where $p_\lambda(y|x) = \frac{1}{Z_\lambda(x)} \exp\bigl(\sum_i \lambda_i f_i(x, y)\bigr)$ and $Z_\lambda(x) = \sum_{y' \in Y(x)} \exp\bigl(\sum_i \lambda_i f_i(x, y')\bigr)$ in this definition $\tilde{p}(x, y)$ is the relative frequency of $\langle x, y \rangle$ in the training data fi is a feature function which represents a characteristic of probabilistic events by mapping an event into a real value λi is the model parameter of a corresponding feature function fi and is determined so as to maximize the likelihood of the training data $Y(x)$ is a set of y for given x for example in parsing x is a given sentence and $Y(x)$ is a parse forest for xan advantage of maximum entropy models is that feature functions can represent any characteristics of eventsthat is independence assumptions are unnecessary for the design of feature functionshence this method provides a principled solution for the estimation of consistent probabilistic distributions over feature structure grammarsthe remaining issue is how to estimate parametersseveral numerical algorithms such as generalized iterative scaling improved iterative scaling and the limitedmemory broydenfletchergoldfarbshanno method have been proposed for parameter estimationalthough the algorithm proposed in the present article is applicable to all of the above algorithms we used lbfgs for experimentshowever a computational problem arises in these parameter estimation algorithmsthe size of $Y(x)$ is generally very largethis is because local ambiguities in parse trees potentially cause exponential growth in the number of structures assigned to subsequences of words resulting in billions of structures for whole sentencesfor example when we apply rewriting rule s → np vp and the left np and the right vp respectively have n and m ambiguous subtrees the result of the rule application generates $n \times m$ treesthis is problematic because the complexity of parameter estimation is proportional to the size of $Y(x)$the cost of the parameter estimation algorithms is bound by the computation of model expectation µi given as $\mu_i = \sum_{x} \tilde{p}(x) \sum_{y \in Y(x)} p_\lambda(y|x)\, f_i(x, y)$ as shown in this definition the computation of model expectation requires the summation over $Y(x)$ for every x in the training datathe complexity of the overall estimation algorithm is $O(\bar{Y} \cdot \bar{f} \cdot |E|)$ where $\bar{Y}$ and $\bar{f}$ are the average numbers of y and activated features for an event respectively and $|E|$ is the number of eventswhen $Y(x)$ grows exponentially the parameter estimation becomes intractablein pcfgs the problem of computing probabilities of parse trees is avoided by using a dynamic programming algorithm for computing insideoutside probabilities with the algorithm the computation becomes tractablewe can expect that the same approach would be effective for maximum entropy models as wellthis notion yields a novel algorithm for parameter estimation for maximum entropy models as described in the next sectionour solution to the problem is a dynamic programming algorithm for
computing insideoutside αproductsinsideoutside αproducts roughly correspond to inside outside probabilities in pcfgsin maximum entropy models a probability is defined as a normalized product of αfj jhence similar to the algorithm of computing insideoutside probabilities we can compute exp j λjfj which we define as the αproduct for each node in a tree structureif we can compute αproducts at a tractable cost the model expectation µi is also computed at a tractable costwe first define the notion of a feature forest a packed representation of a set of an exponential number of tree structuresfeature forests correspond to packed charts in cfg parsingbecause feature forests are generalized representations of forest structures the notion is not only applicable to syntactic parsing but also to sequence tagging such as pos tagging and named entity recognition we then define insideoutside αproducts that represent the αproducts of partial structures of a feature forestinside αproducts correspond to inside probabilities in pcfg and represent the summation of αproducts of the daughter subtreesoutside αproducts correspond to outside probabilities in pcfg and represent the summation of αproducts in the upper part of the feature forestboth can be computed incrementally by a dynamic programming algorithm similar to the algorithm for computing insideoutside probabilities in pcfggiven insideoutside o products of all nodes in a feature forest the model expectation µi is easily computed by multiplying them for each nodeto describe the algorithm we first define the notion of a feature forest the generalized representation of features in a packed forest structurefeature forests are used for enumerating possible structures of events that is they correspond to y in equation 1a feature forest φ is a tuple where we denote a feature forest for x as φfor example φ can represent the set of all possible tag sequences of a given sentence x or the set of all parse trees of xa feature forest is an acyclic graph and unpacked structures extracted from a feature forest are treeswe also assume that terminal nodes of feature forests are conjunctive nodesthat is disjunctive nodes must have daughters 0 for all d e da feature forest represents a set of trees of conjunctive nodes in a packed structureconjunctive nodes correspond to entities such as states in markov chains and nodes in cfg treesfeature functions are assigned to conjunctive nodes and express their characteristicsdisjunctive nodes are for enumerating alternative choicesconjunctive disjunctive daughter functions represent immediate relations of conjunctive and disjunctive nodesby selecting a conjunctive node as a child of each disjunctive node we can extract a tree consisting of conjunctive nodes from a feature foresta feature forest nodes as its daughtersthe feature forest in figure 1 represents a set of 2 x 2 x 2 8 unpacked trees shown in figure 2for example by selecting the leftmost conjunctive node at each disjunctive node we extract an unpacked tree an unpacked tree is represented as a set of conjunctive nodesgenerally a feature forest represents an exponential number of trees with a polynomial number of nodesthus complete structures such as tag sequences and parse trees with ambiguities can be represented in a tractable formfeature functions are defined over conjunctive nodes1 definition 3 a feature function for a feature forest is hence together with feature functions a feature forest represents a set of trees of featuresfeature forests may be regarded as a 
packed chart in cfg parsingalthough feature forests have the same structure as pcfg parse forests nodes in feature forests do not necessarily correspond to nodes in pcfg parse forestsin fact in sections 42 and 43 we will demonstrate that syntactic structures and predicateargument structures in hpsg can be represented with tractablesize feature foreststhe actual interpretation of a node in a feature forest may thus be ignored in the following discussionour algorithm is applicable whenever feature forests are of a tractable sizethe descriptive power of feature forests will be discussed again in section 6as mentioned a feature forest is a packed representation of trees of featureswe first define model expectations µi on a set of unpacked trees and then show that they can be computed without unpacking feature forestswe denote an unpacked tree as a set c c of conjunctive nodesour concern is only the set of features associated with each conjunctive node and the shape of the tree structure is irrelevant to the computation of probabilities of unpacked treeshence we do not distinguish an unpacked tree from a set of conjunctive nodesthe collection of unpacked trees represented by a feature forest is defined as a multiset of unpacked trees because we allow multiple occurrences of equivalent unpacked trees in a feature forest2 given multisets of unpacked trees a b we define the union and the product as followsintuitively the first operation is a collection of trees and the second lists all combinations of trees in a and bit is trivial that they satisfy commutative associative and distributive lawswe denote a set of unpacked trees rooted at node n e c you d as ωω is defined recursivelyfor a terminal node c e c obviously ω cfor an internal conjunctive node c e c an unpacked tree is a combination of trees each of which is selected from a disjunctive daughterhence a set of all unpacked trees is represented as a product of trees from disjunctive daughtersa disjunctive node d e d represents alternatives of packed trees and obviously a set of its unpacked trees is represented as a union of the daughter trees that is ω to summarize a set of unpacked trees is defined formally as followsgiven a feature forest φ a set ω of unpacked trees rooted at node n e c you d is defined recursively as followsfeature forests are directed acyclic graphs and as such this definition does not include a loophence ω is properly defineda set of all unpacked trees is then represented by ω henceforth we denote ω as ω or just ω when it is not confusing in contextfigure 3 shows ω of the feature forest in figure 1following definition 4 the first element of each set is the root node c1 and the rest are elements of the product of c2 c3 c4 c5 and c6 c7each set in figure 3 corresponds to a tree in figure 2given this formalization the feature function for an unpacked tree is defined as followsdefinition 5 the feature function fi for an unpacked tree c e ω is defined as because c e ω corresponds to y of the conventional maximum entropy model this function substitutes for fi in the conventional modelonce a feature function for an unpacked tree is given a model expectation is defined as in the traditional modeldefinition 6 the model expectation µi for a set of feature forests φ is defined as it is evident that the naive computation of model expectations requires exponential time complexity because the number of unpacked trees is exponentially related to the number of nodes in the feature forest φwe therefore need an algorithm for computing 
model expectations without unpacking a feature forestfigure 3 unpacked trees represented as sets of conjunctive nodesinsideoutside at node c2 in a feature forestto efficiently compute model expectations we incorporate an approach similar to the dynamic programming algorithm for computing insideoutside probabilities in pcfgswe first define the notion of insideoutside of a feature forestfigure 4 illustrates this concept which is similar to the analogous concept in pcfgs3 inside denotes a set of partial trees derived from node c2outside denotes a set of partial trees that derive node c2that is outside trees are partial trees of complements of inside treeswe denote a set of inside trees at node n as ι and that of outside trees as owe define a set ι of inside trees rooted at node n c d as a set of unpacked trees rooted at n we define a set o of outside trees rooted at node n c d as followsin the definition γ1 and δ1 denote mothers of conjunctive and disjunctive nodes respectivelyformally we can derive that the model expectations of a feature forest are computed as the product of the inside and outside αproductstheorem 1 the model expectation µi of a feature forest φ is computed as the product of inside and outside αproducts as follows where z ϕrx this equation shows a method for efficiently computing model expectations by traversing conjunctive nodes without unpacking the forest if the insideoutside αproducts are giventhe remaining issue is how to efficiently compute insideoutside αproductsfortunately insideoutside αproducts can be incrementally computed by dynamic programming without unpacking feature forestsfigure 5 shows the process of computing the inside αproduct at a conjunctive node from the inside αproducts of its daughter nodesbecause the inside of a conjunctive node is a set of the combinations of all of its descendants the αproduct is computed by multiplying the αproducts of the daughter treesthe following equation is derivedthe inside of a disjunctive node is the collection of the inside trees of its daughter nodeshence the inside αproduct at disjunctive node d d is computed as follows the inside αproduct ϕc at a conjunctive node c is computed by the following equation if ϕd is given for all daughter disjunctive nodes d δthe outside of a disjunctive node is equivalent to the outside of its daughter nodeshence the outside αproduct of a disjunctive node is propagated to its daughter conjunctive nodes the computation of the outside αproduct of a disjunctive node is somewhat complicatedas shown in figure 8 the outside trees of a disjunctive node are all combinations of incremental computation of outside αproducts at conjunctive node c2we finally find the following theorem for the computation of outside o productstheorem 3 the outside o product c at conjunctive node c is computed by the following equation if d is given for all mother disjunctive nodes that is all d such that c ythe outside o product d at disjunctive node d is computed by the following equation if c is given for all mother conjunctive nodes that is all c such that d b and yds for all sibling disjunctive nodes dnote that the order in which nodes are traversed is important for incremental computation although it is not shown in figure 9the computation for the daughter nodes and mother nodes must be completed before computing the inside and outside αproducts respectivelythis constraint is easily solved using any topological sort algorithma topological sort is applied once at the beginningthe result of the sorting does not 
affect the cost and the result of estimationin our implementation we assume that conjunctivedisjunctive nodes are already ordered from the root node in input datathe complexity of this algorithm is o fe where c and d are the average numbers of conjunctive and disjunctive nodes respectivelythis is tractable when c and d are of a reasonable sizeas noted in this section the number of nodes in a feature forest is usually polynomial even when that of the unpacked trees is exponentialthus we can efficiently compute model expectations with polynomial computational complexityfollowing previous studies on probabilistic models for hpsg we apply a maximum entropy model to hpsg parse disambiguationthe probability p of producing parse result t of a given sentence w is defined as where where p0 is a reference distribution and t is a set of parse candidates assigned to w the feature function fi represents the characteristics of t and w and the corresponding model parameter λi is its weightmodel parameters that maximize the loglikelihood of the training data are computed using a numerical optimization method estimation of the model requires a set of pairs where tw is the correct parse for a sentence w whereas tw is provided by a treebank t has to be computed by parsing each w in the treebankprevious studies assumed t could be enumerated however this assumption is impractical because the size of t is exponentially related to the length of w our solution here is to apply the feature forest model of section 3 to the probabilistic modeling of hpsg parsingsection 41 briefly introduces hpsgsection 42 and 43 describe how to represent hpsg parse trees and predicateargument structures by feature foreststogether with the parameter estimation algorithm in section 3 these methods constitute a complete method for probabilistic disambiguationwe also address a method for accelerating the construction of feature forests for all treebank sentences in section 44the design of feature functions will be given in section 45hpsg is a syntactic theory that follows the lexicalist frameworkin hpsg linguistic entities such as words and phrases are denoted by signs which are represented by typed feature structures signs are a formal representation of combinations of phonological forms and syntacticsemantic structures and express which phonological form signifies which syntacticsemantic structurefigure 10 shows the lexical sign for lovesthe geometry of signs follows pollard and sag head represents the partofspeech of the head word mod denotes modifiee constraints and spr subj and comps describe constraints of a specifier a syntactic subject and complements respectivelycont denotes the lexical entry for the transitive verb lovessimplified representation of the lexical entry in figure 10 predicateargument structure of a phrasesentencethe notation of cont in this article is borrowed from that of minimal recursion semantics hook represents a structure accessed by other phrases and rels describes the remaining structure of the semanticsin what follows we represent signs in a reduced form as shown in figure 11 because of the large size of typical hpsg signs which often include information not immediately relevant to the point being discussedwe will only show attributes that are relevant to an explanation expecting that readers can fill in the values of suppressed attributesin our actual implementation of the hpsg grammar lexicalphrasal signs contain additional attributes that are not defined in the standard hpsg theory but are used by a 
disambiguation modelexamples include the surface form of lexical heads and the type of lexical entry assigned to lexical heads which are respectively used for computing the features word and le introduced in section 45by incorporating additional attributes into signs we can straightforwardly compute feature functions for each signthis allows for a simple mapping between a parsing chart and a feature forest as described subsequentlyhowever this might increase the size of parse forests and therefore decrease parsing efficiency because differences between additional attributes interfere with equivalence relations for ambiguity packingwe represent an hpsg parse tree with a set of tuples where ml and r are the signs of the mother left daughter and right daughter respectively4 in chart parsing partial parse candidates are stored in a chart in which phrasal signs are identified and packed into equivalence classes if they are judged to be equivalent and dominate the same word sequencesa set of parse trees is then represented as a set of relations among equivalence classes5 figure 12 shows a chart for parsing he saw a girl with a telescope where the modifiee of with is ambiguous each feature structure expresses an equivalence class and the arrows represent immediatedominance relationsthe phrase saw a girl with a telescope has two trees because the signs of the topmost nodes are equivalent they are packed into an equivalence classthe ambiguity is represented as the two pairs of arrows leaving the node aa set of hpsg parse trees is represented in a chart as a tuple where e is a set of equivalence classes er c e is a set of root nodes and o e 4 2ee is a function to represent immediatedominance relationsour representation of a chart can be interpreted as an instance of a feature forestwe map the tuple which corresponds to into a conjunctive nodefigure 13 shows the hpsg parse trees in figure 12 represented as a feature forestsquare boxes are conjunctive nodes and di disjunctive nodesa solid arrow represents a disjunctive daughter function and a dotted line expresses a conjunctive daughter functionformally a chart is mapped into a feature forest as follows6 5 we assume that cont and dtrs are restricted and we will discuss a method for encoding cont in a feature forest in section 43we also assume that parse trees are packed according to equivalence relations rather than subsumption relations we cannot simply map parse forests packed under subsumption into feature forests because they overgenerate possible unpacked trees6 for ease of explanation the definition of the root node is different from the original definition given in section 3in this section we define r as a set of conjunctive nodes rather than a single node r the definition here is translated into the original definition by introducing a dummy root node r that has no features and only one disjunctive daughter whose daughters are r feature forest representation of hpsg parse trees in figure 12 changing the modelactually we successfully developed a probabili stic model including features on nonlocalpredicateargument dependencies as described subsequentlylocality in each step of composition of structure only a limited depth of the structures are referred tothat is local structures in the deep descendent phrases maybe ignored to construct larger phrasesthis assumption mean apredicateargument daughterspredicateargument s that predicateargument structures can be packed into conjunctive nodes by ignoring local structuresone may claim that restricting 
the domain of feature functions to limits the flexibility of feature designalthough this is true to some extent it does not necessarily mean the impossibility of incorporating features on nonlocal dependencies into the modelthis is because a feature forest model does not assume probabilistic independence of conjunctive nodesthis means that we can unpack a part of the forest without with the method previously described we can represent an hpsg parsing chart with a feature foresthowever equivalence classes in a chart might increase exponentially because predicateargument structures in hpsg signs represent the semantic relations of all words that the phrase dominatesfor example figure 14 shows phrasal signs with predicateargument structures for saw a girl with a telescopein the chart in figure 12 these signs are packed into an equivalence classhowever figure 14 shows that the values of cont that is predicateargument structures have different values and the signs as they are cannot be equivalentas seen in this example predicateargument structures prevent us from packing signs into equivalence classesin this section we apply the feature forest model to predicateargument structures which may include reentrant structures and nonlocal dependenciesit is theoretically difficult to apply the feature forest model to predicateargument structures a feature forest cannot represent graph structures that include reentrant structures in a straightforward mannerhowever if predicateargument structures are constructed as in the manner described subsequently they can be represented by feature forests of a tractable sizefeature forests can represent predicateargument structures if we assume some locality and monotonicity in the composition of predicateargument structuressigns with predicateargument structurescomputational linguistics volume 34 number 1 monotonicity all relations in the daughters predicateargument structures are percolated to the motherthat is none of the predicateargument relations in the daughter phrases disappear in the motherthus predicateargument structures of descendent phrases can be located at lower nodes in a feature forestpredicateargument structures usually satisfy the above conditions even when they include nonlocal dependenciesfor example figure 15 shows hpsg lexical entries for the whextraction of the object of love and for the control construction of try the first condition is satisfied because both lexical entries refer to conthook of argument signs in subj comps and slashnone of the lexical entries directly access argx of the argumentsthe second condition is also satisfied because the values of conthook of all of the argument signs are percolated to argx of the motherin addition the elements in contrels are percolated to the mother by the semantic principlecompositional semantics usually satisfies the above conditions including mrs the composition of mrs refers to hook and no internal structures of daughtersthe semantic principle of mrs also assures that all semantic relations in rels are percolated to the motherwhen these conditions are satisfied semantics may include any constraints such as selectional restrictions although the grammar we used in the experiments does not include semantic restrictions to constrain parse forestsunder these conditions local structures of predicateargument structures are encoded into a conjunctive node when the values of all of its arguments have been instantiatedwe introduce the notion of inactives to denote such local structuresan inactive is a 
subset of predicateargument structures in which all arguments have been instantiatedbecause inactive parts will not change during the rest of the parsing process they can be placed in a conjunctive nodeby placing newly generated inactives into corresponding conjunctive nodes a set of predicateargument structures can be represented in a feature forest in which local ambiguities are packed and nonlocal dependencies are preservedin the example discussed here the verb dispute has two lexical entry candidates dispute1 and dispute2 the latter involving a nonlocal dependency and fact may optionally take a complementizer phrase7 the predicateargument structures for dispute1 and dispute2 are shown in figure 17curly braces express the ambiguities of partially constructed predicateargument structuresthe resulting feature forest is shown in figure 18the boxes denote conjunctive nodes and dx represent disjunctive nodesthe clause i wanted to dispute has two possible predicateargument structures one corresponding to dispute1 and the other corresponding to dispute2 the nodes of the predicateargument structure α are all instantiated that is it contains only inactivesthe corresponding conjunctive node has two inactives for want and dispute1the other structure β has an unfilled object in the argument of dispute2 which will be filled by the nonlocal dependencyhence the corresponding conjunctive node β has only one inactive corresponding to want and the remaining part that corresponds to dispute2 is passed on for further processingwhen we process the phrase the fact that i wanted to dispute the object of dispute2 is filled by fact and the predicateargument structure of dispute2 is then placed into a conjunctive nodeone of the beneficial characteristics of this packed representation is that the representation is isomorphic to the parsing process that is a charthence we can assign features of hpsg parse trees to a conjunctive node together with features of predicateargument structuresin section 5 we will investigate the contribution of features on parse trees and predicateargument structures to the disambiguation of hpsg parsingthe method just described is the essence of our solution for the tractable estimation of maximum entropy models on exponentially many hpsg parse treeshowever the problem of computational cost remainsconstruction of feature forests requires parsing of all of the sentences in a treebankdespite the development of methods to improve hpsg parsing efficiency exhaustive parsing of all sentences is still expensivewe assume that computation of parse trees with low probabilities can be omitted in the estimation stage because t can be approximated by parse trees with high probabilitiesto achieve this we first prepared a preliminary probabilistic model whose estimation did not require the parsing of a treebankthe preliminary model was used to reduce the search space for parsing a training treebankthe preliminary model in this study is a unigram model $p_0(t|\mathbf{w}) = \prod_i p(l_i|w_i)$ where $w_i \in \mathbf{w}$ is a word in the sentence $\mathbf{w}$ and $l_i$ is a lexical entry assigned to $w_i$ this model is estimated by counting the relative frequencies of lexical entries used for $w_i$ in the training datahence the estimation does not require parsing of a treebankactually we use a maximum entropy model to compute this probability as described in section 5the preliminary model is used for filtering lexical entries when we parse a treebankgiven this model we restrict the number of lexical entries used to parse a treebankwith a threshold n for the number of lexical entries and a threshold c for the
probability lexical entries are assigned to a word in descending order of probability until the number of assigned entries exceeds n or the accumulated probability exceeds c if this procedure does not assign a lexical entry necessary to produce a correct parse it is added to the list of lexical entriesit should be noted that oracle lexical entries are given by the hpsg treebankthis assures that the filtering method does not exclude correct parse trees from parse forestsfigure 19 shows an example of filtering the lexical entries assigned to sawwith c 095 four lexical entries are assignedalthough the lexicon includes other lexical entries such as a verbal entry taking a sentential complement they are filtered outalthough this method reduces the time required for parsing a treebank this approximation causes bias in the training data and results in lower accuracythe tradeoff between parsing cost and accuracy will be examined experimentally in section 54we have several ways to integrate p with the estimated model pin the experiments we will empirically compare the following methods in terms of accuracy and estimation timefiltering only the unigram probability p is used only for filtering in trainingproduct the probability is defined as the product of p and the estimated model p reference distribution p is used as a reference distribution of p feature function log p is used as a feature function of p this method has been shown to be a generalization of the reference distribution method feature functions in maximum entropy models are designed to capture the characteristics of in this article we investigate combinations of the atomic features listed filtering of lexical entries for sawsym symbol of the phrasal category word surface form of the head word pos partofspeech of the head word le lexical entry assigned to the head word arg argument label of a predicate in table 1the following combinations are used for representing the characteristics of binaryunary schema applications ruledistcomma fbinary spanl syml wordl posl lel spanr symr wordr posr ler funary where subscripts l and r denote left and right daughtersin addition the following is used for expressing the condition of the root node of the parse treefeature functions to capture predicateargument dependencies are represented as follows fpa arg dist wordp posp lep worda posa lea where subscripts p and a represent predicate and argument respectivelyfigure 20 shows examples froot is for the root node in which the phrase symbol is s and the surface form partofspeech and lexical entry of the lexical head are saw vbd and a transitive verb respectively fbinary is for the binary rule application to saw a girl and with a telescope in which the applied schema is the headmodifier schema the left daughter is vp headed by saw and the right daughter is pp headed by with whose partofspeech is in and whose lexical entry is a vpmodifying prepositionfigure 21 shows example features for predicateargument structuresthe figure shows features assigned to the conjunctive node denoted as α in figure 18because inactive structures in the node have three predicateargument relations three features are activatedthe first one is for the relation of want and i where the label of the relation is arg1 the distance between the head words is 1 the surface string and the pos of example features for predicateargument structures the predicate are want and vbd and those of the argument are i and prpthe second and the third features are for the other two relationswe may include 
features on more than two relations such as the dependencies among want i and dispute although such features are not incorporated currentlyin our implementation some of the atomic features are abstracted for smoothingtables 2 3 and 4 show the full set of templates of combined features used in the experimentseach row represents the template for a feature functiona check indicates the atomic feature is incorporated and a hyphen indicates the feature is ignoredfeature templates for root conditionfeature templates for predicateargument dependenciesthis section presents experimental results on the parsing accuracy attained by the feature forest modelsin all of the following experiments we use the hpsg grammar developed by the method of miyao ninomiya and tsujii section 51 describes how this grammar was developedsection 52 explains other aspects of the experimental settingsin sections 53 to 57 we report results of the experiments on hpsg parsingin the following experiments we use enju 21 which is a widecoverage hpsg grammar extracted from the penn treebank by the method of miyao ninomiya and tsujii in this method we convert the penn treebank into an hpsg treebank and collect hpsg lexical entries from terminal nodes of the hpsg treebankfigure 22 illustrates the process of treebank conversion and lexicon collectionwe first convert and fertilize parse trees of the penn treebankthis step identifies syntactic constructions that require special treatment in hpsg such as raisingcontrol and longdistance dependenciesthese constructions are then annotated with typed feature structures so that they conform to the hpsg analysisnext we apply hpsg schemas and principles and obtain fully specified hpsg parse treesthis step solves feature structure constraints given in the previous step and fills unspecified constraintsfailures of schemaprinciple applications indicate that the annotated constraints do not extracting hpsg lexical entries from the penn treebank conform to the hpsg analysis and require revisionsfinally we obtain lexical entries from the hpsg parse treesthe terminal nodes of hpsg parse trees are collected and they are generalized by removing wordspecific or contextspecific constraintsan advantage of this method is that a widecoverage hpsg lexicon is obtained because lexical entries are extracted from realworld sentencesobtained lexical entries are guaranteed to construct wellformed hpsg parse trees because hpsg schemas and principles are successfully applied during the development of the hpsg treebankanother notable feature is that we can additionally obtain an hpsg treebank which can be used as training data for disambiguation modelsin the following experiments this hpsg treebank is used for the training of maximum entropy modelsthe lexicon used in the following experiments was extracted from sections 0221 of the wall street journal portion of the penn treebankthis lexicon can assign correct lexical entries to 9909 of words in the hpsg treebank converted from penn treebank section 23this number expresses lexical coverage in the strong sense defined by hockenmaier and steedman in this notion of coverage this lexicon has 841 sentential coverage where this means that the lexicon can assign correct lexical entries to all of the words in a sentencealthough the parser might produce parse results for uncovered sentences these parse results cannot be completely correctthe data for the training of the disambiguation models was the hpsg treebank derived from sections 0221 of the wall street journal portion 
of the penn treebank that is the same set used for lexicon extractionfor training of the disambiguation models we eliminated sentences of 40 words or more and sentences for which the parser could not produce the correct parsesthe resulting training set consists of 33604 sentences the treebanks derived from sections 22 and 23 were used as the development and final test sets respectivelyfollowing previous studies on parsing with pcfgbased models accuracy is measured for sentences of less than 40 words and for those with less than 100 wordstable 5 shows the specifications of the test datathe measure for evaluating parsing accuracy is precisionrecall of predicate argument dependencies output by the parsera predicateargument dependency is defined as a tuple where wh is the head word of the predicate wn is the head word of the argument 7t is the type of the predicate and p is an argument label for example he tried running has three dependencies as follows labeled precisionrecall is the ratio of tuples correctly identified by the parser and unlabeled precisionrecall is the ratio of wh and wn correctly identified regardless of π and p fscore is the harmonic mean of lp and lrsentence accuracy is the exact match accuracy of complete predicateargument relations in a sentencethese measures correspond to those used in other studies measuring the accuracy of predicateargument dependencies in ccg parsing and lfg parsing although exact figures cannot be compared directly because the definitions of dependencies are differentall predicateargument dependencies in a sentence are the target of evaluation except quotation marks and periodsthe accuracy is measured by parsing test sentences with goldstandard partofspeech tags from the penn treebank unless otherwise notedthe gaussian prior was used for smoothing and its hyperparameter was tuned for each model to maximize fscore for the development setthe algorithm for parameter estimation was the limitedmemory bfgs method the parser was implemented in c with the lilfes library and various speedup techniques for hpsg parsing were used such as quick check and iterative beam search other efficient parsing techniques including global thresholding hybrid parsing with a chunk parser and large constituent inhibition were not usedthe results obtained using these techniques are given in ninomiya et al a limit on the number of constituents was set for timeout the parser stopped parsing when the number of constituents created during parsing exceeded 50000in such a case the parser output nothing and the recall was computed as zerofeatures occurring more than twice were included in the probabilistic modelsa method of filtering lexical entries was applied to the parsing of training data unless otherwise noted parameters for filtering were n 10 and c 095 and a reference distribution method was appliedthe unigram model p0 for filtering is a maximum entropy model with two feature templates and the model includes 24847 featurestables 6 and 7 show parsing accuracy for the test setin the tables syntactic features denotes a model with syntactic features that is fbinary funary and froot introduced in section 45semantic features represents a model with features on predicate argument structures that is fpa given in table 4all is a model with both syntactic and semantic featuresthe baseline row shows the results for the reference model p0 used for lexical entry filtering in the estimation of the other modelsthis model is considered as a simple application of a traditional pcfgstyle model 
that is p 1 for any rule r in the construction rules of the hpsg grammarthe results demonstrate that feature forest models have significantly higher accuracy than a baseline modelcomparing syntactic features with semantic features we see that the former model attained significantly higher accuracy than the latterthis indicates that syntactic features are more important for overall accuracywe will examine the contributions of each atomic feature of the syntactic features in section 55features on predicateargument relations were generally considered as important for the accurate disambiguation of syntactic structuresfor example ppattachment ambiguity cannot be resolved with only syntactic preferenceshowever the results show that a model with only semantic features performs significantly worse than one with syntactic featureseven when combined with syntactic features semantic features do not improve accuracyobviously semantic preferences are necessary for accurate parsing but the features used in this work were not sufficient to capture semantic preferencesa possible reason is that as reported in gildea bilexical dependencies may be too sparse to capture semantic preferencesfor reference our results are competitive with the best corresponding results reported in ccg parsing although our results cannot be compared directly with other grammar formalisms because each formalism represents predicateargument dependencies differentlyin contrast with the results of ccg and pcfg the recall is clearly lower than precisionthis may have resulted from the hpsg grammar having stricter feature constraints and the parser not being able to produce parse results for around 1 of the sentencesto improve recall we need techniques to deal with these 1 of sentencestable 8 gives the computationspace costs of model estimationestimation time indicates user times required for running the parameter estimation algorithmno of feature occurrences denotes the total number of occurrences of features in the training data and data size gives the sizes of the compressed files of training datawe can conclude that feature forest models are estimated at a tractable computational cost and a reasonable data size even when a model includes semantic features including nonlocal dependenciesthe results reveal that feature forest models essentially solve the problem of the estimation of probabilistic models of sentence structurestable 9 compares the estimation methods introduced in section 44in all of the following experiments we show the accuracy for the test set the numbers in bold type represent a significant difference from the final model according to stratified shuffling tests with the bonferroni correction with pvalue 05 for 32 pairwise comparisonsthe results indicate that dist comma span word and filtering threshold vs estimation cost n c estimation time parsing time data size 5 080 108 5103 341 5 090 150 6242 407 5 095 190 7724 469 5 098 259 9604 549 10 080 130 6003 370 10 090 268 8855 511 10 095 511 15393 727 10 098 1395 36009 1230 15 080 123 6298 372 15 090 259 9543 526 15 095 735 20508 854 15 098 3777 86844 2031 pos features contributed to the final accuracy although the differences were slightin contrast rule sym and le features did not affect accuracyhowever when each was removed together with another feature the accuracy decreased drasticallythis implies that such features carry overlapping informationtable 13 shows parsing accuracy for covered and uncovered sentencesas defined in section 51 covered indicates that the hpsg 
lexicon has all correct lexical entries for a sentencein other words for covered sentences exactly correct parse trees are obtained if the disambiguation model worked perfectlythe result reveals clear differences in accuracy between covered and uncovered sentencesthe fscore for covered sentences is around 25 points higher than the overall fscore whereas the fscore is more than 10 points lower for uncovered sentencesthis result indicates improvement of lexicon quality is an important factor for higher accuracyfigure 23 shows the learning curvea feature set was fixed and the parameter of the gaussian prior was optimized for each modelhigh accuracy is attained even with a small training set and the accuracy seems to be saturatedthis indicates that we cannot further improve the accuracy simply by increasing the size of the training data setthe exploration of new types of features is necessary for higher accuracyit should also be noted that the upper bound of the accuracy is not 100 because the grammar cannot produce completely correct parse results for uncovered sentencesfigure 24 shows the accuracy for each sentence lengthit is apparent from this figure that the accuracy is significantly higher for sentences with less than 10 wordsthis implies that experiments with only short sentences overestimate the performance of parserssentences with at least 10 words are necessary to properly evaluate the performance of parsing realworld textsthe accuracies for the sentences with more than 10 words are not very different although data points for sentences with more than 50 words are not reliabletable 14 shows the accuracies for predicateargument relations when partsofspeech tags are assigned automatically by a maximumentropybased partsofspeech tagger the results indicate a drop of about three points in labeled precisionrecall a reason why we observed larger accuracy drops in labeled precisionrecall is that sentence length vs accuracy predicateargument relations are fragile with respect to partsofspeech errors because predicate types are determined depending on the partsofspeech of predicate wordsalthough our current parsing strategy assumes that partsofspeech are given beforehand for higher accuracy in real application contexts we will need a method for determining partsofspeech and parse trees jointlytable 15 shows a manual classification of the causes of disambiguation errors in 100 sentences randomly chosen from section 00in our evaluation one error source may cause multiple dependency errorsfor example if an incorrect lexical entry is assigned to a verb all of the argument dependencies of the verb are counted as errorsthe numbers in the table include such doublecountingfigure 25 shows examples of disambiguation errorsthe figure shows output from the parsermajor causes are classified into three types attachment ambiguity argument modifier distinction and lexical ambiguityas attachment ambiguities are wellknown error sources ppattachment is the largest source of errors in our evaluationour disambiguation model cannot accurately resolve ppattachment ambiguities because it does not include dependencies among a modifiee and the argument of the prepositionbecause previous studies revealed that such dependencies are effective features for ppattachment resolution we should incorporate them into our modelsome of the attachment ambiguities including adjective and adverb should also be resolved with an extension of featureshowever we cannot identify any effective features for the disambiguation of attachment 
of verbal phrases including relative clauses verb phrases subordinate clauses and toinfinitivesfor example figure 25 shows an example error of the attachment of a relative clausethe correct answer is that the examples of disambiguation errors subject of yielded is acre but this cannot be determined only by the relation among yield grapes and acrethe resolution of these errors requires a novel type of feature functionerrors of argumentmodifier distinction are prominent in deep syntactic analysis because arguments and modifiers are not explicitly distinguished in the evaluation of cfg parsersfigure 25 shows an example of the argumentmodifier distinction of a toinfinitive clausein this case the toinfinitive clause is a complement of temptsthe subcategorization frame of tempts seems responsible for this problemhowever the disambiguation model wrongly assigned a lexical entry for a transitive verb because of the sparseness of the training data the resolution of this sort of ambiguity requires the refinement of a probabilistic model of lexical entrieserrors of verb phrases and subordinate clauses are similar to this exampleerrors of argumentmodifier distinction of noun phrases are mainly caused by temporal nouns and cardinal numbersthe resolution of these errors seems to require the identification of temporal expressions and usage of cardinal numberserrors of lexical ambiguities were mainly caused by idiomsfor example in figure 25 compared with is a compound preposition but the parser recognized it as a verb phrasethis indicates that the grammar or the disambiguation model requires the special treatment of idiomserrors of verb subcategorization frames were mainly caused by difficult constructions such as insertionsfigure 25 shows that the parser could not identify the inserted clause and a lexical entry for a declarative transitive verb was chosenattachment errors of commas are also significantit should be noted that commas were ignored in the evaluation of cfg parserswe did not eliminate punctuation from the evaluation because punctuation sometimes contributes to semantics as in coordination and insertionin this error analysis errors of commas representing coordinationinsertion are classified into coordinationinsertion and comma indicates errors that do not contribute to the computation of semanticserrors of noun phrase identification mean that a noun phrase was split into two phrasesthese errors were mainly caused by the indirect effects of other errorserrors of identifying coordinationinsertion structures sometimes resulted in catastrophic analyseswhile accurate analysis of such constructions is indispensable it is also known to be difficult because disambiguation of coordinationinsertion requires the computation of preferences over global structures such as the similarity of syntacticsemantic structure of coordinatesincorporating features for representing the similarity of global structures is difficult for feature forest modelszeropronoun resolution is also a difficult problemhowever we found that most were indirectly caused by errors of argumentmodifier distinction in toinfinitive clausesa significant portion of the errors discussed above cannot be resolved by the features we investigated in this study and the design of other features will be necessary for improving parsing accuracythe model described in this article was first published in miyao and tsujii and has been applied to probabilistic models for parsing with lexicalized grammarsapplications to ccg parsing and lfg parsing 
demonstrated that feature forest models attained higher accuracy than other modelsthese researchers applied feature forests to representations of the packed parse results of lfg and the dependencyderivation structures of ccgtheir work demonstrated the applicability and effectiveness of feature forest models in parsing with widecoverage lexicalized grammarsfeature forest models were also shown to be effective for widecoverage sentence realization this work demonstrated that feature forest models are generic enough to be applied to natural language processing tasks other than parsingthe work of geman and johnson independently developed a dynamic programming algorithm for maximum entropy modelsthe solution was similar to our approach although their method was designed to traverse lfg parse results represented with disjunctive feature structures as proposed by maxwell and kaplan the difference between the two approaches is that feature forests use a simpler generic data structure to represent packed forest structurestherefore without assuming what feature forests represent our algorithm can be applied to various tasks including theirsanother approach to the probabilistic modeling of complete structures is a method of approximationthe work on whole sentence maximum entropy models proposed an approximation algorithm to estimate parameters of maximum entropy models on whole sentence structureshowever the algorithm suffered from slow convergence and the model was basically a sequence modelit could not produce a solution for complex structures as our model canwe should also mention conditional random fields for solving a similar problem in the context of maximum entropy markov modelstheir solution was an algorithm similar to the computation of forwardbackward probabilities of hidden markov models their algorithm is a special case of our algorithm in which each conjunctive node has only one daughterthis is obvious because feature forests can represent markov chainsin an analogy crfs correspond to hmms whereas feature forest models correspond to pcfgsextensions of crfs such as semimarkov crfs are also regarded as instances of feature forest modelsthis fact implies that our algorithm is applicable to not only parsing but also to other taskscrfs are now widely used for sequencebased tasks such as partsofspeech tagging and named entity recognition and have been shown to achieve the best performance in various tasks these results suggest that the method proposed in the present article will achieve high accuracy when applied to various statistical models with tree structuresdynamic crfs provide us with an interesting inspiration for extending feature forest modelsthe purpose of dynamic crfs is to incorporate feature functions that are not represented locally and the solution is to apply a variational method which is an algorithm of numerical computation to obtain approximate solutionsa similar method may be developed to overcome a bottleneck of feature forest models that is the fact that feature functions are localized to conjunctive nodesthe structure of feature forests is common in natural language processing and computational linguisticsas is easily seen lattices markov chains and cfg parse trees are represented by feature forestsfurthermore because conjunctive nodes do not necessarily represent cfg nodes or rules and terminals of feature forests need not be words feature forests can express any forest structures in which ambiguities are packed in local structuresexamples include the derivation trees 
of ltag and ccg. chiang proved that feature forests could be considered as the derivation forests of linear context-free rewriting systems. lcfrss define a wide variety of grammars, including ltag and ccg, while preserving polynomial-time complexity of parsing. this demonstrates that feature forest models are applicable to probabilistic models far beyond pcfgs. feature forests are also isomorphic to support graphs used in the graphical em algorithm; in their framework, a program in the logic programming language prism is converted into support graphs, and parameters of probabilistic models are automatically learned by an em algorithm. support graphs have been proved to represent various statistical structural models, including hmms, pcfgs, bayesian networks, and many other graphical structures. taken together, these results imply the high applicability of feature forest models to various real tasks. because feature forests have a structure isomorphic to parse forests of pcfg, it might seem that they can represent only immediate dominance relations of cfg rules, as in pcfg, resulting in only a slight, trivial extension of pcfg. as described herein, however, feature forests can represent structures beyond cfg parse trees. furthermore, because feature forests are a generalized representation of ambiguous structures, each node in a feature forest need not correspond to a node in a pcfg parse forest. that is, a node in a feature forest may represent any linguistic entity, including a fragment of a syntactic structure, a semantic relation, or other sentence-level information. the idea of feature forest models could be applied to nonprobabilistic machine learning methods. taskar et al proposed a dynamic programming algorithm for the learning of large-margin classifiers, including support vector machines, and presented its application to disambiguation in cfg parsing. their algorithm resembles feature forest models: an optimization function is computed by a dynamic programming algorithm without unpacking packed forest structures. from the discussion in this article, it is evident that if the main part of an update formula is represented with linear combinations, a method similar to feature forest models should be applicable. before the advent of feature forest models, studies on probabilistic models of hpsg adopted conventional maximum entropy models to select the most probable parse from parse candidates given by hpsg grammars. the difference between these studies and our work is that we used feature forests to avoid the exponential increase in the number of structures that results from unpacked parse results. these studies ignored the problem of exponential explosion; in fact, training sets in these studies were very small and consisted only of short sentences. a possible approach to avoid this problem is to develop a fully restrictive grammar that never causes an exponential explosion, although the development of such a grammar requires considerable effort and it cannot be acquired from treebanks using existing approaches. we think that exponential explosion is inevitable, particularly with the large-scale wide-coverage grammars required to analyze real-world texts. in such cases, these methods of model estimation are intractable. another approach to estimating log-linear models for hpsg was to extract a small informative sample from the original set t. the method was successfully applied to dutch hpsg parsing. a possible problem with this method is in the approximation of exponentially many parse trees by a polynomial-size sample. however, their method has an advantage in that
any features on parse results can be incorporated into a model whereas our method forces feature functions to be defined locally on conjunctive nodeswe will discuss the tradeoff between the approximation solution and the locality of feature functions in section 63nonprobabilistic statistical classifiers have also been applied to disambiguation in hpsg parsing voted perceptrons and support vector machines however the problem of exponential explosion is also inevitable using their methodsas described in section 61 an approach similar to ours may be applied following the study of taskar et al a series of studies on parsing with lfg also proposed a maximum entropy model for probabilistic modeling of lfg parsinghowever similarly to the previous studies on hpsg parsing these groups had no solution to the problem of exponential explosion of unpacked parse resultsas discussed in section 61 geman and johnson proposed an algorithm for maximum entropy estimation for packed representations of lfg parsesrecent studies on ccg have proposed probabilistic models of dependency structures or predicateargument dependencies which are essentially the same as the predicateargument structures described in the present articleclark hockenmaier and steedman attempted the modeling of dependency structures but the model was inconsistent because of the violation of the independence assumptionhockenmaier proposed a consistent generative model of predicateargument structuresthe probability of a nonlocal dependency was conditioned on multiple words to preserve the consistency of the probability model that is probability p in section 43 was directly estimatedthe problem was that such probabilities could not be estimated directly from the data due to data sparseness and a heuristic method had to be employedprobabilities were therefore estimated as the average of individual probabilities conditioned on a single wordanother problem is that the model is no longer consistent when unification constraints such as those in hpsg are introducedour solution is free of these problems and is applicable to various grammars not only hpsg and ccgmost of the stateoftheart studies on parsing with lexicalized grammars have adopted feature forest models their methods of translating parse results into feature forests are basically the same as our method described in section 4 and details differ because different grammar theories represent syntactic structures differentlythey reported higher accuracy in parsing the penn treebank than the previous methods introduced herein and these results attest the effectiveness of feature forest models in practical deep parsinga remaining problem is that no studies could provide empirical comparisons across grammar theoriesthe above studies and our research evaluated parsing accuracy on their own test setsthe construction of theoryindependent standard test sets requires enormous effort because we must establish theoryindependent criteria such as agreed definitions of phrases and headednessalthough this issue is beyond the scope of the present article it is a fundamental obstacle to the transparency of these studies on parsingclark and curran described a method for reducing the cost of parsing a training treebank without sacrificing accuracy in the context of ccg parsingthey first assigned each word a small number of supertags corresponding to lexical entries in our case and parsed supertagged sentencesbecause they did not use the probabilities of supertags in a parsing stage their method corresponds to our 
filtering only methodthe difference from our approach is that they also applied the supertagger in a parsing stagewe suppose that this was crucial for high accuracy in their approach although empirical investigation is necessarythe proposed algorithm is an essential solution to the problem of estimating probabilistic models on exponentially many complete structureshowever the applicability of this algorithm relies on the constraint that features are defined locally in conjunctive nodesas discussed in section 61 this does not necessarily mean that features in our model can represent only the immediatedominance relations of cfg rules because conjunctive nodes may encode any fragments of complete structuresin fact we demonstrated in section 43 that certain assumptions allowed us to encode nonlocal predicate argument dependencies in tractablesize feature forestsin addition although in the experiments we used only features on bilexical dependencies the method described in section 43 allows us to define any features on a predicate and all of its arguments such as a ternary relation among a subject a verb and a complement and a generalized relation among semantic classes of a predicate and its argumentsthis is because a predicate and all of its arguments are included in a conjunctive node and feature functions can represent any relations expressed within a conjunctive nodewhen we define more global features such as cooccurrences of structures at distant places in a sentence conjunctive nodes must be expanded so that they include all structures that are necessary to define these featureshowever this obviously increases the number of conjunctive nodes and consequently the cost of parameter estimation increasesin an extreme case for example if we define features on any cooccurrences of partial parse trees the full unpacking of parse forests would be necessary and parameter estimation would be intractablethis indicates that there is a tradeoff between the locality of features and the cost of estimationthat is larger context features might contribute to higher accuracy while they inflate the size of feature forests and increase the cost of parameter estimationsampling techniques allow us to define any features on complete structures without any constraintshowever they force us to employ approximation methods for tractable computationthe effectiveness of those techniques therefore relies on convergence speed and approximation errors which may vary depending on the characteristics of target problems and featuresit is an open research question whether dynamic programming or sampling can deliver a better balance of estimation efficiency and accuracythe answer will differ in different problemswhen most effective features can be represented locally in tractablesize feature forests dynamic programming methods including ours are suitablehowever when global context features are indispensable for high accuracy sampling methods might be betterwe should also investigate compromise solutions such as dynamic crfs and reranking techniques there is no analytical way of predicting the best solution and it must be investigated experimentally for each target taska dynamic programming algorithm was presented for maximum entropy modeling and shown to provide a solution to the parameter estimation of probabilistic models of complete structures without the independence assumptionwe first defined the notion of a feature forest which is a packed representation of an exponential number of trees of featureswhen training data is 
represented with feature forests model parameters are estimated at a tractable cost without unpacking the foreststhe method provides a more flexible modeling scheme than previous methods of application of maximum entropy models to natural language processingfurthermore it is applicable to complex data structures where an event is difficult to decompose into independent subeventswe also demonstrated that feature forest models are applicable to probabilistic modeling of linguistic structures such as the syntactic structures of hpsg and predicate argument structures including nonlocal dependenciesthe presented approach can be regarded as a general solution to the probabilistic modeling of syntactic analysis with lexicalized grammarstable 16 summarizes the best performance of the hpsg parser described in this articlethe parser demonstrated impressively high coverage and accuracy for realworld textswe therefore conclude that the hpsg parser for english is moving toward a practical level of use in realworld applicationsrecently the applicability of the hpsg parser to practical applications such as information extraction and retrieval has also been demonstrated from our extensive investigation of hpsg parsing we observed that exploration of new types of features is indispensable to further improvement of parsing accuracya possible research direction is to encode larger contexts of parse trees which has been shown to improve accuracy future work includes not only the investigation of these features but also the abstraction of predicateargument dependencies using semantic classesexperimental results also suggest that an improvement in grammar coverage is crucial for higher accuracythis indicates that an improvement in the quality of the grammar is a key factor for the improvement of parsing accuracythe feature forest model provides new insight into the relationship between a linguistic structure and a unit of probabilitytraditionally a unit of probability was implicitly assumed to correspond to a meaningful linguistic structure a tagging of a word or an application of a rewriting ruleone reason for the assumption is to enable dynamic programming algorithms such as the viterbi algorithmthe probability of a complete structure must be decomposed into atomic structures in which ambiguities are limited to a tractable sizeanother reason is to estimate plausible probabilitiesbecause a probability is defined over atomic structures they should also be meaningful so as to be assigned a probabilityin feature forest models however conjunctive nodes are responsible for the former whereas feature functions are responsible for the latteralthough feature functions must be defined locally in conjunctive nodes they are not necessarily equivalentconjunctive nodes may represent any fragments of a complete structure which are not necessarily linguistically meaningfulthey should be designed to pack ambiguities and enable us to define useful featuresmeanwhile feature functions indicate an atomic unit of probability and are designed to capture statistical regularity of the target problemwe expect the separation of a unit of probability from linguistic structures to open up a new framework for flexible probabilistic modelingthe authors wish to thank the anonymous reviewers of computational linguistics for their helpful comments and discussionswe would also like to thank takashi ninomiya and kenji sagae for their precious support
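As a concrete companion to the dynamic programming algorithm summarized in the conclusion above, the following Python fragment is a minimal sketch (not the authors' implementation) of the inside pass over a feature forest: conjunctive nodes carry local features, disjunctive nodes pack alternatives, and the normalization constant is computed without unpacking the forest. The node classes, feature names, and weights are invented for the illustration, and the brute-force enumeration is included only to check that the packed computation gives the same value; a full estimator would additionally need outside scores for expected feature counts.

# Minimal sketch of the inside pass over a feature forest (illustrative only).
import math
from itertools import product

class Conj:
    def __init__(self, feats, daughters=()):
        self.feats = feats          # local feature vector: name -> value
        self.daughters = daughters  # tuple of Disj nodes

class Disj:
    def __init__(self, options):
        self.options = options      # alternative Conj nodes packed together

def local_score(c, w):
    # exp of the dot product of weights and the node-local features
    return math.exp(sum(w.get(k, 0.0) * v for k, v in c.feats.items()))

def inside_conj(c, w):
    s = local_score(c, w)
    for d in c.daughters:
        s *= inside_disj(d, w)
    return s

def inside_disj(d, w):
    # ambiguity is summed out here, which keeps the cost linear in the
    # size of the packed forest rather than the number of unpacked trees
    return sum(inside_conj(c, w) for c in d.options)

def unpacked_trees(d):
    # brute-force enumeration, used only to verify the packed computation
    for c in d.options:
        for subs in product(*(list(unpacked_trees(x)) for x in c.daughters)):
            yield [c] + [n for sub in subs for n in sub]

if __name__ == "__main__":
    lex = Disj([Conj({"lex:saw/transitive": 1.0}),
                Conj({"lex:saw/intransitive": 1.0})])
    root = Disj([Conj({"rule:head-comp": 1.0}, (lex,))])
    w = {"lex:saw/transitive": 0.7, "lex:saw/intransitive": -0.3,
         "rule:head-comp": 0.1}
    z_packed = inside_disj(root, w)
    z_brute = sum(math.prod(local_score(c, w) for c in t)
                  for t in unpacked_trees(root))
    print(z_packed, z_brute)   # identical up to floating point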
J08-1002
feature forest models for probabilistic hpsg parsingprobabilistic modeling of lexicalized grammars is difficult because these grammars exploit complicated data structures such as typed feature structuresthis prevents us from applying common methods of probabilistic modeling in which a complete structure is divided into substructures under the assumption of statistical independence among substructuresfor example partofspeech tagging of a sentence is decomposed into tagging of each word and cfg parsing is split into applications of cfg rulesthese methods have relied on the structure of the target problem namely lattices or trees and cannot be applied to graph structures including typed feature structuresthis article proposes the feature forest model as a solution to the problem of probabilistic modeling of complex data structures including typed feature structuresthe feature forest model provides a method for probabilistic modeling without the independence assumption when probabilistic events are represented with feature forestsfeature forests are generic data structures that represent ambiguous trees in a packed forest structurefeature forest models are maximum entropy models defined over feature forestsa dynamic programming algorithm is proposed for maximum entropy estimation without unpacking feature foreststhus probabilistic modeling of any data structures is possible when they are represented by feature foreststhis article also describes methods for representing hpsg syntactic structures and predicateargument structures with feature forestshence we describe a complete strategy for developing probabilistic models for hpsg parsingthe effectiveness of the proposed methods is empirically evaluated through parsing experiments on the penn treebank and the promise of applicability to parsing of realworld sentences is discussed
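Restating the model just summarized in symbols may help. The following display is our own notation, a sketch consistent with the description above rather than a quotation of the article's equations, with conjunctive nodes c, disjunctive nodes d, and dtr(c) the daughters of c:

\[
p_{\lambda}(t \mid s) \;=\; \frac{1}{Z_s}\,\exp\!\Big(\sum_i \lambda_i f_i(t)\Big),
\qquad
Z_s \;=\; \sum_{t' \in T(s)} \exp\!\Big(\sum_i \lambda_i f_i(t')\Big),
\]

and, with every feature localized to a conjunctive node,

\[
\phi(c) \;=\; \exp\!\Big(\sum_i \lambda_i f_i(c)\Big)\prod_{d \in \mathrm{dtr}(c)} \phi(d),
\qquad
\phi(d) \;=\; \sum_{c \in d} \phi(c),
\qquad
Z_s \;=\; \phi(d_{\mathrm{root}}),
\]

so the partition function is obtained by one bottom-up pass over the packed forest.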
a global joint model for semantic role labeling we present a model for semantic role labeling that effectively captures the linguistic intuition that a semantic argument frame is a joint structure with strong dependencies among the arguments we show how to incorporate these strong dependencies in a statistical joint model with a rich set of features over multiple argument phrases the proposed model substantially outperforms a similar stateoftheart local model that does not include dependencies among different arguments we evaluate the gains from incorporating this joint information on the propbank corpus when using correct syntactic parse trees as input and when using automatically derived parse the gains amount to reduction on all arguments and core arguments for goldstandard parse trees on propbank for automatic parse trees the error reductions are all and core arguments respectively we also present results on the conll 2005 shared task data set additionally we explore considering multiple syntactic analyses to cope with parser noise and uncertainty we present a model for semantic role labeling that effectively captures the linguistic intuition that a semantic argument frame is a joint structure with strong dependencies among the argumentswe show how to incorporate these strong dependencies in a statistical joint model with a rich set of features over multiple argument phrasesthe proposed model substantially outperforms a similar stateoftheart local model that does not include dependencies among different argumentswe evaluate the gains from incorporating this joint information on the propbank corpus when using correct syntactic parse trees as input and when using automatically derived parse treesthe gains amount to 241 error reduction on all arguments and 368 on core arguments for goldstandard parse trees on propbankfor automatic parse trees the error reductions are 83 and 103 on all and core arguments respectivelywe also present results on the conll 2005 shared task data setadditionally we explore considering multiple syntactic analyses to cope with parser noise and uncertaintysince the release of the framenet and propbank corpora there has been a large amount of work on statistical models for semantic role labelingmost of this work relies heavily on local classifiers ones that decide the semantic role of each phrase independently of the roles of other phraseshowever linguistic theory tells us that a core argument frame is a joint structure with strong dependencies between argumentsfor instance in the sentence finalhour tradingtheme accelerated to 1081 million sharestarget yesterdayargmtmp the first argument is the subject noun phrase finalhour trading of the active verb acceleratedif we did not consider the rest of the sentence it would look more like an agent argument but when we realize that there is no other good candidate for a theme argument because to 1081 million shares must be a target and yesterday is most likely argmtmp we can correctly label it themeeven though previous work has modeled some correlations between the labels of parse tree nodes many important phenomena have not been modeledthe key properties needed to model this joint structure are no finite markov horizon assumption for dependencies among node labels features looking at the labels of multiple argument nodes and internal features of these nodes and a statistical model capable of incorporating these longdistance dependencies and generalizing wellwe show how to build a joint model of argument frames 
incorporating novel features into a discriminative loglinear modelthis system achieves an error reduction of 241 on all arguments and 368 on core arguments over a stateoftheart independent classifier for goldstandard parse trees on propbankif we consider the linguistic basis for joint modeling of a verbs arguments there are at least three types of information to be capturedthe most basic is to limit occurrences of each kind of argumentfor instance there is usually at most one argument of a verb that is an arg0 and although some modifier roles such as argmtmp can fairly easily be repeated others such as argmmnr also generally occur at most once1 the remaining two types of information apply mainly to core arguments which in most linguistic theories are modeled as belonging together in an argument frame the information is only marginally useful for adjuncts which are usually treated as independent realizational choices not included in the argument frame of a verbfirstly many verbs take a number of different argument framesprevious work has shown that these are strongly correlated with the word sense of the verb if verbs were disambiguated for sense the semantic roles of phrases would be closer to independent given the sense of the verbhowever because in almost all semantic role labeling work the word sense is unknown and the model conditions only on the lemma there is much joint information between arguments when conditioning only on the verb lemmafor example compare in the first case the noun phrase after passed is an arg1 whereas in the second case it is a argmloc with the choice governed by the sense of the verb passsecondly even with same sense of a verb different patterns of argument realization lead to joint information between argumentsconsider the meal that the ogre cooked the children is still remembereddespite both examples having an identical surface syntax knowing that the arg1 of cook is expressed by the initial noun meal in the second example gives evidence that the children is the arg2 not the arg1 in this caselet us think of a graphical model over a set of m variables one for each node in the parse tree t representing the labels of the nodes and the dependencies between themin order for a model over these variables to capture for example the statistical tendency of some semantic roles to occur at most once there must be a dependency link between any two variablesto estimate the probability that a certain node gets the role agent we need to know if any of the other nodes were labeled with this rolewe propose such a model with a very rich graphical model structure which is globally conditioned on the observation 2 such a model is formally a conditional random field however note that in practice this term has previously been used almost exclusively to describe the restricted case of linear chain conditional markov random fields or at least models that have strong markov properties which allow efficient dynamic programming algorithms instead we consider a densely connected crf structure with no markov properties and use approximate inference by reranking the nbest solutions of a simpler model with stronger independence assumptions such a rich graphical model can represent many dependencies but there are two dangersone is that the computational complexity of training the model and searching for the most likely labeling given the tree can be prohibitive and the other is that if too many dependencies are encoded the model will overfit the training data and will not generalize wellwe 
propose a model which circumvents these two dangers and achieves significant performance gains over a similar local model that does not add any dependency arcs among the random variablesto tackle the efficiency problem we adopt dynamic programming and reranking algorithmsto avoid overfitting we encode only a small set of linguistically motivated dependencies in features over sets of the random variablesour reranking approach like the approach to parse reranking of collins employs a simpler modela local semantic role labeling algorithmas a first pass to generate a set of n likely complete assignments of labels to all parse tree nodesthe joint model is restricted to these n assignments and does not have to search the exponentially large space of all possible joint labelingsthere has been a substantial amount of work on automatic semantic role labeling starting with the statistical model of gildea and jurafsky researchers have worked on defining new useful features and different system architectures and modelshere we review the work most closely related to ours concentrating on methods for incorporating joint information and for increasing robustness to parser errorgildea and jurafsky propose a method to model global dependencies by including a probability distribution over multisets of semantic role labels given a predicatein this way the model can consider the assignment of all nodes in the parse tree and evaluate whether the set of realized semantic roles is likelyif a necessary role is missing or if an unusual set of arguments is assigned by the local model this additional factor can correct some of the mistakesthe distribution over label multisets is estimated using interpolation of a relative frequency and a backoff distributionthe backoff distribution assumes each argument label is present or absent independently of the other labels namely it assumes a bernoulli naive bayes modelthe most likely assignment of labels according to such a joint model is found approximately using rescoring of the top k 10 assignments according to a local model which does not include dependencies among argumentsusing this model improves the performance of the system in fmeasure from 592 to 6285this shows that adding global information improves the performance of a role labeling system considerablyhowever the type of global information in this model is limited to label multisetswe will show that much larger gains are possible from joint modeling adding richer sources of joint information using a more flexible statistical modelthe model of pradhan hacioglu et al is a stateoftheart model based on support vector machines and incorporating a large set of structural and lexical featuresat the heart of the model lies a local classifier which labels each parse tree node with one of the possible argument labels or nonejoint information is integrated into the model in two ways dynamic class context using the labels of the two nodes to the left as features for classifying the current nodethis is similar to the conditional markov models often used in information extraction notice that here the previous two nodes classified are not in general the previous two nodes assigned nonnone labelsif a linear order on all nodes is imposed then the previous two nodes classified most likely bear the label nonelanguage model lattice rescoring rescoring of an nbest lattice with a trigram language model over semantic role label sequencesthe target predicate is also part of the sequencethese ways of incorporating joint information 
resulted in small gains over a baseline system using only the features of gildea and jurafsky the performance gain due to joint information over a system using all features was not reportedthe joint information captured by this model is limited by the ngram markov assumption of the language model over labelsin our work we improve the modeling of joint dependencies by looking at longerdistance context by defining richer features over the sequence of labels and input features and by estimating the model parameters discriminativelya system which can integrate longerdistance dependencies is that of punyakanok et al and punyakanok roth and yih the idea is to build a semantic role labeling system that is based on local classifiers but also uses a global component that ensures that several linguistically motivated global constraints on argument frames are satisfiedthe constraints are categorical and specified by handfor example one global constraint is that the argument phrases cannot overlapthat is if a node is labeled with a nonnone label all of its descendants have to be labeled nonethe proposed framework is integer linear programming which makes it possible to find the most likely assignment of labels to all nodes of the parse tree subject to specified constraintssolving the ilp problem is nphard but it is very fast in practice the authors report substantial gains in performance due to these global consistency constraintsthis method was applied to improve the performance both of a system based on labeling syntactic chunks and one based on labeling parse tree nodesour work differs from that work in that our constraints are not categorical but are rather statistical preferences and that they are learned automatically based on features specified by the knowledge engineeron the other hand we solve the searchestimation problem through reranking and nbest search only approximately not exactlyso far we have mainly discussed systems which label nodes in a parse treemany systems that only use shallow syntactic information have also been presented using full syntactic parse information was not allowed in the conll 2004 shared task on semantic role labeling and description of such systems can be found in most systems which use only shallow syntactic information represent the input sentence as a sequence of tokens which they label with a bio tagging representation limited joint information is used by such systems provided as a fixed size context of tags on previous tokens for example a length five window is used in the chunkbased system in a method that models joint information in a different way was proposed by cohn and blunsom it uses a treestructured crf where the statistical dependency structure is exactly defined by the edges in the syntactic parse treethe only dependencies captured are between the label of a node and the label of each of its childrenhowever the arguments of a predicate can be arbitrarily far from each other in the syntactic parse tree and therefore a treecrf model is limited in its ability to model dependencies among different argumentsfor instance the dependency between the meal and the children for the sentence in example will not be captured because these phrases are not in the same local tree according to penn treebank syntaxthere have been multiple approaches to reducing the sensitivity of semantic role labeling systems to syntactic parser errorpromising approaches have been to consider multiple syntactic analysesthe top k parses from a single or multiple full parsers or a 
shallow parse and a full parse or several types of full syntactic parses such techniques are important for achieving good performance the top four systems in the conll 2005 shared task competition all used multiple syntactic analyses these previous methods develop special components to combine the labeling decisions obtained using different syntactic annotationthe method of punyakanok roth and yih uses ilp to derive a consistent set of arguments each of which could be derived using a different parse treepradhan ward et al use stacking to train a classifier which combines decisions based on different annotations and marquez et al use specialpurpose filtering and inference stages which combine arguments proposed by systems using shallow and full analysesour approach to increasing robustness uses the top k parses from a single parser and is a simple general method to factor in the uncertainty of the parser by applying bayesian inferenceit is most closely related to the method described in finkel manning and ng and can be seen as an approximation of that methodwe describe our system in detail by first introducing simpler local semantic role labeling models in section 4 and later building on them to define joint models in section 5before we start presenting models we describe the data and evaluation measures used in section 3readers can skip the next section and continue on to section 4 if they are not interested in the details of the evaluationfor most of our experiments we used the february 2004 release of propbankwe also report results on the conll 2005 shared task data in section 62for the latter we used the standard conll evaluation measures and we refer readers to the description of that task for details of the evaluation in this section we describe the data and evaluation measures we used for the february 2004 datawe use our own set of measures on the february 2004 data for three reasonsfirstly we wish to present a richer set of measures which can better illustrate the performance of the system on core arguments as against adjuncts and the performance on identifying versus classifying argumentssecondly we technically could not use the conll measure on the february 2004 data because this earlier data was not available in a format which specifies which arguments should have the additional rargx labels used in the conll evaluation3 finally these measures are better for comparison with early papers because most research before 2005 did not distinguish referring argumentswe describe our argumentbased measures in detail here in case researchers are interested in replicating our results for the february 2004 datafor the february 2004 data we used the standard split into training development and test setsthe annotations from sections 0221 formed the training set section 24 the development and section 23 the test setthe set of argument labels considered is the set of core argument labels plus the modifier labels the training set contained 85392 propositions the test set 4615 and the development set 2626we evaluate semantic role labeling models on goldstandard parse trees and parse trees produced by charniaks automatic parser for goldstandard parse trees we preprocess the trees to discard empty constituents and strip functional tagsusing the trace information provided by empty constituents is very useful for improving performance but we have not used this information so that we can compare our results to previous work and since automatic systems that recover it are not widely availablesince 2004 
there has been a precise standard evaluation measure for semantic role labeling formulated by the organizers of the conll shared tasks an evaluation script is also distributed as part of the provided software for the shared task and can be used to evaluate systems on propbank i datafor papers published between 2000 and 2005 there are several details of the evaluation measures for semantic role labeling that make it difficult to compare results obtained by different researchers because researchers use their own implementations of evaluation measures without making all the exact details clear in their papersthe first issue is the existence of arguments consisting of multiple constituentsin this case it is not clear whether partial credit is to be given for guessing only some of the constituents comprising the argument correctlythe second issue is whether the bracketing of constituents should be required to be recovered correctly in other words whether pairs of labelings such as thearg0 manarg0 and the manarg0 are to be considered the same or notif they are considered the same there are multiple labelings of nodes in a parse tree that are equivalentthe third issue is that when using automatic parsers some of the constituents that are fillers of semantic roles are not recovered by the parserin this case it is not clear how various research groups have scored their systems if we vary the choice taken for these three issues we can come up with many different evaluation measures and these details are important because different choices can lead to rather large differences in reported performancehere we describe in detail our evaluation measures for the results on the february 2004 data reported in this articlethe measures are similar to the conll evaluation measure but report a richer set of statistics the exact differences are discussed at the end of this sectionfor both goldstandard and automatic parses we use one evaluation measure which we call argumentbased evaluationto describe the evaluation measure we will use as an example the correct and guessed semantic role labelings shown in figures 2 and 2both are shown as labelings on parse tree nodes with labels of the form argx and cargxthe label cargx is used to represent multiconstituent argumentsa constituent labeled cargx is assumed to be a continuation of the closest constituent to the left labeled argxour semantic role labeling system produces labelings of this form and the gold standard propbank annotations are converted to this form as well4 the evaluation is carried out individually for each predicate and its associated argument frameif a sentence contains several clauses the several argument frames are evaluated separatelyour argumentbased measures do not require exact bracketing and do not give partial credit for labeling correctly only some of several constituents in a multiconstituent argumentthey are illustrated in figure 2for these measures a semantic role labeling of a sentence is viewed as a labeling on sets of wordsthese sets can encompass several noncontiguous spansfigure 2 gives the representation of the correct and guessed labelings shown in figures 2 and 2 in the first and second rows of the table respectivelyto convert a labeling on parse tree nodes to this form we create a labeled set for each possibly multiconstituent argumentall remaining sets of words are implicitly labeled with nonewe can see that in this way exact bracketing is not necessary and also no partial credit is given when only some of several constituents in 
a multi-constituent argument are labeled correctly. we will refer to word sets as spans. to compute the measures, we are comparing a guessed set of labeled spans to a correct set of labeled spans. we briefly define the various measures of comparison used herein using the example guessed and correct labelings shown in figure 2. all spans not listed explicitly are assumed to have label none. the scoring measures are illustrated in figure 2 (argument-based scoring measures for the guessed labeling); the figure shows performance measures, f-measure and whole frame accuracy, across nine different conditions. when the sets of labeled spans are compared directly, we obtain the complete task measures, corresponding to the idcls row and all column in figure 2. we also define several other measures to understand the performance of the system on different types of labels. we measure the performance on identification, classification, and the complete task when considering only the core arguments, all arguments but with a single argm label for the modifier arguments, and all arguments. this defines nine subtasks, which we now describe. for each of them we compute the whole frame accuracy and f-measure as follows. whole frame accuracy: this is the percentage of propositions for which there is an exact match between the proposed and correct labelings. for example, the whole frame accuracy for idcls and all is 0, because the correct and guessed sets of labeled spans shown in figure 2 do not match exactly. in the figures, acc is always an abbreviation for this whole frame accuracy. even though this measure has not been used extensively in previous work, we find it useful to track. most importantly, potential applications of role labeling may require correct labeling of all arguments in a sentence in order to be effective, and partially correct labelings may not be very useful. moreover, a joint model for semantic role labeling optimizes whole frame accuracy more directly than a local model does. f-measure: because there may be confusion about what we mean by f-measure in this multiclass setting, we define it here. f-measure is defined as the harmonic mean of precision and recall, f = 2pr / (p + r), where p = tp / (tp + fp) and r = tp / (tp + fn). this formula uses the number of true positive (tp), false positive (fp), and false negative (fn) spans in a given guessed labeling. true positive is the number of spans whose correct label is one of the core or modifier argument labels and whose guessed label is the same as the correct label. false positive is the number of spans whose guessed label is non-none and whose correct label is different from the guessed label. false negative is the number of spans whose correct label is non-none and whose guessed label is not the same as the correct one. in the figures in this paper, we show f-measure multiplied by 100 so that it is in the same range as whole frame accuracy. core argument measures: these measures score the system on core arguments only, without regard to modifier arguments. they can be obtained by first mapping all non-core argument labels in the guessed and correct labelings to none. coarse modifier argument measures: sometimes it is sufficient to know a given span has a modifier role without knowledge of the specific role label. in addition, deciding exact modifier argument labels was one of the decisions with highest disagreement among annotators. to estimate performance under this setting, we relabel all argm-x arguments to argm in the proposed and correct labelings. such a performance measure was also used by xue and palmer. note
that these measures do not exclude the core arguments but instead consider the core plus a coarse version of the modifier argumentsthus for coarseargm all we count 0 as a true positive span 12 34 and 789 as false positive and 1 2 3 4 and 7 8 9 as false negativeidentification measures these measure how well we do on the arg vs none distinctionfor the purposes of this evaluation all spans labeled with a nonnone label are considered to have the generic label argfor example to compute core id we compare the following sets of labeled spans classification measures these are performance on argument spans which were also guessed to be argument spans in other words these measures ignore the arg vs none confusionsthey ignore all spans which were incorrectly labeled none or incorrectly labeled with an argument label when the correct label was nonethis is different from classification accuracy used in previous work to mean the accuracy of the system in classifying spans when the correct set of argument spans is givento compute cls measures we remove all spans from sguessed and scorrect that do not occur in both sets and compare the resulting setsfor example to compute the all cls measures we need to compare the following sets of labeled spans the rest of the spans were removed from both sets because they were labeled none according to one of the labelings and nonnone according to the otherthe fmeasure is 50 and the whole frame accuracy is 0as we mentioned before we label and evaluate the semantic frame of every predicate in the sentence separatelyit is possible for a sentence to contain several propositionsannotations of predicates occurring in the sentencefor example in the sentence the spacecraft faces a sixyear journey to explore jupiter there are two propositions for the verbs faces and explorethese are the spacecraftarg0 facespred a sixyear journey to explore jupiterarg1the spacecraftarg0 faces a sixyear journey to explorepred jupiterarg1our evaluation measures compare the guessed and correct set of labeled spans for each propositionthe conll evaluation measure is almost the same as our argumentbased measurethe only difference is that the conll measure introduces an additional label type for arguments of the form rargx used for referring extoutanova haghighi and manning a global joint model for srl pressionsthe propbank distribution contains a specification of which multiconstituent arguments are in a coreference chainthe conll evaluation script considers these multiconstituent arguments as several separate arguments having different labels where one argument has an argx label and the others have rargx labelsthe decision of which constituents were to be labeled with referring labels was made using a set of rules expressed with regular expressions5 a script that converts propbank annotations to conll format is available as part of the shared task softwarefor example in the following sentence the conll specification annotates the arguments of began as follows the deregulationarg1 of railroads thatrarg1 beganpred enabled shippers to bargain for transportationin contrast we treat all multiconstituent arguments in the same way and do not distinguish coreferential versus noncoreferential split argumentsaccording to our argumentbased evaluation the annotation of the arguments of the verb began is the deregulationarg1 of railroads thatcarg1 beganpred enabled shippers to bargain for transportationthe difference between our argument based measure and the conll evaluation measure is such that we cannot say 
that the value of one is always higher than the value of the othereither measure could be higher depending on the kinds of errors madefor example if the guessed labeling is the deregulationarg0 of railroads thatrarg1 beganpred enabled shippers to bargain for transportation the conll script would count the argument that as correct and report precision and recall of 5 whereas our argumentbased measure would not count any argument correct and report precision and recall of 0on the other hand if the guessed labeling is the deregulationarg1 of railroads thatcarg1 beganpred enabled shippers to bargain for transportation the conll measure would report a precision and recall of 0 whereas our argumentbased measure would report precision and recall of 1if the guessed labeling is the deregulationarg1 of railroads thatrarg1 beganpred enabled shippers to bargain for transportation both measures would report precision and recall of 1nevertheless overall we expect the two measures to yield very similar resultsa classifier is local if it assigns a probability to the label of an individual parse tree node ni independently of the labels of other nodesin defining our models we use the standard separation of the task of semantic role labeling into identification and classification phasesformally let l denote a mapping of the nodes in a tree t to a label set of semantic roles with respect to a predicate v let id be the mapping which collapses ls nonnone values into arg5 the regular expressions look for phrases containing pronouns with partofspeech tags wdt wrb wp or wp then like the gildea and jurafsky system we decompose the probability of a labeling l into probabilities according to an identification model pid and a classification model pclsthis decomposition does not encode any independence assumptions but is a useful way of thinking about the problemour local models for semantic role labeling use this decompositionwe use the same features for local identification and classification models but use the decomposition for efficiency of trainingthe identification models are trained to classify each node in a parse tree as arg or none and the classification models are trained to label each argument node in the training set with its specific labelin this way the training set for the classification models is smallernote that we do not do any hard pruning at the identification stage in testing and can find the exact labeling of the complete parse tree which is the maximizer of equation we use loglinear models for multiclass classification for the local modelsbecause they produce probability distributions identification and classification models can be chained in a principled way as in equation the baseline features we used for the local identification and classification models are outlined in figure 3these features are a subset of the features used in previous workthe standard features at the top of the figure were defined by gildea and jurafsky and the rest are other useful lexical and structural features identified in more recent work we also incorporated several novel features which we describe nextexample of displaced argumentswe found that a large source of errors for arg0 and arg1 stemmed from cases such as those illustrated in figure 4 where arguments were dislocated by raising or control verbshere the predicate expected does not have a subject in the typical position indicated by the empty npbecause the auxiliary is has raised the subject to its current positionin order to capture this class of examples we 
use a binary feature missing subject indicating whether the predicate is missing its subject and use this feature in conjunction with the path feature so that we learn typical paths to raised subjects conditioned on the absence of the subject in its typical position6 in the particular case of figure 4 there is another instance of an argument being quite far from its predicatethe predicate widen shares the phrase the trade gap with expect as an arg1 argumenthowever as expect is a raising verb widens subject is not in its typical position either and we should expect to find it in the same position as expecteds subjectthis indicates it may be useful to use the path relative to expected to find arguments for widenin general to identify certain arguments of predicates embedded in auxiliary and infinitival vps we expect it to be helpful to take the path from the maximum extended projection of the predicatethe highest vp in the chain of vps dominating the predicatewe introduce a new path feature projected path which takes the path from the maximal extended projection to an argument nodethis feature applies only when the argument is not dominated by the maximal projection these features also handle other cases of discontinuous and nonlocal dependencies such as those arising due to control verbsthe performance gain from these new features was notable especially in identificationthe performance on all arguments for the model using only the features in figure 3 and the model using the additional features as well are shown in figure 5for these results the constraint that argument phrases do not overlap was enforced using the algorithm presented in section 42the most direct way to use trained local identification and classification models in testing is to select a labeling l of the parse tree that maximizes the product of the performance of local classifiers on all arguments using the features in figure 3 only and using the additional local featuresusing gold standard parse trees on section 23 probabilities according to the two models as in equation because these models are local this is equivalent to independently maximizing the product of the probabilities of the two models for the label li of each parse tree node ni as shown below in equation a problem with this approach is that a maximizing labeling of the nodes could possibly violate the constraint that argument nodes should not overlap with each othertherefore to produce a consistent set of arguments with local classifiers we must have a way of enforcing the nonoverlapping constraintwhen labeling parse tree nodes previous work has either used greedy algorithms to find a nonoverlapping assignment or the generalpurpose ilp approach of punyakanok et al for labeling chunks an exact algorithm based on shortest paths was proposed in punyakanok and roth its complexity is quadratic in the length of the sentencehere we describe a faster exact dynamic programming algorithm to find the most likely nonoverlapping labeling of all nodes in the parse tree according to a product of probabilities from local models as in equation for simplicity we describe the dynamic program for the case where only two classes are possible arg and nonethe generalization to more classes is straightforwardintuitively the algorithm is similar to the viterbi algorithm for contextfree grammars because we can describe the nonoverlapping constraint by a grammar that disallows arg nodes having arg descendantssubsequently we will talk about maximizing the sum of the logs of local 
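The maximal extended projection and the projected-path feature described above might be computed roughly as follows. This is a speculative sketch, not the authors' implementation: the tree interface (`.parent`, `.label`, `.descendants()`) and the `path` helper (the usual up/down category path) are assumptions, and the predicate node is assumed to sit directly under its innermost VP.

```python
def maximal_extended_projection(pred_node):
    """Highest VP in the chain of VPs dominating the predicate, as defined above."""
    node = pred_node.parent                    # assumed to be the innermost VP
    while node.parent is not None and node.parent.label == "VP":
        node = node.parent
    return node

def projected_path(pred_node, arg_node, path):
    """Path from the maximal extended projection to the argument node; the feature
    only applies when the argument is not dominated by the maximal projection."""
    mep = maximal_extended_projection(pred_node)
    if arg_node in mep.descendants():
        return None                            # feature does not fire in this case
    return path(mep, arg_node)
```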
probabilities rather than the product of local probabilities which is equivalentthe dynamic program works from the leaves of the tree up and finds a best assignment for each subtree using already computed assignments for its childrensuppose we want the most likely consistent assignment for subtree t with child trees t1 tk each storing the most likely consistent assignment of its nodes as well as the logprobability of the allnone assignment the assignment of none to all nodes in the treethe most likely assignment for t is the one that corresponds to the maximum of performance of local model on all arguments when enforcing the nonoverlapping constraint or notthe logprobability of the allnone assignment for a tree t is the logprobability of assigning the root node of t to none plus the sum of the logprobabilities of the allnone assignments of the child subtrees of t propagating this procedure from the leaves to the root of t we have our most likely nonoverlapping assignmentby slightly modifying this procedure we obtain the most likely assignment according to a product of local identification and classification modelswe use the local models in conjunction with this search procedure to select a mostlikely labeling in testingthe complexity of this algorithm is linear in the number of nodes in the parse tree which is usually much less than the square of the number of words in the sentence the complexity of the punyakanok and roth algorithmfor example for a binarybranching parse tree the number of nodes is approximately 2lthe speedup is due to the fact that when we label parse tree nodes we make use of the bracketing constraints imposed by the parse treethe shortest path algorithm proposed by punyakanok and roth can also be adapted to achieve this lower computational complexityit turns out that enforcing the nonoverlapping constraint does not lead to large gains in performancethe results in figure 5 are from models that use the dynamic program for selecting nonoverlapping argumentsto evaluate the gain from enforcing the constraint figure 6 shows the performance of the same local model using all features when the dynamic program is used versus when a most likely possibly overlapping assignment is chosen in testingthe local model with basic plus additional features is our first pass model used in rerankingthe nonoverlapping constraint is enforced using the dynamic programthis is a stateoftheart modelits fmeasure on all arguments is 884 according to our argumentbased scoring measurethis is very similar to the best reported results using goldstandard parse trees without null constituents and functional tags 894 fmeasure reported for the pradhan et al model7 a more detailed analysis of the results obtained by the local model is given in figure 7 and the two confusion matrices in figures 7 and 7 which display the number of errors of each type that the model madethe first confusion matrix concentrates on core arguments and merges all modifying argument labels into a single argm labelthe second concentrates on confusions among modifying argumentsfrom the confusion matrix in figure 7 we can see that the largest number of errors are confusions of argument labels with nonethe number of confusions between pairs of core arguments is low as is the number of confusions between core and modifier labelsif we ignore the column and row corresponding to none in figure 7 the number of offdiagonal entries is very smallthis corresponds to the high fmeasures performance measures for local model using all local features 
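The bottom-up dynamic program just described can be sketched as follows for the two-class (ARG vs NONE) case; `log_p(node, label)` is an assumed interface to the local model's log-probabilities and `node.children` is an assumed tree interface.

```python
def best_non_overlapping(node, log_p):
    """Return (best_score, best_labels, none_score, none_labels) for the subtree at `node`,
    where none_* is the all-NONE assignment and best_* respects the constraint that an
    ARG node has no ARG descendants."""
    kids = [best_non_overlapping(c, log_p) for c in node.children]

    # All-NONE assignment: NONE at the root plus all-NONE in every child subtree.
    none_score = log_p(node, "NONE") + sum(k[2] for k in kids)
    none_labels = {node: "NONE"}
    for k in kids:
        none_labels.update(k[3])

    # Option 1: label this node ARG, forcing all descendants to NONE.
    arg_score = log_p(node, "ARG") + sum(k[2] for k in kids)
    # Option 2: label this node NONE and keep each child's best assignment.
    keep_score = log_p(node, "NONE") + sum(k[0] for k in kids)

    if arg_score >= keep_score:
        labels = {node: "ARG"}
        for k in kids:
            labels.update(k[3])
        return arg_score, labels, none_score, none_labels
    labels = {node: "NONE"}
    for k in kids:
        labels.update(k[1])
    return keep_score, labels, none_score, none_labels
```

The generalization to the full label set replaces the single ARG option with the best-scoring specific argument label at each node, as the text notes.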
and enforcing the nonoverlapping constraintresults are on section 23 using gold standard parse trees on coarseargm cls and core cls 981 and 980 respectively shown in figure 7the number of confusions of argument labels with none shown in the none column is larger than the number of confusions of none with argument labels shown in the none rowthis shows that the model generally has higher precision than recallwe experimented with the precisionrecall tradeoff but this did not result in an increase in fmeasurefrom the confusion matrix in figure 7 we can see that the number of confusions between modifier argument labels is higher than the number of confusions between core argument labelsthis corresponds to the all cls fmeasure of 957 versus the core cls fmeasure of 980the perlabel fmeasures in the last column show that the performance on some very frequent modifier labels is in the low sixties or seventiesthe confusions between modifier labels and none are quite numerousthus to improve the performance on core arguments we need to improve recall without lowering precisionin particular when the model is uncertain which of several likely core labels to assign we need to find additional sources of evidence to improve its confidenceto improve the performance on modifier arguments we also need to lower the confusions among different modifier argumentswe will see that our joint model improves the overall performance mainly by improving the performance on core arguments through increasing recall and precision by looking at wider sentence contextas discussed in section 3 multiple constituents can be part of the same semantic argument as specified by propbankan automatic system that has to recover such information needs to have a way of indicating when multiple constituents labeled with the same semantic role are a part of the same argumentsome researchers have chosen to make labels of the form cargx distinct argument labels that become additional classes in a multiclass constituent classifierthese cargx are used to indicate continuing arguments as illustrated in the two trees in figure 2we chose to not introduce additional labels of this form because they might unnecessarily fragment the training dataour automatic classifiers label constituents with one of the core or modifier semantic role labels and a simple postprocessing rule is applied to the output of the system to determine which constituents that are labeled the same are to be merged as the same argumentthe postprocessing rule is the following for every constituent that bears a core argument label argx if there is a preceding constituent with the same label relabel the current constituent cargxtherefore according to our algorithm all constituents having the same core argument label are part of the same argument and all constituents having the same modifier labels are separate arguments by themselvesthis rule is fairly accurate for core arguments but is not always correct it fails more often on modifier argumentsan evaluation of this rule using the conll data set and evaluation measure shows that our upper bound in performance because of this rule is approximately 990 fmeasure on all argumentswe proceed to describe our models incorporating dependencies between labels of nodes in the parse treeas we discussed briefly before the dependencies we would like to model are highly nonlocala factorized sequence model that assumes a finite markov horizon such as a chain crf would not be able to encode such dependencieswe define a crf with a much richer 
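The post-processing rule for continuation labels described above amounts to a few lines of code; the label strings and the `CORE_LABELS` set here are illustrative, not taken from the authors' implementation.

```python
CORE_LABELS = {"ARG0", "ARG1", "ARG2", "ARG3", "ARG4", "ARG5"}   # illustrative set

def add_continuation_labels(labeled_constituents):
    """labeled_constituents: (constituent, label) pairs in left-to-right order.
    Relabel a core-labeled constituent C-ARGX if the same label occurred earlier."""
    seen = set()
    out = []
    for constituent, label in labeled_constituents:
        if label in CORE_LABELS and label in seen:
            out.append((constituent, "C-" + label))
        else:
            out.append((constituent, label))
        seen.add(label)
    return out
```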
dependency structuremotivation for rerankingfor argument identification the number of possible assignments for a parse tree with n nodes is 2nthis number can run into the hundreds of billions for a normalsized treefor argument labeling the number of possible assignments is 20m if m is the number of arguments of a verb and 20 is the approximate number of possible labels if considering both core and modifying argumentstraining a model which has such a huge number of classes is infeasible if the model does not factorize due to strong independence assumptionstherefore in order to be able to incorporate longrange dependencies in our models we chose to adopt a reranking approach which selects from likely assignments generated by a model which makes stronger independence assumptionswe utilize the top n assignments of our local semantic role labeling model psrl to generate likely assignmentsas can be seen from figure 8 for relatively small values of n our reranking approach does not present a serious bottleneck to performancewe used a value of n 10 for trainingin figure 8 we can see that if we could pick using an oracle the best assignment out of the top 10 assignments according to the local model we would achieve an fmeasure of 973 on all argumentsincreasing the number of n to 30 results in a very small gain in the upper bound on performance and a large increase in memory requirementswe therefore selected n 10 as a good compromisegeneration of top n most likely joint assignmentswe generate the top n most likely nonoverlapping joint assignments of labels to nodes in a parse tree according to a local model psrl using an exact dynamic programming algorithm which is a direct generalization of the algorithm for finding the top nonoverlapping assignment described in section 42parametric modelswe learn loglinear reranking models for joint semantic role labeling which use feature maps from a parse tree and label sequence to a vector spacethe form of the models is as followslet φ e ibs denote a feature map from a tree t target verb v and joint assignment l of the nodes of the tree to the vector space ibslet l1 l2 ln denote the top n possible joint assignmentswe learn a loglinear model with a parameter vector w with one weight for each of the s dimensions of the feature vectorthe probability of an assignment l according to this reranking model is defined as the score of an assignment l not in the top n is zerowe train the model to maximize the sum of loglikelihoods of the best assignments minus a quadratic regularization termin this framework we can define arbitrary features of labeled trees that capture general properties of predicateargument structurewe will introduce the features of the joint reranking model in the context of the example parse tree shown in figure 9we model dependencies not only between the label of a oracle upper bounds for top n nonoverlapping assignments from local model on core and all arguments using goldstandard parse treesan example tree from propbank with semantic role annotations for the sentence finalhour trading accelerated to 1081 million shares yesterday node and the labels of other nodes but also dependencies between the label of a node and input features of other argument nodesthe features are specified by instantiation of templates and the value of a feature is the number of times a particular pattern occurs in the labeled treefor a tree t predicate v and joint assignment l of labels to the nodes of the tree we define the candidate argument sequence as the sequence of 
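The elided probability equation of the reranking model is presumably the usual log-linear (soft-max) form over the top-N candidate assignments; a minimal sketch, with `w` a weight dictionary and `phi` an assumed sparse feature map, follows.

```python
import math

def rerank_probability(w, phi, tree, verb, assignments, i):
    """P(L_i | t, v) under a log-linear model restricted to the top-N assignments."""
    scores = [sum(w.get(f, 0.0) * v for f, v in phi(tree, verb, L).items())
              for L in assignments]
    m = max(scores)                           # subtract the max for numerical stability
    z = sum(math.exp(s - m) for s in scores)
    return math.exp(scores[i] - m) / z
```

Assignments outside the top N receive probability zero, as stated in the text, simply because they never enter the normalization.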
nonnone labeled nodes n1 l1 vpred nm lm a reasonable candidate argument sequence usually contains very few of the nodes in the treeabout 2 to 7as this is the typical number of arguments for a verbto make it more convenient to express our feature templates we include the predicate node v in the sequencethis sequence of labeled nodes is defined with respect to the lefttoright order of constituents in the parse treebecause nonnone labeled nodes do not overlap there is a strict lefttoright order among these nodesthe candidate argument sequence that corresponds to the correct assignment in figure 9 is then np1arg1 vbd1pred pp1arg4 np3argmtmp features from local modelsall features included in the local models are also included in our joint modelsin particular each template for local features is included as a joint template that concatenates the local template and the node labelfor example for the local feature path we define a joint feature template that extracts path from every node in the candidate argument sequence and concatenates it with the label of the nodeboth a feature with the specific argument label and a feature with the generic backoff arg label are createdthis is similar to adding features from identification and classification modelsin the case of the example candidate argument sequence provided for the node np1 we have the features arg1 arg when comparing a local and a joint model we use the same set of local feature templates in the two modelsif these were the only features that a joint model used we would expect its performance to be roughly the same as the performance of a local modelthis is because the two models will in fact be in the same parametric family but will only differ slightly in the way the parameters are estimatedin particular the likelihood of an assignment according to the joint model with local features will differ from the likelihood of the same assignment according to the local model only in the denominator the joint model sums over a few likely assignments in the denominator whereas the local model sums over all assignments also the joint model does not treat the decomposition into identification and classification models in exactly the same way as the local modelwhole label sequence featuresas observed in previous work including information about the set or sequence of labels assigned to argument nodes should be very helpful for disambiguationfor example including such information will make the model less likely to pick multiple nodes to fill the same role or to come up with a labeling that does not contain an obligatory argumentwe added a whole label sequence feature template that extracts the labels of all argument nodes and preserves information about the position of the predicatetwo templates for whole label sequences were added one having the predicate voice only and another also including the predicate lemmathese templates are instantiated as follows for the example candidate argument sequence voiceactive arg1 pred arg4 argmtmp voiceactive lemmaaccelerate arg1 pred arg4 argmtmp we also add variants of these templates that use a generic arg label instead of specific labels for the argumentsthese feature templates have the effect of counting the number of arguments to the left and right of the predicate which provides useful global information about argument structurea local model is not able to represent the count of arguments since the label of each node is decided independentlythis feature can very directly and succinctly encode preferences for 
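A sketch of how the whole-label-sequence templates above might be instantiated; the string encoding of the features is an assumption made for illustration.

```python
def whole_label_sequence_features(candidate_seq, voice, lemma):
    """candidate_seq: left-to-right (node, label) pairs including the predicate as 'PRED'.
    Returns the voice-only and voice+lemma templates, each with a specific-label and a
    generic back-off (ARG) variant."""
    labels = [lab for _, lab in candidate_seq]
    backoff = ["PRED" if lab == "PRED" else "ARG" for lab in labels]
    feats = []
    for seq in ("_".join(labels), "_".join(backoff)):
        feats.append("voice=%s|%s" % (voice, seq))
        feats.append("voice=%s,lemma=%s|%s" % (voice, lemma, seq))
    return feats
```

For the variant restricted to core arguments (described just below in the text), modifier (ARGM-*) labels would simply be filtered out of `labels` before joining.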
required arguments and expected number of argumentsas previously observed including modifying arguments in sequence features is not helpfulthis corresponds to the standard linguistic understanding that there are no prevalent constraints on the position or presence of adjuncts in an argument frame and was confirmed in our experimentswe redefined the whole label sequence features to exclude modifying argumentsthe whole label sequence features are the first type of features we add to relax the independence assumptions of the local modelbecause these features look at the sequence of labels of all arguments they capture joint informationthere is no limit on the length of the label sequence and thus there is no ngram markov order independence assumption additionally the nodes in the candidate argument sequences are in general not in the same local tree in the syntactic analysis and a treecrf model would not be able to encode these dependenciesjoint syntacticsemantic featuresthis class of features is similar to the whole label sequence features but in addition to labels of argument nodes it includes syntactic features of the nodesthese features can capture the joint mapping from the syntactic realization of the predicates arguments to its semantic framethe idea of these features is to capture knowledge about the label of a constituent given the syntactic realization and labels of all other arguments of the verbthis is helpful in capturing syntactic alternations such as the dative alternationfor example consider the sentence shaw publishingarg0 offeredpred mr smitharg2 a reimbursementarg1 and the alternative realization shaw publishingarg0 offeredpred a reimbursementarg1 to mr smitharg2when classifying the np in object position it is useful to know whether the following argument is a ppif it is the np will more likely be an arg1 and if not it will more likely be an arg2a feature template that captures such information extracts for each candidate argument node its phrase type and labelfor example the instantiations of such templates in including only the predicate voice or also the predicate lemma would be voiceactive nparg0 pred nparg1 pparg2 voiceactivelemmaoffer nparg0 pred nparg1 pparg2 we experimented with extracting several kinds of features from each argument node and found that the phrase type and the head of a directly dominating ppif one existswere most helpfullocal models normally consider only features of the phrase being classified in addition to features of the predicatethey cannot take into account the features of other argument nodes because they are only given the input and the identity of the argument nodes is unknownit is conceivable that a local model could condition on the features of all nodes in the tree but the number of parameters would be extremely largethe joint syntacticsemantic features proposed here encode important dependencies using a very small number of parameters as we will show in section 54we should note that xue and palmer define a similar feature template called syntactic frame which often captures similar informationthe important difference is that their template extracts contextual information from noun phrases surrounding the predicate rather than from the sequence of argument nodesbecause we use a joint model we are able to use information about other argument nodes when labeling a noderepetition featureswe also add features that detect repetitions of the same label in a candidate argument sequence together with the phrase types of the nodes labeled with 
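The joint syntactic–semantic template above, which pairs each argument node's phrase type with its label, might look as follows; `phrase_type` is an assumed helper returning a node's syntactic category, and the string encoding is again illustrative.

```python
def syntactic_semantic_features(candidate_seq, voice, lemma, phrase_type):
    """Concatenate phrase type and label for each argument node in sequence,
    e.g. 'voice=active|NP-ARG0_PRED_NP-ARG1_PP-ARG2'."""
    units = ["PRED" if lab == "PRED" else "%s-%s" % (phrase_type(node), lab)
             for node, lab in candidate_seq]
    seq = "_".join(units)
    return ["voice=%s|%s" % (voice, seq),
            "voice=%s,lemma=%s|%s" % (voice, lemma, seq)]
```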
that labelfor example is a common pattern of this formvariants of this feature template also indicate whether all repeated arguments are sisters in the parse tree or whether all repeated arguments are adjacent in terms of word spansthese features can provide robustness to parser errors making it more likely to assign the same label to adjacent phrases that may have been incorrectly split by the parserin section 54 we report results from the joint model and an ablation study to determine the contribution of each of the types of joint featureshere we describe the application in testing of a joint model for semantic role labeling using a local model psrl and a joint reranking model prsrlthe local model psrl is used to generate n nonoverlapping joint assignments l1 lnone option is to select the best li according to prsrl as in equation ignoring the score from the local modelin our experiments we noticed that for larger values of n the performance of our reranking model prsrl decreasedthis was probably due to the fact that at test time the local classifier produces very poor argument frames near the bottom of the top n for large n because the reranking model is trained on relatively few good argument frames it cannot easily rule out very bad framesit makes sense then to incorporate the local model into our final scoreour final score is given by where α is a tunable parameter determining the amount of influence the local score has on the final score such interpolation with a score from a firstpass model was also used for parse reranking in given this score at test time we choose among the top n local assignments l1 ln according to we compare the performance of joint reranking models and local modelswe used n 10 joint assignments for training reranking models and n 15 for testingthe weight α of the local model was set to 1using different numbers of joint assignments in training and testing is in general not ideal but due to memory requirements we could not experiment with larger values of n for trainingfigure 10 shows the summary performance of the local model repeated from earlier figures a joint model using only local features a joint model using local whole label sequence features and a joint model using all described types of features the evaluation is on goldstandard parse treesin addition to performance measures the figure shows the number of binary features included in the modelthe number of features is a measure of the complexity of the hypothesis space of the parametric modelwe can see that a joint model using only local features outperforms a local model by 5 points of fmeasurethe joint model using local features estimates the feature weights only using the top n consistent assignments thus making the labels of different nodes nonindependent according to the estimation procedure which may be a because of the improved performanceanother factor could be that the model jointlocal is a combination of two models as specified in equation which may lead to gains the label sequence features added in model labelseq result in another 15 points jump in fmeasure on all argumentsan additional 8 gain results from the inclusion of syntacticsemantic and repetition featuresthe error reduction of model alljoint performance of local and joint models on idcls on section 23 using goldstandard parse treesthe number of features of each model is shown in thousands over the local model is 368 in core arguments fmeasure 333 in core arguments whole frame accuracy 241 in all arguments fmeasure and 217 in all 
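The elided final scoring equation is described as an interpolation controlled by α; a minimal sketch, assuming a log-linear combination of the local and reranking scores, is:

```python
def select_assignment(assignments, log_p_local, log_p_rerank, alpha=1.0):
    """Choose among the top-N local assignments; alpha weights the local model
    (the text reports alpha = 1 in the experiments)."""
    return max(assignments,
               key=lambda L: alpha * log_p_local(L) + log_p_rerank(L))
```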
arguments whole frame accuracyall differences in all arguments fmeasure are statistically significant according to a paired wilcoxon signed rank testjointlocal is significantly better than local labelseq is significantly better than jointlocal and alljoint is significantly better than labelseq we performed the wilcoxon signed rank test on perproposition all arguments fmeasure for all modelswe also note that the joint models have fewer features than the local modelthis is due to the fact that the local model has seen many more negative examples and therefore more unique featuresthe joint features are not very numerous compared to the local features in the joint modelsthe alljoint model has around 30 more features than the jointlocal modelthese experiments showed that the label sequence features were very useful especially on core arguments increasing the fmeasure on these arguments by two points when added to the jointlocal modelthis shows that even though the local model is optimized to use a large set of features and achieve stateoftheart performance it is still advantageous to model the joint information in the sequence of labels in a predicates argument frameadditionally the joint syntacticsemantic features improved performance further showing that when predicting the label of an argument it is useful to condition on the features of other arguments in addition to their labelsa more detailed analysis of the results obtained by the joint model alljoint is given in figure 11 and the two confusion matrices in figures 11 and 11 which display the number of errors of each type that the model madethe first confusion matrix concentrates on core arguments and merges all modifying argument labels into a single argm labelthe second confusion matrix concentrates on confusions among modifying argumentsthis figure can be compared to figure 7 which summarizes the results for the local model in the same formthe biggest differences are in the performance on core arguments which can be seen by comparing the confusion matrices in figures 7 and 11the fmeasure on each of the core argument labels has increased by at least three points the fmeasure on arg2 by 57 points and the fmeasure on arg3 by eight pointsthe confusions of core argument labels with none have gone down significantly and also there is a large decrease in the confusions of none with arg1there is generally a slight increase in fmeasure on modifier labels as well but the performance on some of the modifier labels has gone downthis makes sense because our joint features are targeted at capturing the dependencies among core argumentsthere may be useful regularities for modifier arguments as well but capturing them may require different joint feature templatesfigure 12 lists the frequency with which each of the top k assignments from the local model was ranked first by the reranking model alljointfor example for 841 of the propositions the reranking model chose the same assignment that the local model would have chosenthe second best assignment according to the local model was promoted to first 86 of the timethe figure shows statistics for the top ten assignments onlythe rest of the assignments ranked 11 through 15 were chosen as best by the reranking model for a total of 03 of the propositionsthe labeling of the tree in figure 9 is a specific example of the kind of errors fixed by the joint modelsthe local classifier labeled the first argument in the tree as arg0 instead of arg1 probably because an arg0 label is more likely for the subject 
positionwe now evaluate our models when trained and tested using automatic parses produced by charniaks parserthe propbank training set sections 221 is also the training set of the parserthe performance of the parser is therefore better on the training setwhen the constituents of an argument do not have corresponding constituents in an automatically produced parse tree it will be very hard for a model to get the semantic role labeling correcthowever this is not impossible and systems which are more robust to parser error have been proposed our system can also theoretically guess the correct set of words by labeling a set of constituents that cover percentage of test set propositions for which each of the top ten assignments from the local model was selected as best by the joint model alljointpercentage of argument constituents that are not present in the automatic parses of charniaks parserconstituents shows the percentage of missing constituents and propositions shows the percentage of propositions that have missing constituents the argument words but we found that this rarely happens in practicefigure 13 shows the percentage of argument constituents that are missing in the automatic parse trees produced by charniaks parserwe can see that the percentage of missing constituents is quite highwe report local and joint model results in figures 14 and 14 respectivelyas for goldstandard parses we test on all arguments regardless of whether they correspond to constituents that have been recovered by the parser and use the same measures detailed in section 32we also compare the confusion matrices for the local and joint models ignoring the confusions among modifier argument labels in figure 15the error reduction of the joint over the local model is 103 in core arguments fmeasure and 83 in all arguments fmeasuresemantic role labeling is very sensitive to the correctness of the given parse tree as the results showif an argument does not correspond to any constituent in a parse tree or a constituent exists but is not attached or labeled correctly our model will have a very hard time guessing the correct labelingthus if the syntactic parser makes errors these errors influence directly the semantic role labeling systemthe theoretically correct way to propagate the uncertainty of the syntactic parser is to consider multiple possible parse trees weighted by their likelihoodin finkel manning and ng this is approximated by sampling parse treeswe implement this idea by an argmax approximation using the top k parse trees from the parser of charniak we use these alternative parses as follows suppose t1 tk are trees for sentence s with probabilities p given by the parserthen for a fixed predicate v let li denote the best joint labeling of tree ti with score scoresrl according to our final joint modelthen we choose the labeling l which maximizes this method of using multiple parse trees is very simple to implement and factors in the uncertainty of the parser to some extenthowever according to this method we are choosing a single parse and a complete semantic frame derived from that parseother methods are able to derive different arguments of the semantic frame from different syntactic annotations which may make them more robust figure 16 shows summary results for the test set when using the top ten parses and the joint modelthe weighting parameter for the parser probabilities was are 1we did not experiment extensively with different values of r preliminary experiments showed that considering 15 parses was a 
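The elided maximization over the top k parses can be sketched as below; the exact combination of parser probability and SRL score is an assumption (here, β times the log parse probability is added to the SRL score), with `best_labeling` and `srl_score` standing in for the joint model's decoding and scoring.

```python
import math

def label_with_top_k_parses(trees, parse_probs, best_labeling, srl_score, beta=1.0):
    """Pick the labeling from the (parse, labeling) pair with the highest combined score."""
    best_score, best_labels = None, None
    for t, p in zip(trees, parse_probs):
        labeling = best_labeling(t)                       # best joint labeling of tree t
        score = srl_score(labeling, t) + beta * math.log(p)
        if best_score is None or score > best_score:
            best_score, best_labels = score, labeling
    return best_labels
```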
bit better and considering the top 20 was a bit worsethe conll 2005 data is derived from propbank version i which is the first official release in 2005 whereas the results we have been reporting in the previous sections used the prefinal february 2004 datausing the conll 2005 evaluation standard ensures that results obtained by different groups are evaluated in exactly the same wayin performance of the joint model using the top ten parses from charniaks parserresults are on section 23propbank i there have been several changes in the annotation conventions as well as error fixes and addition of new propositionsthere was also a change in the way pp arguments are annotated in the february 2004 data some pp arguments are annotated at the head np child but in propbank i all pp arguments are annotated at the pp nodesin order to achieve maximal performance with respect to these annotations it would probably be best to change the feature definitions to account for the changeshowever we did no adaptation of the featuresthe training set consists of the annotations in sections 2 to 21 the development set is section 24 and one of the test sets is section 23 the other test set is from the brown corpus the conll annotations distinguish referring arguments of the form rargx as discussed in section 3our approach to dealing with referring arguments and deciding when multiple identically labeled constituents are part of the same argument was to label constituents with only the set of argument labels and none and then map some of these labels into referring or continuation labelswe converted an argx into a rargx if and only if the label of the constituent began with whthe rule for deciding when to add continuation labels was the same as for our systems for the february 2004 data described in section 43 a constituent label becomes continuing if and only if it is a core argument label and there is another constituent with the same core argument label to the lefttherefore for the conll 2005 shared task we employ the same semantic role labeling system just using a different postprocessing rule to map to conllstyle labelings of sets of wordswe tested the upper bound in performance due to our conversion scheme in the following way take the goldstandard conll annotations for the development set convert these to basic argument labels of the form argx then convert the resulting labeling to conllstyle labeling using our rules to recover the referring and continuing annotationsthe fmeasure obtained was 990figure 17 shows the performance of the local and joint model on one of the conll test setstest wsj when using goldstandard parse treesperformance on goldstandard parse trees was not measured in the conll 2005 shared task but we report it here to provide a basis for comparison with the results of other researchersnext we present results using charniaks automatic parses on the development and two test setswe present results for the local and joint models using the maxscoring charniak parse treeadditionally we report results for the joint model using the top five charniak parse trees according to the algorithm described in section 61the performance measures reported here are higher than the results of our submission in the conll 2005 shared task because of two changesone was changing the rule that produces continuing arguments to only add continuation labels to core argument labels in the previous version the rule added continuation labels to all repeated labelsanother was fixing a bug in the way the sentences were 
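The conversion to CoNLL-style labels described above (R-ARGX for wh-phrases, C-ARGX for repeated core labels) can be sketched as follows; the order in which the two rules are applied and the `phrase_label` helper are assumptions.

```python
def to_conll_labels(labeled_constituents, phrase_label, core_labels):
    """labeled_constituents: (constituent, ARGX-label) pairs in left-to-right order."""
    seen = set()
    out = []
    for constituent, label in labeled_constituents:
        if phrase_label(constituent).startswith("WH"):     # WHNP, WHADVP, ... -> referring
            new_label = "R-" + label
        elif label in core_labels and label in seen:
            new_label = "C-" + label                       # repeated core label -> continuation
        else:
            new_label = label
        seen.add(label)
        out.append((constituent, new_label))
    return out
```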
passed in as input to charniaks parser leading to incorrect analyses of forward quotes8 we first present results of our local and joint model using the parses provided as part of the conll 2005 data in figure 18we then report results from the same local and joint model and the joint model using the top five charniak parses where the parses have correct representation of the forward quotes in figure 19for these results we used the version of the charniak parser from 4 may 2005the results were very similar to the results we obtained with the version from 18 march 2005we did not experiment with the new reranking model of charniak and johnson even though it improves upon charniak significantlyfor comparison the system we submitted to conll 2005 had an fmeasure of 7845 on the wsj test setthe winning system had an fmeasure of 7944 and our current system has an fmeasure of 8032for the brown test set our submitted version had an fmeasure of 6771 the winning system had 6775 and our current system has 6881figure 20 shows the perlabel performance of our joint model using the top five charniak parse trees on the test wsj test setthe columns show the precision recall fmeasure and the total number of arguments for each labelin accord with standard linguistic assumptions we have shown that there are substantial gains to be had by jointly modeling the argument frames of verbsthis is especially true when we model the dependencies with discriminative models capable of incorporating nonlocal featureswe incorporated joint information by using two types of features features of the complete sequence of argument labels and features modeling dependencies between the labels of arguments and syntactic features of other argumentswe showed that both types of features yielded significant performance gains over a stateoftheart local modelfor further improving performance in the presence of perfect syntactic parses we see at least three promising avenues for improvementfirst one could improve the identification of argument nodes by better handling of longdistance dependencies for example by incorporating models which recover the trace and null element information in penn treebank parse trees as in levy and manning second it may be possible to improve the accuracy on modifier labels by enhancing the knowledge about the semantic characteristics of specific words and phrases such as by improving lexical statistics for instance our performance on argmtmp roles is rather worse than that of some other groupsfinally it is worth exploring alternative handling of multiconstituent arguments our current model uses a simple rule in a postprocessing step to decide which constituents given the same label are part of the same argumentthis could be done more intelligently by the machine learning modelbecause perfect syntactic parsers do not yet exist and the major bottleneck to the performance of current semantic role labeling systems is syntactic parser performance the more important question is how to improve performance in the presence of parser errorswe explored a simple approach of choosing from among the top k parses from charniaks parser which resulted in an improvementother methods have also been proposed as we discussed in section 2 this is a very promising line of researchthis research was carried out while all the authors were at stanford universitywe thank the journal reviewers and the reviewers and audience at acl 2005 and conll 2005 for their helpful commentswe also thank dan jurafsky for his insightful comments and 
useful discussions. This work was supported in part by the Disruptive Technology Organization's Advanced Question Answering for Intelligence program.
J08-2002
A Global Joint Model for Semantic Role Labeling. We present a model for semantic role labeling that effectively captures the linguistic intuition that a semantic argument frame is a joint structure, with strong dependencies among the arguments. We show how to incorporate these strong dependencies in a statistical joint model with a rich set of features over multiple argument phrases. The proposed model substantially outperforms a similar state-of-the-art local model that does not include dependencies among different arguments. We evaluate the gains from incorporating this joint information on the PropBank corpus, when using correct syntactic parse trees as input and when using automatically derived parse trees. The gains amount to 24.1% error reduction on all arguments and 36.8% on core arguments for gold-standard parse trees on PropBank. For automatic parse trees, the error reductions are 8.3% and 10.3% on all and core arguments, respectively. We also present results on the CoNLL 2005 shared task data set. Additionally, we explore considering multiple syntactic analyses to cope with parser noise and uncertainty. We present a reranking model to jointly learn the semantic roles of multiple constituents in the SRL task.
algorithms for deterministic incremental dependency parsing parsing algorithms that process the input from left to right and construct a single derivation have often been considered inadequate for natural language parsing because of the massive ambiguity typically found in natural language grammars nevertheless it has been shown that such algorithms combined with treebankinduced classifiers can be used to build highly accurate disambiguating parsers in particular for dependencybased syntactic representations in this article we first present a general framework for describing and analyzing algorithms for deterministic incremental dependency parsing formalized as transition systems we then describe and analyze two families of such algorithms stackbased and listbased algorithms in the former family which is restricted to projective dependency structures we describe an arceager and an arcstandard variant in the latter family we present a projective and a nonprojective variant for each of the four algorithms we give proofs of correctness and complexity in addition we perform an experimental evaluation of all algorithms in combination with svm classifiers for predicting the next parsing action using data from thirteen languages we show that all four algorithms give competitive accuracy although the nonprojective listbased algorithm generally outperforms the projective algorithms for languages with a nonnegligible proportion of nonprojective constructions however the projective algorithms often produce comparable results when combined with the technique known as pseudoprojective parsing the linear time complexity of the stackbased algorithms gives them an advantage with respect to efficiency both in learning and in parsing but the projective listbased algorithm turns out to be equally efficient in practice moreover when the projective algorithms are used to implement pseudoprojective parsing they sometimes become less efficient in parsing than the nonprojective listbased algorithm although most of the algorithms have been partially described in the literature before this is the first comprehensive analysis and evaluation of the algorithms within a unified framework parsing algorithms that process the input from left to right and construct a single derivation have often been considered inadequate for natural language parsing because of the massive ambiguity typically found in natural language grammarsnevertheless it has been shown that such algorithms combined with treebankinduced classifiers can be used to build highly accurate disambiguating parsers in particular for dependencybased syntactic representationsin this article we first present a general framework for describing and analyzing algorithms for deterministic incremental dependency parsing formalized as transition systemswe then describe and analyze two families of such algorithms stackbased and listbased algorithmsin the former family which is restricted to projective dependency structures we describe an arceager and an arcstandard variant in the latter family we present a projective and a nonprojective variantfor each of the four algorithms we give proofs of correctness and complexityin addition we perform an experimental evaluation of all algorithms in combination with svm classifiers for predicting the next parsing action using data from thirteen languageswe show that all four algorithms give competitive accuracy although the nonprojective listbased algorithm generally outperforms the projective algorithms for languages with a 
nonnegligible proportion of nonprojective constructionshowever the projective algorithms often produce comparable results when combined with the technique known as pseudoprojective parsingthe linear time complexity of the stackbased algorithms gives them an advantage with respect to efficiency both in learning and in parsing but the projective listbased algorithm turns out to be equally efficient in practicemoreover when the projective algorithms are used to implement pseudoprojective parsing they sometimes become less efficient in parsing than the nonprojective listbased algorithmalthough most of the algorithms have been partially described in the literature before this is the first comprehensive analysis and evaluation of the algorithms within a unified frameworkbecause parsers for natural language have to cope with a high degree of ambiguity and nondeterminism they are typically based on different techniques than the ones used for parsing welldefined formal languagesfor example in compilers for programming languagesthus the mainstream approach to natural language parsing uses algorithms that efficiently derive a potentially very large set of analyses in parallel typically making use of dynamic programming and wellformed substring tables or chartswhen disambiguation is required this approach can be coupled with a statistical model for parse selection that ranks competing analyses with respect to plausibilityalthough it is often necessary for efficiency reasons to prune the search space prior to the ranking of complete analyses this type of parser always has to handle multiple analysesby contrast parsers for formal languages are usually based on deterministic parsing techniques which are maximally efficient in that they only derive one analysisthis is possible because the formal language can be defined by a nonambiguous formal grammar that assigns a single canonical derivation to each string in the language a property that cannot be maintained for any realistically sized natural language grammarconsequently these deterministic parsing techniques have been much less popular for natural language parsing except as a way of modeling human sentence processing which appears to be at least partly deterministic in nature more recently however it has been shown that accurate syntactic disambiguation for natural language can be achieved using a pseudodeterministic approach where treebankinduced classifiers are used to predict the optimal next derivation step when faced with a nondeterministic choice between several possible actionscompared to the more traditional methods for natural language parsing this can be seen as a severe form of pruning where parse selection is performed incrementally so that only a single analysis is derived by the parserthis has the advantage of making the parsing process very simple and efficient but the potential disadvantage that overall accuracy suffers because of the early commitment enforced by the greedy search strategysomewhat surprisingly though research has shown that with the right choice of parsing algorithm and classifier this type of parser can achieve stateoftheart accuracy especially when used with dependencybased syntactic representationsclassifierbased dependency parsing was pioneered by kudo and matsumoto for unlabeled dependency parsing of japanese with headfinal dependencies onlythe algorithm was generalized to allow both headfinal and headinitial dependencies by yamada and matsumoto who reported very good parsing accuracy for english using dependency 
structures extracted from the penn treebank for training and testingthe approach was extended to labeled dependency parsing by nivre hall and nilsson and nivre and scholz using a different parsing algorithm first presented in nivre at a recent evaluation of datadriven systems for dependency parsing with data from 13 different languages the deterministic classifierbased parser of nivre et al reached top performance together with the system of mcdonald lerman and pereira which is based on a global discriminative model with online learningthese results indicate that at least for dependency parsing deterministic parsing is possible without a drastic loss in accuracythe deterministic classifierbased approach has also been applied to phrase structure parsing although the accuracy for this type of representation remains a bit below the state of the artin this setting more competitive results have been achieved using probabilistic classifiers and beam search rather than strictly deterministic search as in the work by ratnaparkhi and sagae and lavie a deterministic classifierbased parser consists of three essential components a parsing algorithm which defines the derivation of a syntactic analysis as a sequence of elementary parsing actions a feature model which defines a feature vector representation of the parser state at any given time and a classifier which maps parser states as represented by the feature model to parsing actionsalthough different types of parsing algorithms feature models and classifiers have been used for deterministic dependency parsing there are very few studies that compare the impact of different componentsthe notable exceptions are cheng asahara and matsumoto who compare two different algorithms and two types of classifier for parsing chinese and hall nivre and nilsson who compare two types of classifiers and several types of feature models for parsing chinese english and swedishin this article we focus on parsing algorithmsmore precisely we describe two families of algorithms that can be used for deterministic dependency parsing supported by classifiers for predicting the next parsing actionthe first family uses a stack to store partially processed tokens and is restricted to the derivation of projective dependency structuresthe algorithms of kudo and matsumoto yamada and matsumoto and nivre all belong to this familythe second family represented by the algorithms described by covington and recently explored for classifierbased parsing in nivre instead uses open lists for partially processed tokens which allows arbitrary dependency structures to be processed we provide a detailed analysis of four different algorithms two from each family and give proofs of correctness and complexity for each algorithmin addition we perform an experimental evaluation of accuracy and efficiency for the four algorithms combined with stateoftheart classifiers using data from 13 different languagesalthough variants of these algorithms have been partially described in the literature before this is the first comprehensive analysis and evaluation of the algorithms within a unified frameworkthe remainder of the article is structured as followssection 2 defines the task of dependency parsing and section 3 presents a formal framework for the characterization of deterministic incremental parsing algorithmssections 4 and 5 contain the formal analysis of four different algorithms defined within the formal framework with proofs of correctness and complexitysection 6 presents the experimental evaluation 
section 7 reports on related work and section 8 contains our main conclusionsdependencybased syntactic theories are based on the idea that syntactic structure can be analyzed in terms of binary asymmetric dependency relations holding between the words of a sentencethis basic conception of syntactic structure underlies a variety of different linguistic theories such as structural syntax functional generative description meaningtext theory and word grammar in computational linguistics dependencybased syntactic representations have in recent years been used primarily in datadriven models which learn to produce dependency structures for sentences solely from an annotated corpus as in the work of eisner yamada and matsumoto nivre hall and nilsson and mcdonald crammer and pereira among othersone potential advantage of such models is that they are easily ported to any domain or language in which annotated resources existin this kind of framework the syntactic structure of a sentence is modeled by a dependency graph which represents each word and its syntactic dependents through labeled directed arcsthis is exemplified in figure 1 for a czech sentence taken from the prague dependency graph for an english sentence from the penn treebankdependency treebank and in figure 2 for an english sentence taken from the penn treebank 1 an artificial word root has been inserted at the beginning of each sentence serving as the unique root of the graphthis is a standard device that simplifies both theoretical definitions and computational implementationsgiven a set l 1l1 ll of dependency labels a dependency graph for a sentence x is a labeled directed graph g where the set v of nodes is the set of nonnegative integers up to and including n each corresponding to the linear position of a word in the sentence the set a of arcs is a set of ordered triples where i and j are nodes and l is a dependency labelbecause arcs are used to represent dependency relations we will say that i is the head and l is the dependency type of j conversely we say that j is a dependent of ia dependency graph g is wellformed if and only if we will refer to conditions 13 as root singlehead and acyclicity respectivelyany dependency graph satisfying these conditions is a dependency forest if it is also connected it is a dependency tree that is a directed tree rooted at the node 0it is worth noting that any dependency forest can be turned into a dependency tree by adding arcs from the node 0 to all other rootsa dependency graph g is projective if and only if for every arc e a and node k e v if i 0 and that q 1now consider the transition tp that results in configuration cpthere are three cases case 1 if tp rightarcsl then there is a node k such that j 1 and assume that jxj p 1 and gx consider the subgraph gx where ap ax ji p v j p that is the graph gx is exactly like gx except that the node p and all the arcs going into or out of this node are missingit is obvious that if gx is a projective dependency forest for the sentence x then gx is a projective dependency forest for the sentence x and that because jxj p there is a transition sequence c0q such that gc gx the terminal configuration of g0q must have the form cq where i e σcq if and only if i is a root in gx it follows that in gx i is either a root or a dependent of p in the latter case any j such that j e σcq and i 0 and that q 1 and we concentrate on the transition tp that results in configuration cpfor the arceager algorithm there are only two cases to consider because if tp rightarcel or 
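The well-formedness conditions (ROOT, SINGLE-HEAD, ACYCLICITY) and the projectivity condition defined above can be checked directly. This is an illustrative sketch, with arcs represented as (head, label, dependent) triples over nodes 0..n and node 0 the artificial root.

```python
def is_dependency_forest(arcs):
    """ROOT: node 0 has no head; SINGLE-HEAD: at most one head per node; ACYCLICITY."""
    head = {}
    for h, _, d in arcs:
        if d == 0 or d in head:
            return False
        head[d] = h
    for start in head:                       # following head links must never revisit a node
        seen, node = set(), start
        while node in head:
            if node in seen:
                return False
            seen.add(node)
            node = head[node]
    return True

def is_projective(arcs):
    """Every node strictly between the endpoints of an arc (i, l, j) must be reachable from i."""
    head = {d: h for h, _, d in arcs}
    def dominated_by(i, k):                  # path i ->* k, assuming the graph is a forest
        while k in head:
            k = head[k]
            if k == i:
                return True
        return False
    return all(dominated_by(i, k)
               for i, _, j in arcs
               for k in range(min(i, j) + 1, max(i, j)))
```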
tp shift then 0 which contradicts our assumption that q 0case 1 if tp leftarcl then there is a node k such that i 1 and assume that jxj p 1 and gx as in proof 1 we may now assume that there exists a transition sequence c0q for the sentence x and subgraph gx where the terminal configuration has the form cq for the arceager algorithm if i is a root in gx then i e σc but if i e σcq then i is either a root or has a head j such that j 1 and assume that jxj p 1 and gx as in proof 1 we may now assume that there exists a transition sequence c0q for the sentence x and subgraph gx but the terminal configuration now has the form cq where λcq 0 1 p 1in order to construct a transition sequence c0m such that gcm gx we instead start from the configuration nonprojective transition sequence for the czech sentence in figure 1 c0 cs and apply exactly the same q transitions reaching the configuration cq we then perform exactly p transitions in each case choosingieftarcn l if the token i at the head of a1 is a dependent of p in gx rightarcnl if i is the head of p and noarcn otherwiseone final shiftλ transition takes us to the terminal configuration cm theorem 8 the worstcase time complexity of the nonprojective listbased algorithm is o where n is the length of the input sentenceproof 8 assuming that the oracle and transition functions can be performed in some constant time the worstcase running time is bounded by the maximum number of transitions nivre deterministic incremental dependency parsing in a transition sequence c0m for a sentence x as for the stackbased algorithms there can be at most n shiftλ transitions in c0mmoreover because each of the three other transitions presupposes that λ1 is nonempty and decreases its length by 1 there can be at most i such transitions between the i 1th and the ith shift transitionit follows that the total number of transitions in c0m is bounded by en the assumption that transitions can be performed in constant time can be justified by the same kind of considerations as for the stackbased algorithms the only complication is the shiftλ transition which involves appending the two lists λ1 and λ2 but this can be handled with an appropriate choice of data structuresa more serious complication is the need to check the preconditions of leftarci and rightarci but if we assume that it is the responsibility of the oracle to ensure that the preconditions of any predicted transition are satisfied we can postpone the discussion of this problem until the end of section 61theorem 9 the worstcase space complexity of the nonprojective listbased algorithm is o where n is the length of the input sentenceproof 9 given the deterministic parsing algorithm only one configuration c needs to be stored at any given timeassuming that a single node can be stored in some constant space the space needed to store λ1 λ2 and β respectively is bounded by the number of nodesthe same holds for a given that a single arc can be stored in constant space because the number of arcs in a dependency forest is bounded by the number of nodeshence the worstcase space complexity is o the transition set t for the projective listbased parser is defined in figure 9 and contains four types of transitions transitions for the projective listbased parsing algorithmthe projective listbased parser uses the same basic strategy as its nonprojective counterpart but skips any pair that could give rise to a nonprojective dependency arcthe essential differences are the following the fact that the projective algorithm skips many node 
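The four transitions of the list-based parsers described above can be sketched over configurations (λ1, λ2, β, A); this follows the non-projective variant, with the preconditions (single-head, the acyclicity path check, and the extra checks of the projective variant) omitted for brevity. Representing λ1 with its head at the right end and β with the next input token at the front is an assumption made for the sketch.

```python
def left_arc(cfg, label):
    """Add an arc from the next input token j to the top of lambda1, i; move i to lambda2."""
    l1, l2, beta, arcs = cfg
    i, j = l1[-1], beta[0]
    return (l1[:-1], [i] + l2, beta, arcs | {(j, label, i)})

def right_arc(cfg, label):
    """Add an arc from i (top of lambda1) to the next input token j; move i to lambda2."""
    l1, l2, beta, arcs = cfg
    i, j = l1[-1], beta[0]
    return (l1[:-1], [i] + l2, beta, arcs | {(i, label, j)})

def no_arc(cfg):
    """Move the top of lambda1 to lambda2 without adding an arc."""
    l1, l2, beta, arcs = cfg
    return (l1[:-1], [l1[-1]] + l2, beta, arcs)

def shift(cfg):
    """Append lambda1 and lambda2, push the next input token onto lambda1, empty lambda2."""
    l1, l2, beta, arcs = cfg
    return (l1 + l2 + [beta[0]], [], beta[1:], arcs)
```

Note that each of the first three transitions pops one node from λ1, and only shift consumes an input token, which is consistent with the quadratic transition bound argued for above.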
pairs that are considered by the nonprojective algorithm makes it more efficient in practice although the worstcase time complexity remains the samefigure 10 shows the transition sequence needed to parse the english sentence in figure 2 with the same output as the stackbased sequences in figures 4 and 6theorem 10 the projective listbased algorithm is correct for the class of projective dependency foreststo show the soundness of the algorithm we show that the dependency graph defined by the initial configuration gc0 is a projective dependency forest and that every transition preserves this propertywe consider each of the relevant conditions in turn keeping in mind that the only transitions that modify the graph are leftarcpl and rightarcpl computational linguistics volume 34 number 4 graph nonprojective only if there is a node k such that i 0 and that q 1now consider the transition tp that results in configuration cpfor the projective listbased algorithm there are only two cases to consider because if tp rightarcpl or tp shift then 0 which contradicts our assumption that q 0case 1 if tp leftarcpl then there is a node k such that i 0 to the special root node 0 with a default label rootparsing accuracy was measured by the labeled attachment score that is the percentage of tokens that are assigned the correct head and dependency label as well as the unlabeled attachment score that is the percentage of tokens with the correct head and the label accuracy that is the percentage of tokens with the correct dependency labelall scores were computed with the scoring software from the conllx shared task evalpl with default settingsthis means that punctuation tokens are excluded in all scoresin addition to parsing accuracy we evaluated efficiency by measuring the learning time and parsing time in seconds for each data setbefore turning to the results of the evaluation we need to fulfill the promise from remarks 1 and 2 to discuss the way in which treebankinduced classifiers approximate oracles and to what extent they satisfy the condition of constanttime operation that was assumed in all the results on time complexity in sections 4 and 5when predicting the next transition at runtime there are two different computations that take nivre deterministic incremental dependency parsing place the first is the classifier returning a transition t as the output class for an input feature vector φ and the second is a check whether the preconditions of t are satisfied in c if the preconditions are satisfied the transition t is performed otherwise a default transition is performed instead8 the time required to compute the classification t of φ depends on properties of the classifier such as the number of support vectors and the number of classes for a multiclass svm classifier but is independent of the length of the input and can therefore be regarded as a constant as far as the time complexity of the parsing algorithm is concerned9 the check of preconditions is a trivial constanttime operation in all cases except one namely the need to check whether there is a path between two nodes for the leftarci and rightarci transitions of the nonprojective listbased algorithmmaintaining the information needed for this check and updating it with each addition of a new arc to the graph is equivalent to the unionfind operations for disjoint set data structuresusing the techniques of path compression and union by rank the amortized time per operation is o per operation where n is the number of elements and α is the inverse of the 
ackermann function which means that α is less than 5 for all remotely practical values of n and is effectively a small constant with this proviso all the complexity results from sections 4 and 5 can be regarded as valid also for the classifierbased implementation of deterministic incremental dependency parsingtable 3 shows the parsing accuracy obtained for each of the 7 parsers on each of the 13 languages as well as the average over all languages with the top score in each row set in boldfacefor comparison we also include the results of the two top scoring systems in the conllx shared task those of mcdonald lerman and pereira and nivre et al starting with the las we see that the multilingual average is very similar across the seven parsers with a difference of only 058 percentage points between the best and the worst result obtained with the nonprojective and the strictly projective version of the listbased parser respectivelyhowever given the large amount of data some of the differences are nevertheless statistically significant broadly speaking the group consisting of the nonprojective listbased parser and the three pseudoprojective parsers significantly outperforms the group consisting of the three projective parsers whereas there are no significant differences within the two groups10 this shows that the capacity to capture nonprojective dependencies does make a significant difference even though such dependencies are infrequent in most languagesthe best result is about one percentage point below the top scores from the original conllx shared task but it must be remembered that the results in this article have been obtained without optimization of feature representations or learning algorithm parametersthe net effect of this can be seen in the result for the pseudoprojective version of the arceager stackbased parser which is identical to the system used by nivre et al except for the lack of optimization and which suffers a loss of 112 percentage points overallthe results for uas show basically the same pattern as the las results but with even less variation between the parsersnevertheless there is still a statistically significant margin between the nonprojective listbased parser and the three pseudoprojective parsers on the one hand and the strictly projective parsers on the other11 for label accuracy finally the most noteworthy result is that the strictly projective parsers consistently outperform their pseudoprojective counterparts although the difference is statistically significant only for the projective listbased parserthis can be explained by the fact that the pseudoprojective parsing technique increases the number of distinct dependency labels using labels to distinguish not only between different syntactic functions but also between lifted and unlifted arcsit is therefore understandable that the pseudoprojective parsers suffer a drop in pure labeling accuracydespite the very similar performance of all parsers on average over all languages there are interesting differences for individual languages and groups of languagesthese differences concern the impact of nonprojective pseudoprojective and strictly projective parsing on the one hand and the effect of adopting an arceager or an arcstandard parsing strategy for the stackbased parsers on the otherbefore we turn to the evaluation of efficiency we will try to analyze some of these differences in a little more detail starting with the different techniques for capturing nonprojective dependenciesfirst of all we may observe that 
the nonprojective listbased parser outperforms its strictly projective counterpart for all languages except chinesethe result for chinese is expected given that it is the only data set that does not contain any nonprojective dependencies but the difference in accuracy is very slight thus it seems that the nonprojective parser can also be used without loss in accuracy for languages with very few nonprojective structuresthe relative improvement in accuracy for the nonprojective parser appears to be roughly linear in the percentage of nonprojective dependencies found in the data set with a highly significant correlation the only language that clearly diverges from this trend is german where the relative improvement is much smaller than expectedif we compare the nonprojective listbased parser to the strictly projective stackbased parsers we see essentially the same pattern but with a little more variationfor the arceager stackbased parser the only anomaly is the result for arabic which is significantly higher than the result for the nonprojective parser but this seems to be due to a particularly bad performance of the listbased parsers as a group for this language12 for the arcstandard stackbased parser the data is considerably more noisy which is related to the fact that the arcstandard parser in itself has a higher variance than the other parsers an observation that we will return to later onstill the correlation between relative improvement in accuracy and percentage of nonprojective dependencies is significant for both the arceager parser and the arcstandard parser although clearly not as strong as for the listbased parserit therefore seems reasonable to conclude that the nonprojective parser in general can be expected to outperform a strictly projective parser with a margin that is directly related to the proportion of nonprojective dependencies in the datahaving compared the nonprojective listbased parser to the strictly projective parsers we will now scrutinize the results obtained when coupling the projective parsers with the pseudoprojective parsing technique as an alternative method for capturing nonprojective dependenciesthe overall pattern is that pseudoprojective parsing improves the accuracy of a projective parser for languages with more than 1 of nonprojective dependencies as seen from the results for czech dutch german and portuguesefor these languages the pseudoprojective parser is never outperformed by its strictly projective counterpart and usually does considerably better although the improvements for german are again smaller than expectedfor slovene and turkish we find improvement only for two out of three parsers despite a relatively high share of nonprojective dependencies given that slovene and turkish have the smallest training data sets of all languages this is consistent with previous studies showing that pseudoprojective parsing is sensitive to data sparseness for languages with a lower percentage of nonprojective dependencies the pseudoprojective technique seems to hurt performance more often than not possibly as a result of decreasing the labeling accuracy as noted previouslyit is worth noting that chinese is a special case in this respectbecause there are no nonprojective dependencies in this data set the projectivized training data set will be identical to the original one which means that the pseudoprojective parser will behave exactly as the projective onecomparing nonprojective parsing to pseudoprojective parsing it seems clear that both can improve parsing 
accuracy in the presence of significant amounts of nonprojective dependencies but the former appears to be more stable in that it seldom or never hurts performance whereas the latter can be expected to have a negative effect on accuracy when the amount of training data or nonprojective dependencies is not high enoughmoreover the nonprojective parser tends to outperform the best pseudoprojective parsers both on average and for individual languagesin fact the pseudoprojective technique outperforms the nonprojective parser only in combination with the arcstandard stackbased parsing algorithm and this seems to be due more to the arcstandard parsing strategy than to the pseudoprojective technique as suchthe relevant question here is therefore why arcstandard parsing seems to work particularly well for some languages with or without pseudoprojective parsinggoing through the results for individual languages it is clear that the arcstandard algorithm has a higher variance than the other algorithmsfor bulgarian dutch and spanish the accuracy is considerably lower than for the other algorithms in most cases by more than one percentage pointbut for arabic czech and slovene we find exactly the opposite pattern with the arcstandard parsers sometimes outperforming the other parsers by more than two percentage pointsfor the remaining languages the arcstandard algorithm performs on a par with the other algorithms13 in order to explain this pattern we need to consider the way in which properties of the algorithms interact with properties of different languages and the way they have been annotated syntacticallyfirst of all it is important to note that the two listbased algorithms and the arceager variant of the stackbased algorithm are all arceager in the sense that an arc is always added at the earliest possible moment that is in the first configuration where i and j are the target tokensfor the arcstandard stackbased parser this is still true for left dependents such that j i but not for right dependents where an arc should be added only at the point where all arcs of the form have already been added this explains why the results for the two listbased parsers and the arceager stackbased parser are so well correlated but it does not explain why the arcstandard strategy works better for some languages but not for othersthe arceager strategy has an advantage in that a right dependent j can be attached to its head i at any time without having to decide whether j itself should have a right dependentby contrast with the arcstandard strategy it is necessary to decide not only whether j is a right dependent of i but also whether it should be added now or later which means that two types of errors are possible even when the decision to attach j to i is correctattaching too early means that right dependents can never be attached to j postponing the attachment too long means that j will never be added to inone of these errors can occur with the arceager strategy which therefore can be expected to work better for data sets where this kind of ambiguity is commonly foundin order for this to be the case there must first of all be a significant proportion of leftheaded structures in the datathus we find that in all the data sets for which the arcstandard parsers do badly the percentage of leftheaded dependencies is in the 5075 rangehowever it must also be pointed out that the highest percentage of all is found in arabic which means that a high proportion of leftheaded structures may be a necessary but not sufficient 
condition for the arceager strategy to work better than the arcstandard strategywe conjecture that an additional necessary condition is an annotation style that favors more deeply embedded structures giving rise to chains of leftheaded structures where each node is dependent on the preceding one which increases the number of points at which an incorrect decision can be made by an arcstandard parserhowever we have not yet fully verified the extent to which this condition holds for all the data sets where the arceager parsers outperform their arcstandard counterpartsalthough the arceager strategy has an advantage in that the decisions involved in attaching a right dependent are simpler it has the disadvantage that it has to commit earlythis may either lead the parser to add an arc when it is not correct to do so or fail to add the same arc in a situation where it should have been added in both cases because the information available at an early point makes the wrong decision look probablein the first case the arcstandard parser may still get the analysis right if it also seems probable that j should have a right dependent in the second case it may get a second chance to add the arc if it in fact adds a right dependent to j at a later pointit is not so easy to predict what type of structures and annotation will favor the arcstandard parser in this way but it is likely that having many right dependents attached to the root could cause problems for the arceager algorithms since these dependencies determine the global structure and often span long distances which makes it harder to make correct decisions early in the parsing processthis is consistent with earlier studies showing that parsers using the arceager stackbased algorithm tend to predict dependents of the root with lower precision than other algorithms14 interestingly the three languages for which the arcstandard parser has the highest improvement have a very similar annotation based on the prague school tradition of dependency grammar which not only allows multiple dependents of the root but also uses several different labels for these dependents which means that they will be analyzed correctly only if a rightarc transition is performed with the right label at exactly the right point in timethis is in contrast to annotation schemes that use a default label root for dependents of the root where such dependents can often be correctly recovered in postprocessing by attaching all remaining roots to the special root node with the default labelwe can see the effect of this by comparing the two stackbased parsers with respect to precision and recall for the dependency type pred which is the most important label for dependents of the root in the data sets for arabic czech and slovenewhile the arcstandard parser has 7802 precision and 7022 recall averaged over the three languages the corresponding figures for the arceager parser are as low as 6893 and 6593 respectively which represents a drop of almost ten percentage points in precision and almost five percentage points in recallsummarizing the results of the accuracy evaluation we have seen that all four algorithms can be used for deterministic classifierbased parsing with competitive accuracythe results presented are close to the state of the art without any optimization of feature representations and learning algorithm parameterscomparing different algorithms we have seen that the capacity to capture nonprojective dependencies makes a significant difference in general but with 
languagespecific effects that depend primarily on the frequency of nonprojective constructionswe have also seen that the nonprojective listbased algorithm is more stable and predictable in this respect compared to the use of pseudoprojective parsing in combination with an essentially projective parsing algorithmfinally we have observed quite strong languagespecific effects for the difference between arcstandard and arceager parsing for the stackbased algorithms effects that can be tied to differences in linguistic structure and annotation style between different data sets although a much more detailed error analysis is needed before we can draw precise conclusions about the relative merits of different parsing algorithms for different languages and syntactic representationsbefore we consider the evaluation of efficiency in both learning and parsing it is worth pointing out that the results will be heavily dependent on the choice of support vector machines for classification and cannot be directly generalized to the use of deterministic incremental parsing algorithms together with other kinds of classifiershowever because support vector machines constitute the state of the art in classifierbased parsing it is still worth examining how learning and parsing times vary with the parsing algorithm while parameters of learning and classification are kept constanttable 4 gives the results of the efficiency evaluationlooking first at learning times it is obvious that learning time depends primarily on the number of training instances which is why we can observe a difference of several orders of magnitude in learning time between the biggest training set and the smallest training set for a given parsing algorithmbroadly speaking for any given parsing algorithm the ranking of languages with respect to learning time follows the ranking with respect to training set size with a few noticeable exceptionsthus learning times are shorter than expected relative to other languages for swedish and japanese but longer than expected for arabic and for danishhowever the number of training instances for the svm learner depends not only on the number of tokens in the training set but also on the number of transitions required to parse a sentence of length n this explains why the nonprojective listbased algorithm with its quadratic complexity consistently has longer learning times than the linear stackbased algorithmshowever it can also be noted that the projective listbased algorithm despite having the same worstcase complexity as the nonprojective algorithm in practice behaves much more like the arceager stackbased algorithm and in fact has a slightly lower learning time than the latter on averagethe arcstandard stackbased algorithm finally again shows much more variation than the other algorithmson average it is slower to train than the arceager algorithm and sometimes very substantially so but for a few languages it is actually faster this again shows that learning time depends on other properties of the training sets than sheer size and that some data sets may be more easily separable for the svm learner with one parsing algorithm than with anotherit is noteworthy that there are no consistent differences in learning time between the strictly projective parsers and their pseudoprojective counterparts despite the fact that the pseudoprojective technique increases the number of distinct classes which in turn increases the number of binary classifiers that need to be trained in order to perform multiclass 
classification with the oneversusone methodthe number of classifiers is m 2 where m is the number of classes and the pseudoprojective technique with the encoding scheme used here can theoretically lead to a quadratic increase in the number of classesthe fact that this has no noticeable effect on efficiency indicates that learning time is dominated by other factors in particular the number of training instancesturning to parsing efficiency we may first note that parsing time is also dependent on the size of the training set through a dependence on the number of support vectors which tend to grow with the size of the training setthus for any given algorithm there is a strong tendency that parsing times for different languages follow the same order as training set sizesthe notable exceptions are arabic turkish and chinese which have higher parsing times than expected and japanese where parsing is surprisingly fastbecause these deviations are the same for all algorithms it seems likely that they are related to specific properties of these data setsit is also worth noting that for arabic and japanese the deviations are consistent across learning and parsing whereas for chinese there is no consistent trend comparing algorithms we see that the nonprojective listbased algorithm is slower than the strictly projective stackbased algorithms which can be expected from the difference in time complexitybut we also see that the projective listbased algorithm despite having the same worstcase complexity as the nonprojective algorithm in practice behaves like the lineartime algorithms and is in fact slightly faster on average than the arceager stackbased algorithm which in turn outperforms the arcstandard stackbased algorithmthis is consistent with the results from oracle parsing reported in nivre which show that with the constraint of projectivity the relation between sentence length and number of transitions for the listbased parser can be regarded as linear in practicecomparing the arceager and the arcstandard variants of the stackbased algorithm we find the same kind of pattern as for learning time in that the arceager parser is faster for all except a small set of languages chinese japanese slovene and turkishonly two of these japanese and slovene are languages for which learning is also faster with the stackbased algorithm which again shows that there is no straightforward correspondence between learning time and parsing timeperhaps the most interesting result of all as far as efficiency is concerned is to be found in the often dramatic differences in parsing time between the strictly projective parsers and their pseudoprojective counterpartsalthough we did not see any clear effect of the increased number of classes hence classifiers on learning time earlier it is quite clear that there is a noticeable effect on parsing time with the pseudoprojective parsers always being substantially slowerin fact in some cases the pseudoprojective parsers are also slower than the nonprojective listbased parser despite the difference in time complexity that exists at least for the stackbased parsersthis result holds on average over all languages and for five out of thirteen of the individual languages and shows that the advantage of lineartime parsing complexity can be outweighed by the disadvantage of a more complex classification problem in pseudoprojective parsingin other words the larger constant associated with a larger cohort of svm classifiers for the pseudoprojective parser can be more important than the 
better asymptotic complexity of the lineartime algorithm in the range of sentence lengths typically found in natural languagelooking more closely at the variation in sentence length across languages we find that the pseudoprojective parsers are faster than the nonprojective parser for all data sets with an average sentence length above 18for data sets with shorter sentences the nonprojective parser is more efficient in all except three cases bulgarian chinese and japanesefor chinese this is easily explained by the absence of nonprojective dependencies making the performance of the pseudoprojective parsers identical to their strictly projective counterpartsfor the other two languages the low number of distinct dependency labels for japanese and the low percentage of nonprojective dependencies for bulgarian are factors that mitigate the effect of enlarging the set of dependency labels in pseudoprojective parsingwe conclude that the relative efficiency of nonprojective and pseudoprojective parsing depends on several factors of which sentence length appears to be the most important but where the number of distinct dependency labels and the percentage of nonprojective dependencies also play a roledatadriven dependency parsing using supervised machine learning was pioneered by eisner who showed how traditional chart parsing techniques could be adapted for dependency parsing to give efficient parsing with exact inference over a probabilistic model where the score of a dependency tree is the sum of the scores of individual arcsthis approach has been further developed in particular by ryan mcdonald and his colleagues and is now known as spanning tree parsing because the problem of finding the most probable tree under this type of model is equivalent to finding an optimum spanning tree in a dense graph containing all possible dependency arcsif we assume that the score of an individual arc is independent of all other arcs this problem can be solved efficiently for arbitrary nonprojective dependency trees using the chuliuedmonds algorithm as shown by mcdonald et al spanning tree algorithms have so far primarily been combined with online learning methods such as mira the approach of deterministic classifierbased parsing was first proposed for japanese by kudo and matsumoto and for english by yamada and matsumoto in contrast to spanning tree parsing this can be characterized as a greedy inference strategy trying to construct a globally optimal dependency graph by making a sequence of locally optimal decisionsthe first strictly incremental parser of this kind was described in nivre and used for classifierbased parsing of swedish by nivre hall and nilsson and english by nivre and scholz altogether it has now been applied to 19 different languages most algorithms in this tradition are restricted to projective dependency graphs but it is possible to recover nonprojective dependencies using pseudoprojective parsing more recently algorithms for nonprojective classifierbased parsing have been proposed by attardi and nivre the strictly deterministic parsing strategy has been relaxed in favor of nbest parsing by johansson and nugues among othersthe dominant learning method in this tradition is support vector machines but memorybased learning has also been used of the algorithms described in this article the arceager stackbased algorithm is essentially the algorithm proposed for unlabeled dependency parsing in nivre extended to labeled dependency parsing in nivre hall and nilsson and most fully described in nivre 
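For concreteness, the following is a minimal Python sketch of the arc-eager stack-based transition system described in this article (Shift, Right-Arc, Left-Arc, Reduce, with the artificial root node initialized on the stack). It is an illustrative reconstruction, not the MaltParser code used in the experiments: dependency labels are omitted, the data representation and function names are invented for the example, and a static oracle derived from a gold tree stands in for the SVM classifier that predicts transitions at parsing time; a predicted transition whose preconditions fail falls back to a default Shift, as noted earlier in the article.

# Minimal sketch of the arc-eager stack-based transition system (unlabeled).
# Tokens are integers 1..n; 0 is the artificial root, initialized on the stack.
# 'heads' collects the predicted arcs as dependent -> head.

def arc_eager_parse(n, oracle):
    stack, buffer, heads = [0], list(range(1, n + 1)), {}
    while buffer:
        action = oracle(stack, buffer, heads)
        s, b = stack[-1], buffer[0]
        if action == "LEFT-ARC" and s != 0 and s not in heads:
            heads[s] = b              # arc b -> s; s is popped and can take no more dependents
            stack.pop()
        elif action == "RIGHT-ARC":
            heads[b] = s              # arc s -> b; b is pushed so it can take dependents later
            stack.append(buffer.pop(0))
        elif action == "REDUCE" and s in heads:
            stack.pop()               # s already has a head and is finished
        else:                         # SHIFT, also the default when preconditions fail
            stack.append(buffer.pop(0))
    return heads

def static_oracle(gold):
    """Derive transitions from a gold head function (projective trees only)."""
    def oracle(stack, buffer, heads):
        s, b = stack[-1], buffer[0]
        if s != 0 and s not in heads and gold[s] == b:
            return "LEFT-ARC"
        if gold[b] == s:
            return "RIGHT-ARC"
        # s is finished once it has its head and no remaining dependent in the buffer
        if s in heads and all(gold[k] != s for k in buffer):
            return "REDUCE"
        return "SHIFT"
    return oracle

if __name__ == "__main__":
    # toy example: a five-word sentence with gold heads (1-indexed, 0 = root)
    gold = {1: 2, 2: 3, 3: 0, 4: 5, 5: 3}
    print(arc_eager_parse(5, static_oracle(gold)))    # reproduces the gold arcs

Running the example reproduces the gold arcs for the toy sentence, mirroring the kind of deterministic transition sequence the article illustrates for the stack-based parsers; replacing the static oracle with a trained classifier over features of the current configuration gives the classifier-based setup used in the evaluation.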
the major difference is that the parser is now initialized with the special root node on the stack whereas earlier formulations had an empty stack at initialization15 the arcstandard stackbased algorithm is briefly described in nivre but can also be seen as an incremental version of the algorithm of yamada and matsumoto where incrementality is achieved by only allowing one lefttoright pass over the input whereas yamada and matsumoto perform several iterations in order to construct the dependency graph bottomup breadthfirst as it werethe listbased algorithms are both inspired by the work of covington although the formulations are not equivalentthey have previously been explored for deterministic classifierbased parsing in nivre a more orthodox implementation of covingtons algorithms for datadriven dependency parsing is found in marinov in this article we have introduced a formal framework for deterministic incremental dependency parsing where parsing algorithms can be defined in terms of transition systems that are deterministic only together with an oracle for predicting the next transitionwe have used this framework to analyze four different algorithms proving the correctness of each algorithm relative to a relevant class of dependency graphs and giving complexity results for each algorithmto complement the formal analysis we have performed an experimental evaluation of accuracy and efficiency using svm classifiers to approximate oracles and using data from 13 languagesthe comparison shows that although strictly projective dependency parsing is most efficient both in learning and in parsing the capacity to produce nonprojective dependency graphs leads to better accuracy unless it can be assumed that all structures are strictly projectivethe evaluation also shows that using the nonprojective listbased parsing algorithm gives a more stable improvement in this respect than applying the pseudoprojective parsing technique to a strictly projective parsing algorithmmoreover despite its quadratic time complexity the nonprojective parser is often as efficient as the pseudoprojective parsers in practice because the extended set of dependency labels used in pseudoprojective parsing slows down classificationthis demonstrates the importance of complementing the theoretical analysis of complexity with practical running time experimentsalthough the nonprojective listbased algorithm can be said to give the best tradeoff between accuracy and efficiency when results are averaged over all languages in the sample we have also observed important languagespecific effectsin particular the arceager strategy inherent not only in the arceager stackbased algorithm but also in both versions of the listbased algorithm appears to be suboptimal for some languages and syntactic representationsin such cases using the arcstandard parsing strategy with or without pseudoprojective parsing may lead to significantly higher accuracymore research is needed to determine exactly which properties of linguistic structures and their syntactic analysis give rise to these effectson the whole however the four algorithms investigated in this article give very similar performance both in terms of accuracy and efficiency and several previous studies have shown that both the stackbased and the listbased algorithms can achieve stateoftheart accuracy together with properly trained classifiers i want to thank my students johan hall and jens nilsson for fruitful collaboration and for their contributions to the maltparser system which was used 
for all experiments i also want to thank sabine buchholz matthias buchkromann walter daelemans gulsen eryigit jason eisner jan hajic sandra kubler marco kuhlmann yuji matsumoto ryan mcdonald kemal oflazer kenji sagae noah a smith and deniz yuret for useful discussions on topics relevant to this article i am grateful to three anonymous reviewers for many helpful suggestions that helped improve the final version of the article the work has been partially supported by the swedish research council
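One implementation point from the complexity analysis earlier in the article is worth illustrating in code: the path precondition of the non-projective list-based algorithm, which the article notes is equivalent to union-find operations over disjoint sets with path compression and union by rank. The sketch below is one standard realization of such a structure; the class and method names are invented here, and it is a sketch rather than code from the evaluated system. The key observation is that, because the would-be dependent has no head yet and the arc set is a forest, a directed path back to it exists exactly when the two nodes already belong to the same tree, so a connectivity query suffices.

# Disjoint-set (union-find) structure with path compression and union by rank,
# usable for the path precondition of the non-projective list-based parser.

class DisjointSets:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path compression (halving)
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx                                # union by rank
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

    def connected(self, x, y):
        return self.find(x) == self.find(y)

# Schematic use inside the parser: before Left-Arc or Right-Arc on (i, j),
# test ds.connected(i, j); after adding an arc between i and j, call ds.union(i, j).
ds = DisjointSets(6)
ds.union(2, 3)                                    # an arc between 2 and 3 has been added
print(ds.connected(2, 3), ds.connected(2, 5))     # True False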
J08-4003
algorithms for deterministic incremental dependency parsingparsing algorithms that process the input from left to right and construct a single derivation have often been considered inadequate for natural language parsing because of the massive ambiguity typically found in natural language grammarsnevertheless it has been shown that such algorithms combined with treebankinduced classifiers can be used to build highly accurate disambiguating parsers in particular for dependencybased syntactic representationsin this article we first present a general framework for describing and analyzing algorithms for deterministic incremental dependency parsing formalized as transition systemswe then describe and analyze two families of such algorithms stackbased and listbased algorithmsin the former family which is restricted to projective dependency structures we describe an arceager and an arcstandard variant in the latter family we present a projective and a nonprojective variantfor each of the four algorithms we give proofs of correctness and complexityin addition we perform an experimental evaluation of all algorithms in combination with svm classifiers for predicting the next parsing action using data from thirteen languageswe show that all four algorithms give competitive accuracy although the nonprojective listbased algorithm generally outperforms the projective algorithms for languages with a nonnegligible proportion of nonprojective constructionshowever the projective algorithms often produce comparable results when combined with the technique known as pseudoprojective parsingthe linear time complexity of the stackbased algorithms gives them an advantage with respect to efficiency both in learning and in parsing but the projective listbased algorithm turns out to be equally efficient in practicemoreover when the projective algorithms are used to implement pseudoprojective parsing they sometimes become less efficient in parsing than the nonprojective listbased algorithmalthough most of the algorithms have been partially described in the literature before this is the first comprehensive analysis and evaluation of the algorithms within a unified frameworkwe give a systematic description of the arcstandard and arceager algorithms currently two popular transitionbased parsing methods for wordlevel dependency parsing
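To spell out the quadratic bound that the worst-case complexity result for the non-projective list-based algorithm relies on, the counting argument can be reconstructed as follows: there are at most n Shift transitions, and between the (i-1)th and the ith Shift there can be at most i Left-Arc, Right-Arc, or No-Arc transitions, since each of them requires the first list to be non-empty and shortens it by one, and the list holds at most i nodes at that point. Summing over the sentence gives

% reconstruction of the transition-count bound for the non-projective
% list-based parser on a sentence of length n
\[
  |C_{0,m}| \;\le\; \sum_{i=1}^{n} (1 + i)
  \;=\; n + \frac{n(n+1)}{2}
  \;\in\; O(n^2),
\]

which is the worst-case time complexity stated in the theorem; the exact form of the sum is a reconstruction of the argument sketched in the proof, not a quotation from it.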
survey article intercoder agreement for computational linguistics this article is a survey of methods for measuring agreement among corpus annotators it exposes the mathematics and underlying assumptions of agreement coefficients covering krippendorffs alpha as well as scotts pi and cohens kappa discusses the use of coefficients in several annotation tasks and argues that weighted alphalike coefficients traditionally less used than kappalike measures in computational linguistics may be more appropriate for many corpus annotation tasksbut that their use makes the interpretation of the value of the coefficient even harder since the mid 1990s increasing effort has gone into putting semantics and discourse research on the same empirical footing as other areas of computational linguistics this soon led to worries about the subjectivity of the judgments required to create annotated resources much greater for semantics and pragmatics than for the aspects of language interpretation of concern in the creation of early resources such as the brown corpus the british national corpus or the penn treebank problems with early proposals for assessing coders agreement on discourse segmentation tasks led carletta to suggest the adoption of the k coefficient of agreement a variant of cohens x as this had already been used for similar purposes in content analysis for a long time1 carlettas proposals were enormously influential and k quickly became the de facto standard for measuring agreement in computational linguistics not only in work on discourse but also for other annotation tasks during this period however a number of questions have also been raised about k and similar coefficientssome already in carlettas own work ranging from simple questions about the way the coefficient is computed to debates about which levels of agreement can be considered acceptable to the realization that k is not appropriate for all types of agreement di eugenio raised the issue of the effect of skewed distributions on the value of k and pointed out that the original x developed by cohen is based on very different assumptions about coder bias from the k of siegel and castellan which is typically used in clthis issue of annotator bias was further debated in di eugenio and glass and craggs and mcgee wood di eugenio and glass pointed out that the choice of calculating chance agreement by using individual coder marginals or pooled distributions can lead to reliability values falling on different sides of the accepted 067 threshold and recommended reporting both valuescraggs and mcgee wood argued following krippendorff that measures like cohens x are inappropriate for measuring agreementfinally passonneau has been advocating the use of krippendorffs α for coding tasks in cl which do not involve nominal and disjoint categories including anaphoric annotation wordsense tagging and summarization now that more than ten years have passed since carlettas original presentation at the workshop on empirical methods in discourse it is time to reconsider the use of coefficients of agreement in cl in a systematic wayin this article a survey of coefficients of agreement and their use in cl we have three main goalsfirst we discuss in some detail the mathematics and underlying assumptions of the coefficients used or mentioned in the cl and content analysis literaturessecond we also cover in some detail krippendorffs α often mentioned but never really discussed in detail in previous cl literature other than in the papers by passonneau just 
mentionedthird we review the past ten years of experience with coefficients of agreement in cl reconsidering the issues that have been raised also from a mathematical perspective2we begin with a quick recap of the goals of agreement studies inspired by krippendorff researchers who wish to use handcoded datathat is data in which items are labeled with categories whether to support an empirical claim or to develop and test a computational modelneed to show that such data are reliablethe fundamental assumption behind the methodologies discussed in this article is that data are reliable if coders can be shown to agree on the categories assigned to units to an extent determined by the purposes of the study if different coders produce consistently similar results then we can infer that they have internalized a similar understanding of the annotation guidelines and we can expect them to perform consistently under this understandingreliability is thus a prerequisite for demonstrating the validity of the coding schemethat is to show that the coding scheme captures the truth of the phenomenon being studied in case this matters if the annotators are not consistent then either some of them are wrong or else the annotation scheme is inappropriate for the datahowever it is important to keep in mind that achieving good agreement cannot ensure validity two observers of the same event may well share the same prejudice while still being objectively wrongit is useful to think of a reliability study as involving a set of items a set of categories and a set of coders who assign to each item a unique category labelthe discussions of reliability in the literature often use different notations to express these conceptswe introduce a uniform notation which we hope will make the relations between the different coefficients of agreement clearerconfusion also arises from the use of the letter p which is used in the literature with at least three distinct interpretations namely proportion percent and probability we will use the following notation uniformly throughout the article respectivelythe relevant coefficient will be indicated with a superscript when an ambiguity may arise p is reserved for the probability of a variable and ˆp is an estimate of such probability from observed datafinally we use n with a subscript to indicate the number of judgments of a given type the simplest measure of agreement between two coders is percentage of agreement or observed agreement defined for example by scott as the percentage of judgments on which the two analysts agree when coding the same data independently this is the number of items on which the coders agree divided by the total number of itemsmore precisely and looking ahead to the following discussion observed agreement is the arithmetic mean of the agreement value agri for all items i i defined as follows for example let us assume a very simple annotation scheme for dialogue acts in informationseeking dialogues which makes a binary distinction between the categories statement and inforequest as in the damsl dialogue act scheme two coders classify 100 utterances according to this scheme as shown in table 1percentage agreement for this data set is obtained by summing up the cells on the diagonal and dividing by the total number of items ao 100 07observed agreement enters in the computation of all the measures of agreement we consider but on its own it does not yield values that can be compared across studies because some agreement is due to chance and the amount of chance 
agreement is affected by two factors that vary from one study to the otherfirst of all as scott points out percentage agreement is biased in favor of dimensions with a small number of categories in other words given two coding schemes for the same phenomenon the one with fewer categories will result in higher percentage agreement just by chanceif two coders randomly classify utterances in a uniform manner using the scheme of table 1 we would expect an equal number of items to fall in each of the four cells in the table and therefore pure chance will cause the coders to agree on half of the items but suppose we want to refine the simple binary coding scheme by introducing a new category check as in the maptask coding scheme if two coders randomly classify utterances in a uniform manner using the three categories in the second scheme they would only agree on a third of the items a simple example of agreement on dialogue act taggingthe second reason percentage agreement cannot be trusted is that it does not correct for the distribution of items among categories we expect a higher percentage agreement when one category is much more common than the otherthis problem already raised by hsu and field among others can be illustrated using the following example suppose 95 of utterances in a particular domain are statement and only 5 are inforequestwe would then expect by chance that 095 095 09025 of the utterances would be classified as statement by both coders and 005 005 00025 as inforequest so the coders would agree on 905 of the utterancesunder such circumstances a seemingly high observed agreement of 90 is actually worse than expected by chancethe conclusion reached in the literature is that in order to get figures that are comparable across studies observed agreement has to be adjusted for chance agreementthese are the measures we will review in the remainder of this articlewe will not look at the variants of percentage agreement used in cl work on discourse before the introduction of kappa such as percentage agreement with an expert and percentage agreement with the majority see carletta for discussion and criticism3 all of the coefficients of agreement discussed in this article correct for chance on the basis of the same ideafirst we find how much agreement is expected by chance let us call this value aethe value 1 ae will then measure how much agreement over and above chance is attainable the value ao ae will tell us how much agreement beyond chance was actually foundthe ratio between ao ae and 1 ae will then tell us which proportion of the possible agreement beyond chance was actually observedthis idea is expressed by the following formulathe three bestknown coefficients s 7r and x and their generalizations all use this formula whereas krippendorffs α is based on a related formula expressed in terms of disagreement all three coefficients therefore yield values of agreement between ae1 ae and 1 with the value 0 signifying chance agreement note also that whenever agreement is less than perfect statement 0 1 05 inforequest 1 0 05 the only sources of disagreement in the coding example of table 4 are the six utterances marked as inforequests by coder a and statements by coder b which receive the maximal weight of 1 and the six utterances marked as inforequests by coder a and checks by coder b which are given a weight of 05the observed disagreement is calculated by summing up all the cells in the contingency table multiplying each cell by its respective weight and dividing the total by the 
number of items expected disagreement of the weighted coefficients for the data from table 4two issues recently raised by di eugenio and glass concern the behavior of agreement coefficients when the annotation data are severely skewedone issue which di eugenio and glass call the bias problem is that π and κ yield quite different numerical values when the annotators marginal distributions are widely divergent the other issue the prevalence problem is the exceeding difficulty in getting high agreement values when most of the items fall under one categorylooking at these two problems in detail is useful for understanding the differences between the coefficientsthe difference between π and α on the one hand and κ on the other hand lies in the interpretation of the notion of chance agreement whether it is the amount expected from the the actual distribution of items among categories or from individual coder priors as mentioned in section 24 this difference has been the subject of much debate a claim often repeated in the literature is that singledistribution coefficients like π and α assume that different coders produce similar distributions of items among categories with the implication that these coefficients are inapplicable when the annotators show substantially different distributionsrecommendations vary zwick suggests testing the individual coders distributions using the modified χ2 test of stuart and discarding the annotation as unreliable if significant systematic discrepancies are observedin contrast hsu and field recommend reporting the value of κ even when the coders produce different distributions because it is the only index that could legitimately be applied in the presence of marginal heterogeneity likewise di eugenio and glass recommend using κ in the vast majority of discourse and dialoguetagging efforts where the individual coders distributions tend to varyall of these proposals are based on a misconception that artstein and poesio intercoder agreement for cl singledistribution coefficients require similar distributions by the individual annotators in order to work properlythis is not the casethe difference between the coefficients is only in the interpretation of chance agreement πstyle coefficients calculate the chance of agreement among arbitrary coders whereas κstyle coefficients calculate the chance of agreement among the coders who produced the reliability datatherefore the choice of coefficient should not depend on the magnitude of the divergence between the coders but rather on the desired interpretation of chance agreementanother common claim is that individualdistribution coefficients like κ reward annotators for disagreeing on the marginal distributionsfor example di eugenio and glass say that κ suffers from what they call the bias problem described as the paradox that κco our κ increases as the coders become less similar similar reservations about the use of κ have been noted by brennan and prediger and zwick however the bias problem is less paradoxical than it soundsalthough it is true that for a fixed observed agreement a higher difference in coder marginals implies a lower expected agreement and therefore a higher κ value the conclusion that κ penalizes coders for having similar distributions is unwarrantedthis is because ao and ae are not independent both are drawn from the same set of observationswhat κ does is discount some of the disagreement resulting from different coder marginals by incorporating it into aewhether this is desirable depends on the 
application for which the coefficient is usedthe most common application of agreement measures in cl is to infer the reliability of a largescale annotation where typically each piece of data will be marked by just one coder by measuring agreement on a small subset of the data which is annotated by multiple codersin order to make this generalization the measure must reflect the reliability of the annotation procedure which is independent of the actual annotators usedreliability or reproducibility of the coding is reduced by all disagreementsboth random and systematicthe most appropriate measures of reliability for this purpose are therefore singledistribution coefficients like π and α which generalize over the individual coders and exclude marginal disagreements from the expected agreementthis argument has been presented recently in much detail by krippendorff and reiterated by craggs and mcgee wood at the same time individualdistribution coefficients like κ provide important information regarding the trustworthiness of the data on which the annotators agreeas an intuitive example think of a person who consults two analysts when deciding whether to buy or sell certain stocksif one analyst is an optimist and tends to recommend buying whereas the other is a pessimist and tends to recommend selling they are likely to agree with each other less than two more neutral analysts so overall their recommendations are likely to be less reliableless reproduciblethan those that come from a population of likeminded analyststhis reproducibility is measured by πbut whenever the optimistic and pessimistic analysts agree on a recommendation for a particular stock whether it is buy or sell the confidence that this is indeed the right decision is higher than the same advice from two likeminded analyststhis is why κ rewards biased annotators it is not a matter of reproducibility but rather of trustworthiness having said this we should point out that first in practice the difference between π and κ does not often amount to much moreover the difference becomes smaller as agreement increases because all the points of agreement contribute toward making the coder marginals similar finally one would expect the difference between π and κ to diminish as the number of coders grows this is shown subsequently6 we define b the overall annotator bias in a particular set of coding data as the difference between the expected agreement according to π and the expected agreement according to κ annotator bias is a measure of variance if we take c to be a random variable with equal probabilities for all coders then the annotator bias b is the sum of the variances of p for all categories k k divided by the number of coders c less one this allows us to make the following observations about the relationship between π and κin other words provided enough coders are used it should not matter whether a singledistribution or individualdistribution coefficient is usedthis is not to imply that multiple coders increase reliability the variance of the individual coders distributions can be just as large with many coders as with few coders but its effect on the value of κ decreases as the number of coders grows and becomes more similar to random noisethe same holds for weighted measures too see the extended version of this article for definitions and proofin an annotation study with 18 subjects we compared α with a variant which uses individual coder distributions to calculate expected agreement and found that the values never differed beyond 
the third decimal point we conclude with a summary of our views concerning the difference between πstyle and κstyle coefficientsfirst of all keep in mind that empirically the difference is small and gets smaller as the number of annotators increasesthen instead of reporting two coefficients as suggested by di eugenio and glass the appropriate coefficient should be chosen based on the task when the coefficient is used to assess reliability a singledistribution coefficient like π or α should be used this is indeed already the practice in cl because siegel and castellans k is identical with πit is also good practice to test artstein and poesio intercoder agreement for cl reliability with more than two coders in order to reduce the likelihood of coders sharing a deviant reading of the annotation guidelineswe touched upon the matter of skewed data in section 23 when we motivated the need for chance correction if a disproportionate amount of the data falls under one category then the expected agreement is very high so in order to demonstrate high reliability an even higher observed agreement is neededthis leads to the socalled paradox that chancecorrected agreement may be low even though ao is high moreover when the data are highly skewed in favor of one category the high agreement also corresponds to high accuracy if say 95 of the data fall under one category label then random coding would cause two coders to jointly assign this category label to 9025 of the items and on average 95 of these labels would be correct for an overall accuracy of at least 857this leads to the surprising result that when data are highly skewed coders may agree on a high proportion of items while producing annotations that are indeed correct to a high degree yet the reliability coefficients remain lowthis surprising result is however justifiedreliability implies the ability to distinguish between categories but when one category is very common high accuracy and high agreement can also result from indiscriminate codingthe test for reliability in such cases is the ability to agree on the rare categories indeed chancecorrected coefficients are sensitive to agreement on rare categoriesthis is easiest to see with a simple example of two coders and two categories one common and the other one rare to further simplify the calculation we also assume that the coder marginals are identical so that π and κ yield the same valueswe can thus represent the judgments in a contingency table with just two parameters e is half the proportion of items on which there is disagreement and δ is the proportion of agreement on the rare categoryboth of these proportions are assumed to be small so the bulk of the items are labeled with the common category by both coders from this table we can calculate ao 1 2e and ae 1 2 22 as well as π and κwhen e and δ are both small the fraction after the minus sign is small as well so π and κ are approximately δ the value we get if we take all the items marked by one a simple example of agreement on dialogue act taggingparticular coder as rare and calculate what proportion of those items were labeled rare by the other coderthis is a measure of the coders ability to agree on the rare categoryin this section we review the use of intercoder agreement measures in cl since carlettas original paper in light of the discussion in the previous sectionswe begin with a summary of krippendorffs recommendations about measuring reliability then discuss how coefficients of agreement have been used in cl to measure the 
reliability of annotation schemes focusing in particular on the types of annotation where there has been some debate concerning the most appropriate measures of agreementkrippendorff notes with regret the fact that reliability is discussed in only around 69 of studies in content analysisin cl as well not all annotation projects include a formal test of intercoder agreementsome of the best known annotation efforts such as the creation of the penn treebank and the british national corpus do not report reliability results as they predate the carletta paper but even among the more recent efforts many only report percentage agreement as for the creation of the propbank or the ongoing ontonotes annotation even more importantly very few studies apply a methodology as rigorous as that envisaged by krippendorff and other content analystswe therefore begin this discussion of cl practice with a summary of the main recommendations found in chapter 11 of krippendorff even though as we will see we think that some of these recommendations may not be appropriate for cl411 generating data to measure reproducibilitykrippendorffs recommendations were developed for the field of content analysis where coding is used to draw conclusions from the textsa coded corpus is thus akin to the result of a scientific experiment and it can only be considered valid if it is reproduciblethat is if the same coded results can be replicated in an independent coding exercisekrippendorff therefore argues that any study using observed agreement as a measure of reproducibility must satisfy the following requirements some practices that are common in cl do not satisfy these requirementsthe first requirement is violated by the practice of expanding the written coding instructions and including new rules as the data are generatedthe second requirement is often violated by using experts as coders particularly longterm collaborators as such coders may agree not because they are carefully following written instructions but because they know the purpose of the research very wellwhich makes it virtually impossible for others to reproduce the results on the basis of the same coding scheme practices which violate the third requirement include asking coders to discuss their judgments with each other and reach their decisions by majority vote or to consult with each other when problems not foreseen in the coding instructions ariseany of these practices make the resulting data unusable for measuring reproducibilitykrippendorffs own summary of his recommendations is that to obtain usable data for measuring reproducibility a researcher must use data generated by three or more coders chosen according to some clearly specified criteria and working independently according to a written coding scheme and coding instructions fixed in advancekrippendorff also discusses the criteria to be used in the selection of the sample from the minimum number of units to how to make the sample representative of the data population to how to ensure the reliability of the instructions these recommendations are particularly relevant in light of the comments of craggs and mcgee wood which discourage researchers from testing their coding instructions on data from more than one domaingiven that the reliability of the coding instructions depends to a great extent on how complications are dealt with and that every domain displays different complications the sample should contain sufficient examples from all domains which have to be annotated according to the 
instructions412 establishing significancein hypothesis testing it is common to test for the significance of a result against a null hypothesis of chance behavior for an agreement coefficient this would mean rejecting the possibility that a positive value of agreement is nevertheless due to random codingwe can rely on the statement by siegel and castellan that when sample sizes are large the sampling distribution of k is approximately normal and centered around zerothis allows testing the obtained value of k against the null hypothesis of chance agreement by using the z statisticit is also easy to test krippendorffs α with the interval distance metric against the null hypothesis of chance agreement because the hypothesis α 0 is identical to the hypothesis f 1 in an analysis of variancehowever a null hypothesis of chance agreement is not very interesting and demonstrating that agreement is significantly better than chance is not enough to establish reliabilitythis has already been pointed out by cohen to know merely that x is beyond chance is trivial since one usually expects much more than this in the way of reliability in psychological measurement the same point has been repeated and stressed in many subsequent works the reason for measuring reliability is not to test whether coders perform better than chance but to ensure that the coders do not deviate too much from perfect agreement the relevant notion of significance for agreement coefficients is therefore a confidence intervalcohen implies that when sample sizes are large the sampling distribution of x is approximately normal for any true population value of x and therefore confidence intervals for the observed value of x can be determined using the usual multiples of the standard errordonner and eliasziw propose a more general form of significance test for arbitrary levels of agreementin contrast krippendorff states that the distribution of α is unknown so confidence intervals must be obtained by bootstrapping a software package for doing this is described in hayes and krippendorff 413 interpreting the value of kappalike coefficientseven after testing significance and establishing confidence intervals for agreement coefficients we are still faced with the problem of interpreting the meaning of the resulting valuessuppose for example we establish that for a particular task k 078 005is this good or badunfortunately deciding what counts as an adequate level of agreement for a specific purpose is still little more than a black art as we will see different levels of agreement may be appropriate for resource building and for more linguistic purposesthe problem is not unlike that of interpreting the values of correlation coefficients and in the area of medical diagnosis the best known conventions concerning the value of kappalike coefficients those proposed by landis and koch and reported in figure 1 are indeed similar to those used for correlation coefficients where values above 04 are also generally considered adequate many medical researchers feel that these conventions are appropriate and in language studies a similar interpretation of the values has been proposed by rietveld and van hout in cl however most researchers follow the more stringent conventions from content analysis proposed by krippendorff as reported by carletta content analysis researchers generally think of k 8 as good reliability with 67 k 8 allowing tentative conclusions to be drawn as a result ever since carlettas influential paper cl researchers have attempted to 
achieve a value of k above the 08 threshold or failing that the 067 level allowing for tentative conclusions however the description of the 067 boundary in krippendorff was actually highly tentative and cautious and in later work krippendorff clearly considers 08 the absolute minimum value of α to accept for any serious purpose even a cutoff point of α 800 is a pretty low standard recent content analysis practice seems to have settled for even more stringent requirements a recent textbook neuendorf analyzing several proposals concerning acceptable reliability concludes that reliability coefficients of 90 or greater would be acceptable to all 80 or greater would be acceptable in most situations and below that there exists great disagreement this is clearly a fundamental issueideally we would want to establish thresholds which are appropriate for the field of cl but as we will see in the rest of this section a decade of practical experience has not helped in settling the matterin fact weighted coefficients while arguably more appropriate for many annotation tasks make the issue of deciding when the value of a coefficient indicates sufficient agreement even kappa values and strength of agreement according to landis and koch more complicated because of the problem of determining appropriate weights we will return to the issue of interpreting the value of the coefficients at the end of this article414 agreement and machine learningin a recent article reidsma and carletta point out that the goals of annotation in cl differ from those of content analysis where agreement coefficients originatea common use of an annotated corpus in cl is not to confirm or reject a hypothesis but to generalize the patterns using machinelearning algorithmsthrough a series of simulations reidsma and carletta demonstrate that agreement coefficients are poor predictors of machinelearning success even highly reproducible annotations are difficult to generalize when the disagreements contain patterns that can be learned whereas highly noisy and unreliable data can be generalized successfully when the disagreements do not contain learnable patternsthese results show that agreement coefficients should not be used as indicators of the suitability of annotated data for machine learninghowever the purpose of reliability studies is not to find out whether annotations can be generalized but whether they capture some kind of observable realityeven if the pattern of disagreement allows generalization we need evidence that this generalization would be meaningfulthe decision whether a set of annotation guidelines are appropriate or meaningful is ultimately a qualitative one but a baseline requirement is an acceptable level of agreement among the annotators who serve as the instruments of measurementreliability studies test the soundness of an annotation scheme and guidelines which is not to be equated with the machinelearnability of data produced by such guidelinesthe simplest and most common coding in cl involves labeling segments of text with a limited number of linguistic categories examples include partofspeech tagging dialogue act tagging and named entity taggingthe practices used to test reliability for this type of annotation tend to be based on the assumption that the categories used in the annotation are mutually exclusive and equally distinct from one another this assumption seems to have worked out well in practice but questions about it have been raised even for the annotation of parts of speech let alone for discourse 
coding tasks such as dialogue act codingwe concentrate here on this latter type of coding but a discussion of issues raised for pos named entity and prosodic coding can be found in the extended version of the articledialogue act tagging is a type of linguistic annotation with which by now the cl community has had extensive experience several dialogueactannotated spoken language corpora now exist such as maptask switchboard verbmobil and communicator among othershistorically dialogue act annotation was also one of the types of annotation that motivated the introduction in cl of chancecorrected coefficients of agreement and as we will see it has been the type of annotation that has generated the most discussion concerning annotation methodology and measuring agreementa number of coding schemes for dialogue acts have achieved values of k over 08 and have therefore been assumed to be reliable for example k 083 for the 13tag maptask coding scheme k 08 for the 42tag switchboarddamsl scheme k 090 for the smaller 20tag subset of the cstar scheme used by doran et al all of these tests were based on the same two assumptions that every unit is assigned to exactly one category and that these categories are distincttherefore again unweighted measures and in particular k tend to be used for measuring intercoder agreementhowever these assumptions have been challenged based on the observation that utterances tend to have more than one function at the dialogue act level for a useful survey see popescubelis an assertion performed in answer to a question for instance typically performs at least two functions at different levels asserting some informationthe dialogue act that we called statement in section 23 operating at what traum and hinkelman called the core speech act leveland confirming that the question has been understood a dialogue act operating at the grounding level and usually known as acknowledgment in older dialogue act tagsets acknowledgments and statements were treated as alternative labels at the same level forcing coders to choose one or the other when an utterance performed a dual function according to a wellspecified set of instructionsby contrast in the annotation schemes inspired from these newer theories such as damsl coders are allowed to assign tags along distinct dimensions or levelstwo annotation experiments testing this solution to the multitag problem with the damsl scheme were reported in core and allen and di eugenio et al in both studies coders were allowed to mark each communicative function independently that is they were allowed to choose for each utterance one of the statement tags one of the influencingaddresseefutureaction tags and so forthand agreement was evaluated separately for each dimension using k core and allen found values of k ranging from 076 for answer to 042 for agreement to 015 for committingspeakerfutureactionusing different coding instructions and on a different corpus di eugenio et al observed higher agreement ranging from k 093 to 054 these relatively low levels of agreement led many researchers to return to flat tagsets for dialogue acts incorporating however in their schemes some of the insights motivating the work on schemes such as damslthe best known example of this type of approach is the development of the switchboarddamsl tagset by jurafsky shriberg and biasca which incorporates many ideas from the multidimensional theories of dialogue acts but does not allow marking an utterance as both an acknowledgment and a statement a choice has to be 
madethis tagset results in overall agreement of k 080interestingly subsequent developments of switchboarddamsl backtracked on some of these decisionsfor instance the icsimrda tagset developed for the annotation of the icsi meeting recorder corpus reintroduces some of the damsl ideas in that annotators are allowed to assign multiple switchboarddamsl labels to utterances shriberg et al achieved a comparable reliability to that obtained with switchboarddamsl but only when using a tagset of just five classmapsshriberg et al also introduced a hierarchical organization of tags to improve reliabilitythe dimensions of the damsl scheme can be viewed as superclasses of dialogue acts which share some aspect of their meaningfor instance the dimension of influencingaddresseefutureaction includes the two dialogue acts openoption and directive both of which bring into consideration a future action to be performed by the addresseeat least in principle an organization of this type opens up the possibility for coders to mark an utterance with the superclass in case they do not feel confident that the utterance satisfies the additional requirements for openoption or directivethis in turn would do away with the need to make a choice between these two optionsthis possibility was not pursued in the studies using the original damsl that we are aware of but was tested by shriberg et al and subsequent work in particular geertzen and bunt who were specifically interested in the idea of using hierarchical schemes to measure partial agreement and in addition experimented with weighted coefficients of agreement for their hierarchical tagging scheme specifically κwgeertzen and bunt tested intercoder agreement with bunts dit a scheme with 11 dimensions that builds on ideas from damsl and from dynamic interpretation theory in dit tags can be hierarchically related for example the class informationseeking is viewed as consisting of two classes yesno question and whquestion the hierarchy is explicitly introduced in order to allow coders to leave some aspects of the coding undecidedfor example check is treated as a subclass of ynq in which in addition the speaker has a weak belief that the proposition that forms the belief is truea coder who is not certain about the dialogue act performed using an utterance may simply choose to tag it as ynqthe distance metric d proposed by geertzen and bunt is based on the criterion that two communicative functions are related 1 if they stand in an ancestoroffspring relation within a hierarchyfurthermore they argue the magnitude of d should be proportional to the distance between the functions in the hierarchya leveldependent correction factor is also proposed so as to leave open the option to make disagreements at higher levels of the hierarchy matter more than disagreements at the deeper level the results of an agreement test with two annotators run by geertzen and bunt show that taking into account partial agreement leads to values of κw that are higher than the values of κ for the same categories particularly for feedback a class for which core and allen got low agreementof course even assuming that the values of κw and κ were directly comparablewe remark on the difficulty of interpreting the values of weighted coefficients of agreement in section 44it remains to be seen whether these higher values are a better indication of the extent of agreement between coders than the values of unweighted κthis discussion of coding schemes for dialogue acts introduced issues to which we will return 
for other cl annotation tasks as wellthere are a number of wellestablished schemes for largescale dialogue act annotation based on the assumption of mutual exclusivity between dialogue act tags whose reliability is also well known if one of these schemes is appropriate for modeling the communicative intentions found in a task we recommend to our readers to use itthey should also realize however that the mutual exclusivity assumption is somewhat dubiousif a multidimensional or hierarchical tagset is used readers should also be aware that weighted coefficients do capture partial agreement and need not automatically result in lower reliability or in an explosion in the number of labelshowever a hierarchical scheme may not reflect genuine annotation difficulties for example in the case of dit one might argue that it is more difficult to confuse yesno questions with whquestions than with statementswe will also see in a moment that interpreting the results with weighted coefficients is difficultwe will return to both of these problems in what followsbefore labeling can take place the units of annotation or markables need to be identifieda process krippendorff calls unitizingthe practice in cl for the forms of annotation discussed in the previous section is to assume that the units are linguistic constituents which can be easily identified such as words utterances or noun phrases and therefore there is no need to check the reliability of this processwe are aware of few exceptions to this assumption such as carletta et al on unitization for move coding and our own work on the gnome corpus in cases such as text segmentation however the identification of units is as important as their labeling if not more important and therefore checking agreement on unit identification is essentialin this section we discuss current cl practice with reliability testing of these types of annotation before briefly summarizing krippendorffs proposals concerning measuring reliability for unitizing431 segmentation and topic markingdiscourse segments are portions of text that constitute a unit either because they are about the same topic or because they have to do with achieving the same intention or performing the same dialogue game 7 the analysis of discourse structureand especially the identification of discourse segmentsis the type of annotation that more than any other led cl researchers to look for ways of measuring reliability and agreement as it made them aware of the extent of disagreement on even quite simple judgments subsequent research identified a number of issues with discourse structure annotation above all the fact that segmentation though problematic is still much easier than marking more complex aspects of discourse structure such as identifying the most important segments or the rhetorical relations between segments of different granularityas a result many efforts to annotate discourse structure concentrate only on segmentationthe agreement results for segment coding tend to be on the lower end of the scale proposed by krippendorff and recommended by carlettahearst for instance found k 0647 for the boundarynot boundary distinction reynar measuring agreement between his own annotation and the trec segmentation of broadcast news reports k 0764 for the same task ries reports even lower agreement of k 036teufel carletta and moens who studied agreement on the identification of argumentative zones found high reliability for their three main zones although lower for the whole scheme for intentionbased 
segmentation passonneau and litman in the prek days reported an overall percentage agreement with majority opinion of 89 but the agreement on boundaries was only 70for conversational games segmentation carletta et al reported promising but not entirely reassuring agreement on where games began whereas the agreement on transaction boundaries was k 059exceptions are two segmentation efforts carried out as part of annotations of rhetorical structuremoser moore and glendening achieved an agreement of k 09 for the highest level of segmentation of their rda annotation carlson marcu and okurowski reported very high agreement over the identification of the boundaries of discourse units the building blocks of their annotation of rhetorical structurethis however was achieved by employing experienced annotators and with considerable trainingone important reason why most agreement results on segmentation are on the lower end of the reliability scale is the fact known to researchers in discourse analysis from as early as levin and moore that although analysts generally agree on the bulk of segments they tend to disagree on their exact boundariesthis phenomenon was also observed in more recent studies see for example the discussion in passonneau and litman the comparison of the annotations produced by seven coders of the same text in figure 5 of hearst or the discussion by carlson marcu and okurowski who point out that the boundaries between elementary discourse units tend to be very blurry see also pevzner and hearst for similar comments made in the context of topic segmentation algorithms and klavans popper and passonneau for selecting definition phrasesthis blurriness of boundaries combined with the prevalence effects discussed in section 32 also explains the fact that topic annotation efforts which were only concerned with roughly dividing a text into segments generally report lower agreement than the studies whose goal is to identify smaller discourse unitswhen disagreement is mostly concentrated in one class if the total number of units to annotate remains the same then expected agreement on this class is lower when a greater proportion of the units to annotate belongs to this classwhen in addition this class is much less numerous than the other classes overall agreement tends to depend mostly on agreement on this classfor instance suppose we are testing the reliability of two different segmentation schemesinto broad discourse segments and into finer discourse unitson a text of 50 utterances and that we obtain the results in table 8case 1 would be a situation in which coder a and coder b agree that the text consists of two segments obviously agree on its initial and final boundaries but disagree by one position on the intermediate boundarysay one of them places it at utterance 25 the other at utterance 26nevertheless because expected agreement is so highthe coders agree on the classification of 98 of the utterancesthe value of k is fairly lowin case 2 the coders disagree on three times as many utterances but k is higher than in the first case because expected agreement is substantially lower the fact that coders mostly agree on the bulk of discourse segments but tend to disagree on their boundaries also makes it likely that an allornothing coefficient like k calculated on individual boundaries would underestimate the degree of agreement suggesting low agreement even among coders whose segmentations are mostly similara weighted coefficient of agreement like α might produce values more in keeping 
with intuition but we are not aware of any attempts at measuring agreement on segmentation using weighted coefficientswe see two main optionswe suspect that the methods proposed by krippendorff for measuring agreement on unitizing may be appropriate for the purpose of measuring agreement on discourse segmentationa second option would be to measure agreement not on individual boundaries but on windows spanning several units as done in the methods proposed to evaluate the performance of topic detection algorithms such as pk or windowdiff 432 unitizing it is often assumed in cl annotation practice that the units of analysis are natural linguistic objects and therefore there is no need to check agreement on their identificationas a result agreement is usually measured on the labeling of units rather than on the process of identifying them we have just seen however two coding tasks for which the reliability of unit identification is a crucial part of the overall reliability and the problem of markable identification is more pervasive than is generally acknowledgedfor example when the units to be labeled are syntactic constituents it is common practice to use a parser or chunker to identify the markables and then to allow the coders to correct the parsers outputin such cases one would want to know how reliable the coders corrections arewe thus need a general method of testing reliability on markable identificationthe one proposal for measuring agreement on markable identification we are aware of is the αU coefficient a nontrivial variant of α proposed by krippendorff a full presentation of the proposal would require too much space so we will just present the core ideaunitizing is conceived of as consisting of two separate steps identifying boundaries between units and selecting the units of interestif a unit identified by one coder overlaps a unit identified by the other coder the amount of disagreement is the square of the lengths of the nonoverlapping segments (the difference between overlapping units is d = s1² + s2² where s1 and s2 are the lengths of the two nonoverlapping stretches) if a unit identified by one coder does not overlap any unit of interest identified by the other coder the amount of disagreement is the square of the length of the whole unitthis distance metric is used in calculating observed and expected disagreement and αU itselfwe refer the reader to krippendorff for detailskrippendorffs αU is not applicable to all cl tasksfor example it assumes that units may not overlap in a single coders output yet in practice there are many annotation schemes which require coders to label nested syntactic constituentsfor continuous segmentation tasks αU may be inappropriate because when a segment identified by one annotator overlaps with two segments identified by another annotator the distance is smallest when the one segment is centered over the two rather than aligned with one of themnevertheless we feel that when the nonoverlap assumption holds and the units do not cover the text exhaustively testing the reliability of unit identification may prove beneficialto our knowledge this has never been tested in clthe annotation tasks discussed so far involve assigning a specific label to each unit which allows the various agreement measures to be applied in a straightforward wayanaphoric annotation differs from the previous tasks because annotators do not assign labels but rather create links between anaphors and their antecedentsit is therefore not clear what the labels should be for the purpose of calculating agreementone possibility would be to
consider the intended referent as the label as in named entity tagging but it would not make sense to predefine a set of labels applicable to all texts because different objects are mentioned in different textsan alternative is to use the marked antecedents as labelshowever we do not want to count as a disagreement every time two coders agree on the discourse entity realized by a particular noun phrase but just happen to mark different words as antecedentsconsider the reference of the underlined pronoun it in the following dialogue excerpt 8 pick up oranges some of the coders in a study we carried out indicated the noun phrase engine e2 as antecedent for the second it in utterance 31 whereas others indicated the immediately preceding pronoun which they had previously marked as having engine e2 as antecedentclearly we do not want to consider these coders to be in disagreementa solution to this dilemma has been proposed by passonneau use the emerging coreference sets as the labels for the purpose of calculating agreementthis requires using weighted measures for calculating agreement on such sets and consequently it raises serious questions about weighted measuresin particular about the interpretability of the results as we will see shortly441 passonneaus proposalpassonneau recommends measuring agreement on anaphoric annotation by using sets of mentions of discourse entities as labels that is the emerging anaphoriccoreference chainsthis proposal is in line with the methods developed to evaluate anaphora resolution systems but using anaphoric chains as labels would not make unweighted measures such as k a good measure for agreementpractical experience suggests that except when a text is very short few annotators will catch all mentions of a discourse entity most will forget to mark a few with the result that the chains differ from coder to coder and agreement as measured with k is always very lowwhat is needed is a coefficient that also allows for partial disagreement between judgments when two annotators agree on part of the coreference chain but not on all of itpassonneau suggests solving the problem by using α with a distance metric that allows for partial agreement among anaphoric chainspassonneau proposes a distance metric based on the following rationale two sets are minimally distant when they are identical and maximally distant when they are disjoint between these extremes sets that stand in a subset relation are closer than ones that merely intersectthis leads to the following distance metric between two sets a and balternative distance metrics take the size of the anaphoric chain into account based on measures used to compare sets in information retrieval such as the coefficient of community of jaccard and the coincidence index of dice in later work passonneau offers a refined distance metric which she called masi obtained by multiplying passonneaus original metric dp by the metric derived from jaccard dj442 experience with α for anaphoric annotationin the experiment mentioned previously we used 18 coders to test α and k under a variety of conditionswe found that even though our coders by and large agreed on the interpretation of anaphoric expressions virtually no coder ever identified all the mentions of a discourse entityas a result even though the values of α and k obtained by using the id of the antecedent as label were pretty similar the values obtained when using anaphoric chains as labels were drastically differentthe value of α increased because examples where coders linked a 
markable to different antecedents in the same chain were no longer considered as disagreementshowever the value of k was drastically reduced because hardly any coder identified all the mentions of discourse entities the study also looked at the matter of individual annotator bias and as mentioned in section 31 we did not find differences between α and a xstyle version of α beyond the third decimal pointthis similarity is what one would expect given the result about annotator bias from section 31 and given that in this experiment we used 18 annotatorsthese very small differences should be contrasted with the differences resulting from the choice of distance metrics where values for the fullchain condition ranged from α 0642 using jaccard as distance metric to α 0654 using passonneaus metric to the value for dice reported in figure 3 α 0691these differences raise an important issue concerning the application of αlike measures for cl tasks using α makes it difficult to compare the results of different annotation experiments in that a poor value or a high value might result from too strict or too generous distance metrics making it even more important to develop a methodology to identify appropriate values for these coefficientsthis issue is further emphasized by the study reported next443 discourse deixisa second annotation study we carried out shows even more clearly the possible side effects of using weighted coefficientsthis study was concerned with the annotation of the antecedents of references to abstract objects such as the example of the pronoun that in utterance 76 previous studies of discourse deixis annotation showed that these are extremely difficult judgments to make except perhaps for identifying the type of object so we simplified the task by only requiring our participants to identify the boundaries of the area of text in which the antecedent was introducedeven so we found a great variety in how these boundaries were marked exactly as in the case of discourse segmentation discussed earlier our participants broadly agreed on the area of text but disagreed on a comparison of the values of α and k for anaphoric annotation its exact boundaryfor instance in this example nine out of ten annotators marked the antecedent of that as a text segment ending with the word elmira but some started with the word so some started with we some with ship and some with onewe tested a number of ways to measure partial agreement on this task and obtained widely different resultsfirst of all we tested three setbased distance metrics inspired by the passonneau proposals that we just discussed we considered discourse segments to be sets of words and computed the distance between them using passonneaus metric jaccard and diceusing these three metrics we obtained α values of 055 045 and 055 we should note that because antecedents of different expressions rarely overlapped the expected disagreement was close to 1 so the value of α turned out to be very close to the complement of the observed disagreement as calculated by the different distance metricsnext we considered methods based on the position of words in the textthe first method computed differences between absolute boundary positions each antecedent was associated with the position of its first or last word in the dialogue and agreement was calculated using α with the interval distance metricthis gave us α values of 0998 for the beginnings of the antecedentevoking area and 0999 for the endsthis is because expected disagreement is exceptionally low 
coders tend to mark discourse antecedents close to the referring expression so the average distance between antecedents of the same expression is smaller than the size of the dialogue by a few orders of magnitudethe second method associated each antecedent with the position of its first or last word relative to the beginning of the anaphoric expressionthis time we found extremely low values of α 0167 for beginnings of antecedents and 0122 for ends barely in the positive sidethis shows that agreement among coders is not dramatically better than what would be expected if they just marked discourse antecedents at a fixed distance from the referring expressionthe three ranges of α that we observed show agreement on the identity of discourse antecedents their position in the dialogue and their position relative to referring expressions respectivelythe middle range shows variability of up to 10 percentage points depending on the distance metric chosenthe lesson is that once we start using weighted measures we cannot anymore interpret the value of α using traditional rules of thumb such as those proposed by krippendorff or by landis and kochthis is because depending on the way we measure agreement we can report α values ranging from 0122 to 0998 for the very same experimentnew interpretation methods have to be developed which will be task and distancemetric specificwe will return to this issue in the conclusionsword sense tagging is one of the hardest annotation taskswhereas in the case of partofspeech and dialogue act tagging the same categories are used to classify all units in the case of word sense tagging different categories must be used for each word which makes writing a single coding manual specifying examples for all categories impossible the only option is to rely on a dictionaryunfortunately different dictionaries make different distinctions and often coders cannot make the finegrained distinctions that trained lexicographers can makethe problem is particularly serious for verbs which tend to be polysemous rather than homonymous these difficulties and in particular the difficulty of tagging senses with a finegrained repertoire of senses such as that provided by dictionaries or by wordnet have been highlighted by the three senseval initiativesalready during the first senseval veronis carried out two studies of intercoder agreement on word sense tagging in the socalled romanseval taskone study was concerned with agreement on polysemythat is the extent to which coders agreed that a word was polysemous in a given contextsix naive coders were asked to make this judgment about 600 french words using the repertoire of senses in the petit larousseon this task a percentage agreement of 068 for nouns 074 for verbs and 078 for adjectives was observed corresponding to k values of 036 037 and 067 respectivelythe 20 words from each category perceived by the coders in this first experiment to be most polysemous were then used in a second study of intercoder agreement on the sense tagging task which involved six different naive codersinterestingly the coders in this second experiment were allowed to assign multiple tags to words although they did not make much use of this possibility so κw was used to measure agreementin this experiment veronis observed pairwise agreement of 063 for verbs 071 for adjectives and 073 for nouns corresponding to κw values of 041 041 and 046 but with a wide variety of values when measured per wordranging from 0007 for the adjective correct to 092 for the noun 
detentionsimilarly mediocre results for intercoder agreement between naive coders were reported in the subsequent editions of sensevalagreement studies for senseval2 where wordnet senses were used as tags reported a percentage agreement for verb senses of around 70 whereas for senseval3 mihalcea chklovski and kilgarriff report a percentage agreement of 673 and average k of 058two types of solutions have been proposed for the problem of low agreement on sense taggingthe solution proposed by kilgarriff is to use professional lexicographers and arbitrationthe study carried out by kilgarriff does not therefore qualify as a true study of replicability in the sense of the terms used by krippendorff but it did show that this approach makes it possible to achieve percentage agreement of around 955an alternative approach has been to address the problem of the inability of naive coders to make finegrained distinctions by introducing coarsergrained classification schemes which group together dictionary senses hierarchical tagsets were also developed such as hector or indeed wordnet itself in the case of buitelaar and palmer dang and fellbaum the supersenses were identified by hand whereas bruce and wiebe and veronis used clustering methods such as those from bruce and wiebe to collapse some of the initial sense distinctions9 palmer dang and fellbaum illustrate this practice with the example of the verb call which has 28 finegrained senses in wordnet 17 they conflate these senses into a small number of groups using various criteriafor example four senses can be grouped in a group they call group 1 on the basis of subcategorization frame similarities palmer dang and fellbaum achieved for the english verb lexical sense task of senseval2 a percentage agreement among coders of 82 with grouped senses as opposed to 71 with the original wordnet sensesbruce and wiebe found that collapsing the senses of their test word on the basis of their use by coders and merging the two classes found to be harder to distinguish resulted in an increase of the value of k from 0874 to 0898using a related technique veronis found that agreement on noun word sense tagging went up from a k of around 045 to a k of 086we should note however that the post hoc merging of categories is not equivalent to running a study with fewer categories to begin withattempts were also made to develop techniques to measure partial agreement with hierarchical tagsetsa first proposal in this direction was advanced by melamed and resnik who developed a coefficient for hierarchical tagsets that could be used in senseval for measuring agreement with tagsets such as hectormelamed and resnik proposed to normalize the computation of observed and expected agreement by taking each label which is not a leaf in the tag hierarchy and distributing it down to the leaves in a uniform way and then only computing agreement on the leavesfor example with a tagset like the one in table 9 the cases in which the coders used the label group 1 would be uniformly distributed down and added in equal measure to the number of cases in which the coders assigned each of the four wordnet labelsthe method proposed in the paper has however problematic properties when used to measure intercoder agreementfor example suppose tag a dominates two subtags a1 and a2 and that two coders mark a particular item as aintuitively we would want to consider this a case of perfect agreement but this is not what the method proposed by melamed and resnik yieldsthe annotators marks are distributed 
over the two subtags each with probability 0.5 and then the agreement is computed by summing the joint probabilities over the two subtags (as in melamed and resnik 2000) with the result that the agreement over the item turns out to be 0.5 × 0.5 + 0.5 × 0.5 = 0.5 instead of 1to correct this dan melamed suggested replacing the product in the equation with a minimum operatorhowever the calculation of expected agreement of melamed and resnik 2000 still gives the amount of agreement which is expected if coders are forced to choose among leaf nodes which makes this method inappropriate for coding schemes that do not force coders to do thisone way to use melamed and resniks proposal while avoiding the discrepancy between observed and expected agreement is to treat the proposal not as a new coefficient but rather as a distance metric to be plugged into a weighted coefficient like αlet a and b be two nodes in a hierarchical tagset let l be the set of all leaf nodes in the tagset and let p be the probability of selecting a leaf node l given an arbitrary node t when the probability mass of t is distributed uniformly to all the nodes dominated by t we can reinterpret melameds modification of the equation in melamed and resnik as a metric measuring the distance between nodes a and b namely d(a, b) = 1 − Σl∈L min(p(l|a), p(l|b))this metric has the desirable propertiesit is 0 when tags a and b are identical 1 when the tags do not overlap and somewhere in between in all other casesif we use this metric for krippendorffs α we find that observed agreement is exactly the same as in melamed and resnik with the product operator replaced by minimum we can also use other distance metrics with αfor example we could associate with each sense an extended sensea set es including the sense itself and its grouped senseand then use setbased distance metrics from section 44 for example passonneaus dpto illustrate how this approach could be used to measure agreement on word sense annotation suppose that two coders have to annotate the use of call in the following sentence this gene called gametocide is carried into the plant by a virus that remains active for a few daysthe standard guidelines require coders to assign a wn sense to wordsunder such guidelines if coder a classifies the use of called in the above example as an instance of wn1 whereas coder b annotates it as an instance of wn3 we would find total disagreement which seems excessively harsh as the two senses are clearly relatedhowever by using the broader senses proposed by palmer dang and fellbaum in combination with a distance metric such as the one just proposed it is possible to get more flexible and we believe more realistic assessments of the degree of agreement in situations such as thisfor instance in case the reliability study had already been carried out under the standard senseval guidelines the distance metric proposed above could be used to identify post hoc cases of partial agreement by adding to each wn sense its hypernyms according to the groupings proposed by palmer dang and fellbaumfor example as annotation could be turned into a new set label {wn1, group 1} and bs mark into the set label {wn3, group 1} which would give a distance d = 2/3 indicating a degree of overlapthe method for computing agreement proposed here could also be used to allow coders to choose either a more specific label or one of palmer dang and fellbaums superlabelsfor example suppose a sticks to wn1 but b decides to mark the use above using palmer dang and fellbaums group 1 label then we would still find a distance d = 1/3an alternative way of using α for
word sense annotation was developed and tested by passonneau habash and rambow their approach is to allow coders to assign multiple labels for wordsenses as done by veronis and more recently by rosenberg and binkowski for text classification labels and by poesio and artstein for anaphorathese multilabel sets can then be compared using the masi distance metric for α the purpose of this article has been to expose the reader to the mathematics of chancecorrected coefficients of agreement as well as the current state of the art of using these coefficients in clour hope is that readers come to view agreement studies not as an additional chore or hurdle for publication but as a tool for analysis which offers new insights into the annotation processwe conclude by summarizing what in our view are the main recommendations emerging from ten years of experience with coefficients of agreementthese can be grouped under three main headings methodology choice of coefficients and interpretation of coefficientsour first recommendation is that annotation efforts should perform and report rigorous reliability testingthe last decade has already seen considerable improvement from the absence of any tests for the penn treebank or the british national corpus to the central role played by reliability testing in the penn discourse treebank and ontonotes but even the latter efforts only measure and report percent agreementwe believe that part of the reluctance to report chancecorrected measures is the difficulty in interpreting themhowever our experience is that chancecorrected coefficients of agreement do provide a better indication of the quality of the resulting annotation than simple percent agreement and moreover the detailed calculations leading to the coefficients can be very revealing as to where the disagreements are located and what their sources may bea rigorous methodology for reliability testing does not in our opinion exclude the use of expert coders and here we feel there may be a motivated difference between the fields of content analysis and clthere is a clear tradeoff between the complexity of the judgments that coders are required to make and the reliability of such judgments and we should strive to devise annotation schemes that are not only reliable enough to be replicated but also sophisticated enough to be useful in content analysis conclusions are drawn directly from annotated corpora so the emphasis is more on replicability whereas in cl corpora constitute a resource which is used by other processes so the emphasis is more towards usefulnessthere is also a tradeoff between the sophistication of judgments and the availability of coders who can make such judgmentsconsequently annotation by experts is often the only practical way to get useful corpora for clcurrent practice achieves high reliability either by using professionals or through intensive training this means that results are not replicable across sites and are therefore less reliable than annotation by naive coders adhering to written instructionswe feel that interannotator agreement studies should still be carried out as they serve as an assurance that the results are replicable when the annotators are chosen from the same population as the original annotatorsan important additional assurance should be provided in the form of an independent evaluation of the task for which the corpus is used one of the goals of this article is to help authors make an informed choice regarding the coefficients they use for measuring agreementwhile 
coefficients other than k specifically cohens x and krippendorffs α have appeared in the cl literature as early as carletta and passonneau and litman they had not sprung into general awareness until the publication of di eugenio and glass and passonneau regarding the question of annotator bias there is an overwhelming consensus in cl practice k and α are used in the vast majority of the studies we reportedwe agree with the view that k and α are more appropriate as they abstract away from the bias of specific codersbut we also believe that ultimately this issue of annotator bias is of little consequence because the differences get smaller and smaller as the number of annotators grows we believe that increasing the number of annotators is the best strategy because it reduces the chances of accidental personal biaseshowever krippendorffs α is indispensable when the category labels are not equally distinct from one anotherwe think there are at least two types of coding schemes in which this is the case hierarchical tagsets and setvalued interpretations such as those proposed for anaphoraat least in the second case weighted coefficients are almost unavoidablewe therefore recommend using α noting however that the specific choice of weights will affect the overall numerical resultwe view the lack of consensus on how to interpret the values of agreement coefficients as a serious problem with current practice in reliability testing and as one of the main reasons for the reluctance of many in cl to embark on reliability studiesunlike significance values which report a probability agreement coefficients report a magnitude and it is less clear how to interpret such magnitudesour own experience is consistent with that of krippendorff both in our earlier work and in the more recent efforts we found that only values above 08 ensured an annotation of reasonable quality we therefore feel that if a threshold needs to be set 08 is a good valuethat said we doubt that a single cutoff point is appropriate for all purposesfor some cl studies particularly on discourse useful corpora have been obtained while attaining reliability only at the 07 levelwe agree therefore with craggs and mcgee wood that setting a specific agreement threshold should not be a prerequisite for publicationinstead as recommended by di eugenio and glass and others researchers should report in detail on the methodology that was followed in collecting the reliability data whether agreement was statistically significant and provide a confusion matrix or agreement table so that readers can find out whether overall figures of agreement hide disagreements on less common categoriesfor an example of good practice in this respect see teufel and moens the decision whether a corpus is good enough for publication should be based on more than the agreement scorespecifically an important consideration is an independent evaluation of the results that are based on the corpuscomments and discussionspecial thanks to klaus krippendorff for an extremely detailed review of an earlier version of this articlewe are also extremely grateful to the british library in london which made accessible to us virtually every paper we needed for this research
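As a compact illustration of the unweighted coefficients surveyed above, the following minimal two-coder sketch (in Python; the function name and the toy data are ours, not code from any package cited in the survey) computes percentage agreement together with pi and kappa, and reproduces the prevalence effect discussed earlier: on skewed data observed agreement is high while the chance-corrected values stay moderate.

    from collections import Counter

    def unweighted_coefficients(coder_a, coder_b):
        # Percentage agreement, pi (Scott's pi, identical to Siegel & Castellan's K)
        # and Cohen's kappa for two coders who label the same items.
        # Illustrative sketch only; names and data are invented for this example.
        n = len(coder_a)
        a_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        # pi: chance agreement from a single distribution pooled over both coders
        pooled = Counter(coder_a) + Counter(coder_b)
        a_e_pi = sum((c / (2 * n)) ** 2 for c in pooled.values())
        # kappa: chance agreement from one distribution per coder
        pa, pb = Counter(coder_a), Counter(coder_b)
        a_e_kappa = sum((pa[k] / n) * (pb[k] / n) for k in set(pa) | set(pb))
        return a_o, (a_o - a_e_pi) / (1 - a_e_pi), (a_o - a_e_kappa) / (1 - a_e_kappa)

    # Skewed two-category data in the spirit of the epsilon/delta example:
    # 100 items, 90 agreed "common", 4 agreed "rare", 6 disagreements.
    coder_a = ["common"] * 90 + ["rare"] * 4 + ["common"] * 3 + ["rare"] * 3
    coder_b = ["common"] * 90 + ["rare"] * 4 + ["rare"] * 3 + ["common"] * 3
    print(unweighted_coefficients(coder_a, coder_b))
    # observed agreement is 0.94, yet pi = kappa is only about 0.54,
    # close to delta / (delta + epsilon) = 0.04 / 0.07

Because the two coders have identical marginals in this toy data, pi and kappa coincide, which mirrors the observation above that the empirical difference between the two families of coefficients is usually small.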
J08-4004
survey article intercoder agreement for computational linguisticsthis article is a survey of methods for measuring agreement among corpus annotatorsit exposes the mathematics and underlying assumptions of agreement coefficients covering krippendorffs alpha as well as scotts pi and cohens kappa discusses the use of coefficients in several annotation tasks and argues that weighted alphalike coefficients traditionally less used than kappalike measures in computational linguistics may be more appropriate for many corpus annotation tasks but that their use makes the interpretation of the value of the coefficient even hardera comprehensive overview of methods for measuring the interannotator agreement in various areas of computational linguistics was given in this work
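The last point in this summary, that weighted alpha-like coefficients make the value of the coefficient harder to interpret, can be made concrete with a small sketch: the same pair of annotations yields very different alpha values depending on the distance metric chosen. The code below is illustrative only (two coders, every coder labels every item, set-valued labels as in the anaphora discussion; the data and helper names are invented), and the MASI distance is implemented as the product of Passonneau's metric and the Jaccard-derived metric, following the description in the survey.

    from itertools import combinations

    def krippendorff_alpha(item_values, distance):
        # Krippendorff's alpha with a pluggable distance metric.
        # item_values: one list of coder judgments per item; distance(a, b)
        # must be 0 for identical judgments.  Illustrative sketch only.
        n_items = len(item_values)
        n_coders = len(item_values[0])
        pairs_per_item = n_coders * (n_coders - 1) / 2
        d_o = sum(distance(a, b) for vals in item_values
                  for a, b in combinations(vals, 2)) / (n_items * pairs_per_item)
        pooled = [v for vals in item_values for v in vals]
        n = len(pooled)
        d_e = sum(distance(a, b)
                  for a, b in combinations(pooled, 2)) / (n * (n - 1) / 2)
        return 1 - d_o / d_e

    def d_nominal(a, b):      # all distinct labels are equally distant
        return 0.0 if a == b else 1.0

    def d_jaccard(a, b):      # sets sharing more members are closer
        return 1.0 - len(a & b) / len(a | b)

    def d_passonneau(a, b):   # 0 identical, 1/3 subset, 2/3 intersection, 1 disjoint
        if a == b:
            return 0.0
        if a <= b or b <= a:
            return 1 / 3
        if a & b:
            return 2 / 3
        return 1.0

    def d_masi(a, b):         # product of the two metrics above, per the survey
        return d_passonneau(a, b) * d_jaccard(a, b)

    # Toy coreference chains: the "label" of a mention is the set of mentions in
    # its chain.  The coders agree except that coder B links m5 to the other chain.
    chains_a = {"m1": {"m1", "m3", "m5"}, "m2": {"m2", "m4"}, "m3": {"m1", "m3", "m5"},
                "m4": {"m2", "m4"}, "m5": {"m1", "m3", "m5"}, "m6": {"m6"}}
    chains_b = {"m1": {"m1", "m3"}, "m2": {"m2", "m4", "m5"}, "m3": {"m1", "m3"},
                "m4": {"m2", "m4", "m5"}, "m5": {"m2", "m4", "m5"}, "m6": {"m6"}}
    items = [[chains_a[m], chains_b[m]] for m in sorted(chains_a)]

    for name, metric in [("nominal", d_nominal), ("jaccard", d_jaccard),
                         ("passonneau", d_passonneau), ("masi", d_masi)]:
        print(name, round(krippendorff_alpha(items, metric), 3))

On this toy data the nominal metric yields an alpha close to zero, whereas the set-based metrics yield values above 0.5, even though the underlying judgments are the same; this is exactly the interpretation problem the summary points to.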
articles recognizing contextual polarity an exploration of features for phraselevel sentiment analysis many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity however the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the words prior polarity positive words are used in phrases expressing negative sentiments or vice versa also quite often words that are positive or negative out of context are neutral in context meaning they are not even being used to express a sentiment the goal of this work is to automatically distinguish between prior and contextual polarity with a focus on understanding which features are important for this task because an important aspect of the problem is identifying when polar terms are being used in neutral contexts features for distinguishing between neutral and polar instances are evaluated as well as features for distinguishing between positive and negative contextual polarity the evaluation includes assessing the performance of features across multiple machine learning algorithms for all learning algorithms except one the combination of all features together gives the best performance another facet of the evaluation considers how the presence of neutral instances affects the performance offeatures for distinguishing between positive and negative polarity these experiments show that the presence of neutral instances greatly degrades the performance of these features and that perhaps the best way to improve performance across all polarity classes is to improve the systems ability to identify when an instance is neutral many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity however the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the words prior polaritypositive words are used in phrases expressing negative sentiments or vice versaalso quite often words that are positive or negative out of context are neutral in context meaning they are not even being used to express a sentimentthe goal of this work is to automatically distinguish between prior and contextual polarity with a focus on understanding which features are important for this taskbecause an important aspect of the problem is identifying when polar terms are being used in neutral contexts features for distinguishing between neutral and polar instances are evaluated as well as features for distinguishing between positive and negative contextual polaritythe evaluation includes assessing the performance of features across multiple machine learning algorithmsfor all learning algorithms except one the combination of all features together gives the best performanceanother facet of the evaluation considers how the presence of neutral instances affects the performance offeatures for distinguishing between positive and negative polaritythese experiments show that the presence of neutral instances greatly degrades the performance of these features and that perhaps the best way to improve performance across all polarity classes is to improve the systems ability to identify when an instance is neutralsentiment analysis is a type of subjectivity analysis that focuses on identifying positive and negative opinions emotions and evaluations expressed in natural languageit has been a central component in applications ranging from recognizing inflammatory messages to tracking 
sentiments over time in online discussions to classifying positive and negative reviews although a great deal of work in sentiment analysis has targeted documents applications such as opinion question answering and review mining to extract opinions about companies and products require sentencelevel or even phraselevel analysisfor example if a question answering system is to successfully answer questions about peoples opinions it must be able not only to pinpoint expressions of positive and negative sentiments such as we find in sentence but also to determine when an opinion is not being expressed by a word or phrase that typically does evoke one such as condemned in sentence a common approach to sentiment analysis is to use a lexicon with information about which words and phrases are positive and which are negativethis lexicon may be manually compiled as is the case with the general inquirer a resource often used in sentiment analysisalternatively the information in the lexicon may be acquired automaticallyacquiring the polarity of words and phrases is itself an active line of research in the sentiment analysis community pioneered by the work of hatzivassiloglou and mckeown on predicting the polarity or semantic orientation of adjectivesvarious techniques have been proposed for learning the polarity of wordsthey include corpusbased techniques such as using constraints on the cooccurrence in conjunctions of words with similar or opposite polarity and statistical measures of word association as well as techniques that exploit information about lexical relationships and glosses in resources such as wordnetacquiring the polarity of words and phrases is undeniably important and there are still open research challenges such as addressing the sentiments of different senses of words and so onhowever what the polarity of a given word or phrase is when it is used in a particular context is another problem entirelyconsider for example the underlined positive and negative words in the following sentencethe first underlined word is trustalthough many senses of the word trust express a positive sentiment in this case the word is not being used to express a sentiment at allit is simply part of an expression referring to an organization that has taken on the charge of caring for the environmentthe adjective well is considered positive and indeed it is positive in this contexthowever the same is not true for the words reason and reasonableout of context we would consider both of these words to be positive1 in context the word reason is being negated changing its polarity from positive to negativethe phrase no reason at all to believe changes the polarity of the proposition that follows because reasonable falls within this proposition its polarity becomes negativethe word polluters has a negative connotation but here in the context of the discussion of the article and its position in the sentence polluters is being used less to express a sentiment and more to objectively refer to companies that polluteto clarify how the polarity of polluters is affected by its subject role consider the purely negative sentiment that emerges when it is used as an object they are polluterswe call the polarity that would be listed for a word in a lexicon the words prior polarity and we call the polarity of the expression in which a word appears considering the context of the sentence and document the words contextual polarityalthough words often do have the same prior and contextual polarity many times a words prior and 
contextual polarities differwords with a positive prior polarity may have a negative contextual polarity or vice versaquite often words that are positive or negative out of context are neutral in context meaning that they are not even being used to express a sentimentsimilarly words that are neutral out of context neither positive or negative may combine to create a positive or negative expression in contextthe focus of this work is on the recognition of contextual polarityin particular disambiguating the contextual polarity of words with positive or negative prior polaritywe begin by presenting an annotation scheme for marking sentiment expressions and their contextual polarity in the multiperspective question answering opinion corpuswe show that given a set of subjective expressions contextual polarity can be annotated reliablyusing the contextual polarity annotations we conduct experiments in automatically distinguishing between prior and contextual polaritybeginning with a large lexicon of clues tagged with prior polarity we identify the contextual polarity of the instances of those clues in the corpusthe process that we use has two steps first classifying each clue as being in a neutral or polar phrase and then disambiguating the contextual polarity of the clues marked as polarfor each step in the process we experiment with a variety of features and evaluate the performance of the features using several different machine learning algorithmsour experiments reveal a number of interesting findingsfirst being able to accurately identify neutral contextual polaritywhen a positive or negative clue is not being used to express a sentimentis an important aspect of the problemthe importance of neutral examples has previously been noted for classifying the sentiment of documents but ours is the first work to explore how neutral instances affect classifying the contextual polarity of words and phrasesin particular we found that the performance of features for distinguishing between positive and negative polarity greatly degrades when neutral instances are included in the experimentswe also found that achieving the best performance for recognizing contextual polarity requires a wide variety of featuresthis is particularly true for distinguishing between neutral and polar instancesalthough some features help to increase polar or neutral recall or precision it is only the combination of features together that achieves significant improvements in accuracy over the baselinesour experiments show that for distinguishing between positive and negative instances features capturing negation are clearly the most importanthowever there is more to the story than simple negationfeatures that capture relationships between instances of clues also perform well indicating that identifying features that represent more complex interdependencies between sentiment clues may be an important avenue for future researchthe remainder of this article is organized as followssection 2 gives an overview of some of the things that can influence contextual polarityin section 3 we describe our corpus and present our annotation scheme and interannotator agreement study for marking contextual polaritysections 4 and 5 describe the lexicon used in our experiments and how the contextual polarity annotations are used to determine the goldstandard tags for instances from the lexiconin section 6 we consider what kind of performance can be expected from a simple priorpolarity classifiersection 7 describes the features that we use for 
recognizing contextual polarity and our experiments and results are presented in section 8in section 9 we discuss related work and we conclude in section 10phraselevel sentiment analysis is not a simple problemmany things besides negation can influence contextual polarity and even negation is not always straightforwardnegation may be local or involve longerdistance dependencies such as the negation of the proposition or the negation of the subject in addition certain phrases that contain negation words intensify rather than change polarity contextual polarity may also be influenced by modality whether the proposition is asserted to be real or not real word sense the syntactic role of a word in the sentence whether the word is the subject or object of a copular verb and diminishers such as little polanyi and zaenen give a detailed discussion of many of these types of polarity influencersmany of these contextual polarity influencers are represented as features in our experimentscontextual polarity may also be influenced by the domain or topicfor example the word cool is positive if used to describe a car but it is negative if it is used to describe someone is demeanorsimilarly a word such as fever is unlikely to be expressing a sentiment when used in a medical contextwe use one feature in our experiments to represent the topic of the documentanother important aspect of contextual polarity is the perspective of the person who is expressing the sentimentfor example consider the phrase failed to defeat in the sentence israel failed to defeat hezbollahfrom the perspective of israel failed to defeat is negativefrom the perspective of hezbollah failed to defeat is positivetherefore the contextual polarity of this phrase ultimately depends on the perspective of who is expressing the sentimentalthough automatically detecting this kind of pragmatic influence on polarity is beyond the scope of this work this as well as the other types of polarity influencers all are considered when annotating contextual polarityfor the experiments in this work we need a corpus that is annotated comprehensively for sentiment expressions and their contextual polarityrather than building a corpus from scratch we chose to add contextual polarity annotations to the existing annotations in the multiperspective question answering opinion corpus2 the mpqa corpus is a collection of englishlanguage versions of news documents from the world pressthe documents contain detailed expressionlevel annotations of attributions and private states private states are mental and emotional states they include beliefs speculations intentions and sentiments among othersalthough sentiments are not distinguished from other types of private states in the existing annotations they are a subset of what already is annotatedthis makes the annotations in the mpqa corpus a good starting point for annotating sentiment expressions and their contextual polaritywhen developing our annotation scheme for sentiment expressions and contextual polarity there were three main questions to addressfirst which of the existing annotations in the mpqa corpus have the possibility of being sentiment expressionssecond which of the possible sentiment expressions actually are expressing sentimentsthird what coding scheme should be used for marking contextual polaritythe mpqa annotation scheme has four types of annotations objective speech event frames two types of private state frames and agent frames that are used for marking speakers of speech events and experiencers of 
private statesa full description of the mpqa annotation scheme and an agreement study evaluating key aspects of the scheme are found in wiebe wilson and cardie the two types of private state frames direct subjective frames and expressive subjective element frames are where we will find sentiment expressionsdirect subjective frames are used to mark direct references to private states as well as speech events in which private states are being expressedfor example in the following sentences fears praised and said are all marked as direct subjective annotationsthe word fears directly refers to a private state praised refers to a speech event in which a private state is being expressed and said is marked as direct subjective because a private state is being expressed within the speech event referred to by saidexpressive subjective elements indirectly express private states through the way something is described or through a particular wordingin example the phrase full of absurdities is an expressive subjective elementsubjectivity refers to the linguistic expression of private states hence the names for the two types of private state annotationsall expressive subjective elements are included in the set of annotations that have the possibility of being sentiment expressions but the direct subjective frames to include in this set can be pared down furtherdirect subjective frames have an attribute expression intensity that captures the contribution of the annotated word or phrase to the overall intensity of the private state being expressedexpression intensity ranges from neutral to highin the given sentences fears and praised have an expression intensity of medium and said has an expression intensity of neutrala neutral expression intensity indicates that the direct subjective phrase itself is not contributing to the expression of the private stateif this is the case then the direct subjective phrase cannot be a sentiment expressionthus only direct subjective annotations with a nonneutral expression intensity are included in the set of annotations that have the possibility of being sentiment expressionswe call this set of annotations the union of the expressive subjective elements and the direct subjective frames with a nonneutral intensity the subjective expressions in the corpus these are the annotations we will mark for contextual polaritytable 1 gives a sample of subjective expressions marked in the mpqa corpusalthough many of the words and phrases express what we typically think of as sentiments others do not for example believes very definitely and unconditionally and without delaynow that we have identified which annotations have the possibility of being sentiment expressions the next question is which of these annotated words and phrases are actually expressing sentimentswe define a sentiment as a positive or negative emotion evaluation or stanceon the left of table 2 are examples of positive sentiments examples of negative sentiments are on the rightsample of subjective expressions from the mpqa corpus victory of justice and freedom such a disadvantageous situation grown tremendously must such animosity not true at all throttling the voice imperative for harmonious society disdain and wrath glorious so exciting disastrous consequences could not have wished for a better situation believes freak show the embodiment of twosided justice if you are not with us you are against us appalling vehemently denied very definitely everything good and nice once and for all under no circumstances 
shameful mum most fraudulent terrorist and extremist enthusiastically asked number one democracy hate seems to think gross misstatement indulging in bloodshe would and their lunaticism surprised to put it mildly take justice to prehistoric times unconditionally and without delay so conservative that it makes pat buchanan look vegetarian those digging graves for others get engraved themselves lost the reputation of commitment to principles of human justice ultimately the demon they have reared will eat up their own vitals the final issue to address is the actual annotation scheme for marking contextual polaritythe scheme we developed has four tags positive negative both and neutralthe positive tag is used to mark positive sentimentsthe negative tag is used to mark negative sentimentsthe both tag is applied to expressions in which both a positive and negative sentiment are being expressedsubjective expressions with positive negative or both tags are our sentiment expressionsthe neutral tag is used for all other subjective expressions including emotions evaluations and stances that are neither positive or negativeinstructions for the contextualpolarity annotation scheme are available at httpwwwcspittedumpqadatabasereleasepolaritycodinginstructionstxtfollowing are examples from the corpus of each of the different contextualpolarity annotationseach underlined word or phrase is a subjective expression that was marked in the original mpqa annotations3 in bold following each subjective expression is the contextual polarity with which it was annotatedto measure the reliability of the polarity annotation scheme we conducted an agreement study with two annotators4 using 10 documents from the mpqa corpusthe 10 documents contain 447 subjective expressionstable 3 shows the contingency table for the two annotators judgmentsoverall agreement is 82 with a kappa value of 072as part of the annotation scheme annotators are asked to judge how certain they are in their polarity tagsfor 18 of the subjective expressions at least one annotator used the uncertain tag when marking polarityif we consider these cases to be borderline and exclude them from the study percent agreement increases to 90 and kappa rises to 084table 4 shows the revised contingency table with the uncertain cases removedthis shows that annotator agreement is especially high when both annotators are certain and that annotators are certain for over 80 of their tagsnote that all annotations are included in the experimentsin total all 19962 subjective expressions in the 535 documents of the mpqa corpus were annotated with their contextual polarity as just described5 three annotators carried out the task the two who participated in the annotation study and a third who was trained later6 table 5 gives the distribution of the contextual polarity tagslooking at this table we see that a small majority of subjective expressions are expressing a positive negative or both sentimentwe refer to these expressions as polar in contextmany of the subjective expressions are neutral and do not express a sentimentthis suggests that although sentiment is a major type of subjectivity distinguishing other prominent types of subjectivity will be important for future work in subjectivity analysisas many nlp applications operate at the sentence level one important issue to consider is the distribution of sentences with respect to the subjective expressions they containin the 11112 sentences in the mpqa corpus 28 contain no subjective expressions 24 contain only one and 
48% contain two or more. Of the 5,304 sentences containing two or more subjective expressions, 17% contain mixtures of positive and negative expressions, and 61% contain mixtures of polar and neutral subjective expressions.

For the experiments in this article, we use a lexicon of over 8,000 subjectivity clues. Subjectivity clues are words and phrases that may be used to express private states; in other words, subjectivity clues have subjective usages, though they may have objective usages as well. For this work, only single-word clues are used.

To compile the lexicon, we began with the list of subjectivity clues from Riloff and Wiebe, which includes the positive and negative adjectives from Hatzivassiloglou and McKeown. The words in this list were grouped in previous work according to their reliability as subjectivity clues. Words that are subjective in most contexts are considered strong subjective clues, indicated by the strongsubj tag; words that may only have certain subjective usages are considered weak subjective clues, indicated by the weaksubj tag. We expanded the list using a dictionary and a thesaurus, and added words from the General Inquirer positive and negative word lists that we judged to be potentially subjective. We also gave the new words strongsubj and weaksubj reliability tags. The final lexicon has a coverage of 67% of subjective expressions in the MPQA corpus, where coverage is the percentage of subjective expressions containing one or more instances of clues from the lexicon. The coverage of just sentiment expressions is even higher: 75%.

The next step was to tag the clues in the lexicon with their prior polarity: positive, negative, both, or neutral. A word in the lexicon is tagged as positive if out of context it seems to evoke something positive, and negative if it seems to evoke something negative. If a word has both positive and negative meanings, it is tagged with the polarity that seems the most common. A word is tagged as both if it is at the same time both positive and negative. For example, the word bittersweet evokes something both positive and negative. Words like brag are also tagged as both, because the one who is bragging is expressing something positive, yet at the same time, describing someone as bragging is expressing a negative evaluation of that person. A word is tagged as neutral if it does not evoke anything positive or negative. For words that came from positive and negative word lists, we largely retained their original polarity; however, we did change the polarity of a word if we strongly disagreed with its original class. For example, the word apocalypse is listed as positive in the General Inquirer; we changed its prior polarity to negative for our lexicon.

By far the majority of clues in the lexicon are marked as having either positive or negative prior polarity. Only a small number of clues are marked as having both positive and negative polarity. We refer to the set of clues marked as positive, negative, or both as sentiment clues. A total of 6.9% of the clues in the lexicon are marked as neutral. Examples of neutral clues are verbs such as feel, look, and think, and intensifiers such as deeply, entirely, and practically. Although the neutral clues make up a small proportion of the total words in the lexicon, we retain them for our later experiments in recognizing contextual polarity because many of them are good clues that a sentiment is being expressed; including them increases the coverage of the system.
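To make the lexicon representation concrete, the following minimal Python sketch shows one way the clue entries and prior-polarity lookup described above might be represented. The article does not prescribe a data structure or file format, so the Clue class, the handful of entries, and the reliability tags assigned to brag, bittersweet, apocalypse, and think are illustrative assumptions, not the actual 8,000-clue lexicon.

from dataclasses import dataclass

@dataclass(frozen=True)
class Clue:
    word: str          # single-word clue (only single-word clues are used in this work)
    reliability: str   # "strongsubj" or "weaksubj"
    prior: str         # prior polarity: "positive", "negative", "both", or "neutral"

# A tiny illustrative lexicon; the real lexicon has over 8,000 clues.
LEXICON = {
    c.word: c
    for c in [
        Clue("bittersweet", "strongsubj", "both"),     # evokes something both positive and negative
        Clue("brag", "strongsubj", "both"),
        Clue("apocalypse", "strongsubj", "negative"),  # changed from the General Inquirer's positive tag
        Clue("think", "weaksubj", "neutral"),          # neutral clue retained for coverage
        Clue("substantial", "weaksubj", "positive"),
        Clue("challenge", "weaksubj", "negative"),
    ]
}

def prior_polarity(token: str) -> str:
    """Return the lexicon's prior polarity for a token, or 'none' if it is not a clue."""
    clue = LEXICON.get(token.lower())
    return clue.prior if clue is not None else "none"

print(prior_polarity("Brag"))         # both
print(prior_polarity("environment"))  # none

Coverage in the sense used above can then be computed by checking, for each annotated subjective expression, whether it contains at least one token whose lookup is not "none".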
At the end of the previous section, we considered the distribution of sentences in the MPQA corpus with respect to the subjective expressions they contain. It is interesting to compare that distribution with the distribution of sentences with respect to the instances they contain of clues from the lexicon. We find that there are more sentences with two or more clue instances than sentences with two or more subjective expressions. More importantly, many more sentences have mixtures of positive and negative clue instances than actually have mixtures of positive and negative subjective expressions. Only 880 sentences have a mixture of both positive and negative subjective expressions, whereas 3,234 sentences have a mixture of positive and negative clue instances. Thus, a large number of positive and negative instances are either neutral in context, or they are combining to form more complex polarity expressions. Either way, this provides strong evidence of the need to be able to disambiguate the contextual polarity of subjectivity and sentiment clues.

In the experiments described in the following sections, the goal is to classify the contextual polarity of the expressions that contain instances of the subjectivity clues in our lexicon. However, determining which clue instances are part of the same expression and identifying expression boundaries are not the focus of this work. Thus, instead of trying to identify and label each expression, in the following experiments each clue instance is labeled individually as to its contextual polarity. We define the gold-standard contextual polarity of a clue instance in terms of the manual annotations as follows. If a clue instance is not in a subjective expression, its gold class is neutral. If a clue instance appears in just one subjective expression, or in multiple subjective expressions with the same contextual polarity, its gold class is the contextual polarity of the subjective expression(s). If a clue instance appears in a mixture of negative and neutral subjective expressions, its gold class is negative; if it is in a mixture of positive and neutral subjective expressions, its gold class is positive. Finally, if a clue instance appears in at least one positive and one negative subjective expression, then its gold class is both. A clue instance can appear in more than one subjective expression because, in the MPQA annotation scheme, it is possible for direct subjective frames and expressive subjective element frames to overlap.
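The gold-class rules just described translate directly into a small decision function. The sketch below assumes the caller has already collected the contextual-polarity tags of every subjective expression that covers a given clue instance; how mixtures involving a both-tagged expression combine with other tags is not spelled out in the article, so the final fallback is an assumption.

def gold_class(expression_polarities: list[str]) -> str:
    """Gold-standard contextual polarity of a clue instance, given the polarity tags
    ('positive', 'negative', 'both', 'neutral') of the subjective expressions containing it."""
    pols = set(expression_polarities)
    if not pols:                                   # not inside any subjective expression
        return "neutral"
    if len(pols) == 1:                             # one expression, or several with the same polarity
        return pols.pop()
    if "positive" in pols and "negative" in pols:  # at least one positive and one negative expression
        return "both"
    if pols == {"negative", "neutral"}:            # mixture of negative and neutral
        return "negative"
    if pols == {"positive", "neutral"}:            # mixture of positive and neutral
        return "positive"
    return "both"                                  # assumption: mixtures involving 'both' default to both

print(gold_class([]))                              # neutral
print(gold_class(["positive", "neutral"]))         # positive
print(gold_class(["positive", "negative"]))        # both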
Before delving into the task of recognizing contextual polarity, an important question to address is how useful prior polarity alone is for identifying contextual polarity. To answer this question, we create a classifier that simply assumes that the contextual polarity of a clue instance is the same as the clue's prior polarity. We explore this classifier's performance on a small amount of development data, which is not part of the data used in the subsequent experiments.

This simple classifier has an accuracy of 48%. From the confusion matrix given in Table 6, we see that 76% of the errors result from words with non-neutral prior polarity appearing in phrases with neutral contextual polarity. Only 12% of the errors result from words with neutral prior polarity appearing in expressions with non-neutral contextual polarity, and only 11% of the errors come from words with a positive or negative prior polarity appearing in expressions with the opposite contextual polarity. Table 6 also shows that positive clues tend to be used in negative expressions far more often than negative clues tend to be used in positive expressions.

Given that by far the largest number of errors come from clues with positive, negative, or both prior polarity appearing in neutral contexts, we were motivated to try a two-step approach to the problem of sentiment classification. The first step, neutral-polar classification, tries to determine if an instance is neutral or polar in context. The second step, polarity classification, takes all instances that step one classified as polar and tries to disambiguate their contextual polarity. This two-step approach is illustrated in Figure 1.

Figure 1. Two-step approach to recognizing contextual polarity.

The features used in our experiments were motivated both by the literature and by exploration of the contextual-polarity annotations in our development data. A number of features were inspired by the paper on contextual-polarity influencers by Polanyi and Zaenen; other features are those that have been found useful in the past for recognizing subjective sentences.

For distinguishing between neutral and polar instances, we use the features listed in Table 7. For ease of description, we group the features into six sets: word features, general modification features, polarity modification features, structure features, sentence features, and one document feature.

Word features. In addition to the word token, the word features include the parts of speech of the previous word, the word itself, and the next word. The prior polarity and reliability class features represent those pieces of information about the clue, which are taken from the lexicon.

General modification features. These are binary features that capture different types of relationships involving the clue instance. The first four features involve relationships with the word immediately before or after the clue instance. The preceded by adjective feature is true if the clue instance is a noun preceded by an adjective. The preceded by adverb feature is true if the preceding word is an adverb other than not. The preceded by intensifier feature is true if the preceding word is an intensifier, and the self intensifier feature is true if the clue instance itself is an intensifier. A word is considered to be an intensifier if it appears in a list of intensifiers and if it precedes a word of the appropriate part of speech. The list of intensifiers is a compilation of those listed in Quirk et al., intensifiers identified from existing entries in the subjectivity lexicon, and intensifiers identified during explorations of the development data.

The modifies/modified by features involve the dependency parse tree of the sentence, obtained by first parsing the sentence and then converting the tree into its dependency representation. In a dependency representation, every node in the tree structure is a surface word; the parent word is called the head, and its children are its modifiers. The edge between a parent and a child specifies the grammatical relationship between the two words. Figure 2 shows an example of a dependency parse tree; instances of clues in the tree are marked with the clue's prior polarity and reliability class from the lexicon.

Figure 2. The dependency tree for the sentence The human rights report poses a substantial challenge to the U.S. interpretation of good and evil. Prior polarity and reliability class are marked in parentheses for words that match clues from the lexicon.

For each clue instance, the modifies/modified by features capture whether there are adj, mod, or vmod relationships between the clue instance and any other instances from the lexicon. Specifically, the modifies strongsubj feature is true if the clue instance and its parent share an adj, mod, or vmod relationship, and if its parent is an instance of a strongsubj clue from the lexicon. The modifies weaksubj feature is the same, except that it looks in the parent for an instance of a weaksubj clue. The modified by strongsubj feature is true for a clue instance if one of its children is an instance of a strongsubj clue, and if the clue instance and its child share an adj, mod, or vmod relationship. The modified by weaksubj feature is the same, except that it looks for instances of weaksubj clues in the children. Although the adj and vmod relationships are typically local, the mod relationship involves longer-distance as well as local dependencies.

Figure 2 helps to illustrate these features. The modifies weaksubj feature is true for substantial, because substantial modifies challenge, which is an instance of a weaksubj clue. For rights, the modifies weaksubj feature is false, because rights modifies report, which is not an instance of a weaksubj clue. The modified by weaksubj feature is false for substantial, because it has no modifiers that are instances of weaksubj clues. For challenge, the modified by weaksubj feature is true, because it is being modified by substantial, which is an instance of a weaksubj clue.
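A minimal sketch of the modifies/modified by features is given below, assuming a dependency tree has already been produced and that clue reliability can be looked up in the lexicon. The Token class, the relation labels, and the two lexicon entries are illustrative; the relation inventory (adj, mod, vmod) follows the article's converter rather than any particular modern parser.

from dataclasses import dataclass
from typing import Optional

MOD_RELS = {"adj", "mod", "vmod"}   # relations counted by the modifies/modified by features

# Stand-in for the lexicon's reliability lookup (illustrative entries only).
RELIABILITY = {"substantial": "weaksubj", "challenge": "weaksubj"}

@dataclass
class Token:
    index: int
    word: str
    head: Optional[int]   # index of the parent token, or None for the root
    rel: str              # grammatical relation between this token and its parent

def modification_features(tree: list[Token], i: int) -> dict[str, bool]:
    """Binary modifies/modified by features for the clue instance at position i."""
    feats = {"modifies_strongsubj": False, "modifies_weaksubj": False,
             "modified_by_strongsubj": False, "modified_by_weaksubj": False}
    tok = tree[i]
    # Parent side: the clue modifies its head through an adj/mod/vmod relation.
    if tok.head is not None and tok.rel in MOD_RELS:
        cls = RELIABILITY.get(tree[tok.head].word.lower())
        if cls in ("strongsubj", "weaksubj"):
            feats[f"modifies_{cls}"] = True
    # Child side: some child modifies the clue through an adj/mod/vmod relation.
    for child in tree:
        if child.head == i and child.rel in MOD_RELS:
            cls = RELIABILITY.get(child.word.lower())
            if cls in ("strongsubj", "weaksubj"):
                feats[f"modified_by_{cls}"] = True
    return feats

# Fragment of Figure 2: "substantial" modifies "challenge".
tree = [Token(0, "substantial", 1, "adj"), Token(1, "challenge", None, "root")]
print(modification_features(tree, 0))   # modifies_weaksubj is True
print(modification_features(tree, 1))   # modified_by_weaksubj is True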
Polarity modification features. The modifies polarity, modified by polarity, and conj polarity features capture specific relationships between the clue instance and other sentiment clues it may be related to. If the clue instance and its parent in the dependency tree share an obj, adj, mod, or vmod relationship, the modifies polarity feature is set to the prior polarity of the parent. If the parent is not in the prior-polarity lexicon, its prior polarity is considered neutral. If the clue instance is at the root of the tree and has no parent, the value of the feature is notmod. The modified by polarity feature is similar, looking for adj, mod, and vmod relationships and other sentiment clues in the children of the clue instance. The conj polarity feature determines if the clue instance is in a conjunction. If so, the value of this feature is its sibling's prior polarity; as before, if the sibling is not in the lexicon, its prior polarity is neutral. If the clue instance is not in a conjunction, the value for this feature is notmod.

Figure 2 also helps to illustrate these modification features. The word substantial, with positive prior polarity, modifies the word challenge, with negative prior polarity; therefore, the modifies polarity feature is negative for substantial, and the modified by polarity feature is positive for challenge. The words good and evil are in a conjunction together; thus, the conj polarity feature is negative for good and positive for evil.

Structure features. These are binary features that are determined by starting with the clue instance and climbing up the dependency parse tree toward the root, looking for particular relationships, words, or patterns. The in subject feature is true if we find a subj relationship on the path to the root. The in copular feature is true if in subject is false and if a node along the path is both a main verb and a copular verb. The in passive feature is true if a passive verb pattern is found on the climb.

The in subject and in copular features were motivated by the intuition that the syntactic role of a word may influence whether a word is being used to express a sentiment. For example, consider the word polluters in each of the following two sentences. In the first sentence, polluters is simply being used as a referring expression. In the second sentence, polluters is clearly being used to express a negative evaluation of the farmers. The motivation for the in passive feature was previous work by Riloff and Wiebe, who found that different words are more or less likely to be subjective depending on whether they are in the active or passive.
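The structure features can be read as a single climb from the clue instance to the root. The sketch below assumes the parse exposes, for each node, its relation to its parent plus flags for main verb, copular verb, and passive pattern; those flags and the example tree are assumptions for illustration, since the article does not specify how the passive pattern is detected.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    word: str
    rel: str                     # relation to the parent ("subj", "obj", "mod", ...)
    is_main_verb: bool = False
    is_copular: bool = False
    is_passive: bool = False     # node matches a passive verb pattern (detection not shown here)
    parent: Optional["Node"] = None

def structure_features(clue: Node) -> dict[str, bool]:
    """in_subject, in_copular, and in_passive, computed by climbing from the clue toward the root."""
    in_subject = in_copular = in_passive = False
    node = clue
    while node is not None:
        in_subject = in_subject or node.rel == "subj"
        in_copular = in_copular or (node.is_main_verb and node.is_copular)
        in_passive = in_passive or node.is_passive
        node = node.parent
    # The article defines in_copular to be true only when in_subject is false.
    if in_subject:
        in_copular = False
    return {"in_subject": in_subject, "in_copular": in_copular, "in_passive": in_passive}

# Climbing from "polluters" inside a subject noun phrase: a subj relation lies on the path to the root.
root = Node("defeated", rel="root", is_main_verb=True)
subject = Node("farmers", rel="subj", parent=root)
polluters = Node("polluters", rel="mod", parent=subject)
print(structure_features(polluters))   # {'in_subject': True, 'in_copular': False, 'in_passive': False}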
Sentence features. These are features that previously were found useful for sentence-level subjectivity classification. They include counts of strongsubj and weaksubj clue instances in the current, previous, and next sentences, counts of adjectives and adverbs other than not in the current sentence, and binary features to indicate whether the sentence contains a pronoun, a cardinal number, and a modal other than will.

Document feature. There is one document feature, representing the topic or domain of the document. The motivation for this feature is that whether or not a word is expressing a sentiment, or even a private state in general, may depend on the subject of the discourse. For example, the words fever and sufferer may express a negative sentiment in certain contexts, but probably not in a health or medical context, as is the case in the following sentence: The disease can be contracted if a person is bitten by a certain tick or if a person comes into contact with the blood of a Congo fever sufferer. In the creation of the MPQA corpus, about two-thirds of the documents were selected to be on one of the 10 topics listed in Table 8. The documents for each topic were identified by human searches and by an information retrieval system. The remaining documents were semi-randomly selected from a very large pool of documents from the world press; in the corpus, these documents are listed with the topic miscellaneous. Rather than leaving these documents unlabeled, we chose to label them using the following general domain categories: economics, general politics, health, report events, and war and terrorism.

Table 9 lists the features that we use for step two, polarity classification. Word token, word prior polarity, and the polarity-modification features are the same as described for neutral-polar classification. We use two features to capture two different types of negation. The negated feature is a binary feature that is used to capture more local negations: its value is true if a negation word or phrase is found within the four words preceding the clue instance, and if the negation word is not also in a phrase that acts as an intensifier rather than a negator. Examples of phrases that intensify rather than negate are not only and nothing if not. The negated subject feature captures a longer-distance type of negation. This feature is true if the subject of the clause containing the clue instance is negated. For example, the negated subject feature is true for support in the following sentence: No politically prudent Israeli could support either of them.

The last three polarity features look in a window of four words before the clue instance, searching for the presence of particular types of polarity influencers. General polarity shifters reverse polarity; negative polarity shifters typically make the polarity of an expression negative; positive polarity shifters typically make the polarity of an expression positive. The polarity influencers that we used were identified through explorations of the development data.
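The window-based negation and shifter features reduce to a scan over the four words preceding the clue instance. In the sketch below, the negation list, the intensifying phrases, and especially the shifter lists are illustrative placeholders (the article's shifter lists were compiled from the development data and are not reproduced here); the longer-distance negated subject feature, which needs clause structure, is not covered.

NEGATION_WORDS = {"not", "no", "never", "nobody", "nothing", "neither"}   # illustrative list
# Phrases in which a negation word intensifies rather than negates (examples from the article).
NOT_NEGATING = [("not", "only"), ("nothing", "if", "not")]
GENERAL_SHIFTERS = {"little"}    # placeholder entries; the real lists come from development data
NEGATIVE_SHIFTERS = {"lack"}     # placeholder
POSITIVE_SHIFTERS = {"abate"}    # placeholder

def _in_false_negation(words: list[str], j: int) -> bool:
    """True if the negation word at position j starts one of the intensifying phrases."""
    for phrase in NOT_NEGATING:
        if tuple(w.lower() for w in words[j:j + len(phrase)]) == phrase:
            return True
    return False

def window_features(words: list[str], i: int) -> dict[str, bool]:
    """Negation and polarity-shifter features computed over the four words before position i."""
    feats = {"negated": False, "general_shifter": False,
             "negative_shifter": False, "positive_shifter": False}
    for j in range(max(0, i - 4), i):
        w = words[j].lower()
        if w in NEGATION_WORDS and not _in_false_negation(words, j):
            feats["negated"] = True
        if w in GENERAL_SHIFTERS:
            feats["general_shifter"] = True
        if w in NEGATIVE_SHIFTERS:
            feats["negative_shifter"] = True
        if w in POSITIVE_SHIFTERS:
            feats["positive_shifter"] = True
    return feats

sent = "there is no reason at all to believe".split()
print(window_features(sent, sent.index("reason")))   # negated is True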
We have two primary goals with our experiments in recognizing contextual polarity. The first is to evaluate the features described in Section 7 as to their usefulness for this task. The second is to investigate the importance of recognizing neutral instances (recognizing when a sentiment clue is not being used to express a sentiment) for classifying contextual polarity.

To evaluate features, we investigate their performance both together and separately, across several different learning algorithms. Varying the learning algorithm allows us to verify that the features are robust, and that their performance is not the artifact of a particular algorithm. We experiment with four different types of machine learning: boosting, memory-based learning, rule learning, and support vector learning. For boosting, we use BoosTexter (AdaBoost.MH); for rule learning, we use Ripper; for memory-based learning, we use TiMBL (IB1); for support vector learning, we use SVMlight and SVMmulticlass. SVMlight is used for the experiments involving binary classification, and SVMmulticlass is used for experiments with more than two classes. These machine learning algorithms were chosen because they have been used successfully for a number of natural language processing tasks, and they represent several different types of learning.

For all of the classification algorithms except for SVM, the features for a clue instance are represented as they are presented in Section 7. For SVM, the representations for numeric and discrete-valued features are changed. Numeric features, such as the count of strongsubj clue instances in a sentence, are scaled to range between 0 and 1. Discrete-valued features, such as the reliability class feature, are converted into multiple binary features. For example, the reliability class feature is represented by two binary features: one for whether the clue instance is a strongsubj clue, and one for whether the clue instance is a weaksubj clue.

To investigate the importance of recognizing neutral instances, we perform two sets of polarity classification experiments. First, we experiment with classifying the polarity of all gold-standard polar instances: the clue instances identified as polar in context by the manual polarity annotations. Second, we experiment with using the polar instances identified automatically by the neutral-polar classifiers. Because the second set of experiments includes the neutral instances misclassified in step one, we can compare results for the two sets of experiments to see how the noise of neutral instances affects the performance of the polarity features.

All experiments are performed using 10-fold cross validation over a test set of 10,287 sentences from 494 MPQA corpus documents. We measure performance in terms of accuracy, recall, precision, and F-measure. Accuracy is simply the percentage of instances correctly classified. Recall, precision, and F-measure for a given class $c$ are defined as follows. Recall is the percentage of all instances of class $c$ correctly identified:

$$\mathrm{Rec}(c) = \frac{|\text{instances of } c \text{ correctly identified}|}{|\text{all instances of } c|}$$

Precision is the percentage of instances classified as class $c$ that are class $c$ in truth:

$$\mathrm{Prec}(c) = \frac{|\text{instances of } c \text{ correctly identified}|}{|\text{all instances identified as } c|}$$

F-measure is the harmonic mean of recall and precision:

$$F(c) = \frac{2 \cdot \mathrm{Prec}(c) \cdot \mathrm{Rec}(c)}{\mathrm{Prec}(c) + \mathrm{Rec}(c)}$$
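The metrics above translate directly into a few lines of code. The sketch below computes them from parallel lists of gold and predicted labels; it is only the scoring piece, and the cross-validation splits and the learners themselves are external to it.

def per_class_metrics(gold: list[str], pred: list[str], cls: str) -> dict[str, float]:
    """Recall, precision, and F-measure for one class, following the definitions above."""
    assert len(gold) == len(pred)
    correct = sum(1 for g, p in zip(gold, pred) if g == p == cls)
    n_gold = sum(1 for g in gold if g == cls)   # all instances of the class
    n_pred = sum(1 for p in pred if p == cls)   # all instances identified as the class
    rec = correct / n_gold if n_gold else 0.0
    prec = correct / n_pred if n_pred else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"recall": rec, "precision": prec, "f_measure": f}

def accuracy(gold: list[str], pred: list[str]) -> float:
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

gold = ["polar", "polar", "neutral", "neutral", "polar"]
pred = ["polar", "neutral", "neutral", "polar", "polar"]
print(accuracy(gold, pred))                     # 0.6
print(per_class_metrics(gold, pred, "polar"))   # recall 0.667, precision 0.667, F-measure 0.667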
In our two-step process for recognizing contextual polarity, the first step is neutral-polar classification: determining whether each instance of a clue from the lexicon is neutral or polar in context. In our test set, there are 26,729 instances of clues from the lexicon. The features we use for this step were listed above in Table 7 and described in Section 7.1. In this section, we perform two sets of experiments. In the first, we compare the results of neutral-polar classification using all the neutral-polar features against two baselines. The first baseline uses just the word token feature; the second baseline uses the word token and prior polarity features. In the second set of experiments, we explore the performance of different sets of features for neutral-polar classification.

Research has shown that the performance of learning algorithms for NLP tasks can vary widely depending on their parameter settings, and that the optimal parameter settings can also vary depending on the set of features being evaluated. Although the goal of this work is not to identify the optimal configuration for each algorithm and each set of features, we still want to make a reasonable attempt to find a good configuration for each algorithm. To do this, we perform 10-fold cross validation of the more challenging baseline classifier on the development data, varying select parameter settings. The results from those experiments are then used to select the parameter settings for each algorithm. For BoosTexter, we vary the number of rounds of boosting. For TiMBL, we vary the value for k and the distance metric. For Ripper, we vary whether negative tests are disallowed for nominal and set-valued attributes, and how much to simplify the hypothesis. For SVM, we experiment with linear, polynomial, and radial basis function kernels. Table 10 gives the settings selected for the neutral-polar classification experiments for the different learning algorithms.

The results for these experiments are given in Table 11. For each algorithm, we give the results for the two baseline classifiers, followed by the results for the classifier trained using all the neutral-polar features. The results shown in bold are significantly better than both baselines for the given algorithm.

Working together, how well do the neutral-polar features perform? For BoosTexter, TiMBL, and Ripper, the classifiers trained using all the features improve significantly over the two baselines in terms of accuracy, polar recall, polar F-measure, and neutral precision. Neutral F-measure is also higher, but not significantly so. These consistent results across three of the four algorithms show that the neutral-polar features are helpful for determining when a sentiment clue is actually being used to express a sentiment.

Interestingly, Ripper is the only algorithm for which the word-token baseline performed better than the word+priorpol baseline. Nevertheless, the prior polarity feature is an important component in the performance of the Ripper classifier using all the features: excluding prior polarity from this classifier results in a significant decrease in performance for every metric, with decreases ranging from 2.5 for neutral recall to 9.5 for polar recall.

The best SVM classifier is the word+priorpol baseline. In terms of accuracy, this classifier does not perform much worse than the BoosTexter and TiMBL classifiers that use all the neutral-polar features: the SVM word+priorpol baseline classifier has an accuracy of 75.6%, and both the BoosTexter and TiMBL classifiers have an accuracy of 76.5%. However, the BoosTexter and TiMBL classifiers using all the features perform notably better in terms of polar recall and F-measure. The BoosTexter and TiMBL classifiers have polar recalls that are 7% and 9.2% higher than the SVM baseline, and polar F-measures for BoosTexter and TiMBL are 3.9 and 4.5 higher. These increases are significant at the p ≤ 0.01 level.

8.1.2 Feature Set Evaluation. To evaluate the contribution of the various features for neutral-polar classification, we perform a series of experiments in which different sets of neutral-polar features are added to the word+priorpol baseline and new classifiers are trained. We then compare the performance of these new classifiers to the word+priorpol baseline (with the exception of the Ripper classifiers, which we compare to the higher word baseline). Table 12 lists the sets of features tested in these experiments. The feature sets
generally correspond to how the neutralpolar features are presented in table 7 although some of the groups are broken down into more finegrained sets that we believe capture meaningful distinctionstable 13 gives the results for these experimentsincreases and decreases for a given metric as compared to the wordpriorpol baseline are indicated by or respectivelywhere changes are significant at the p 01 level or are used and where changes are significant at the p 005 level or are usedan nc indicates no change compared to the baselinewhat does table 13 reveal about the performance of various feature sets for neutral polar classificationmost noticeable is that no individual feature sets stand out as strong performersthe only significant improvements in accuracy come from the partsofspeech and reliabilityclass feature sets for ripperthese improvements are perhaps not surprising given that the ripper baseline was much lower to begin withvery few feature sets show any improvement for svmagain this is not unexpected given that all the features together performed worse than the wordpriorpol baseline increases and decreases for a given metric as compared to the wordpriorpol baseline are indicated by or respectively or indicates the change is significant at the p 01 level or indicates significance at the p 005 level nc indicates no change for svmthe performance of the feature sets for boostexter and timbl are perhaps the most revealingin the previous experiments using all the features together these algorithms produced classifiers with the same high performancein these experiments six different feature sets for each algorithm show improvements in accuracy over the baseline yet none of those improvements are significantthis suggests that achieving the highest performance for neutralpolar classification requires a wide variety of features working together in combinationwe further test this result by evaluating the effect of removing the features that produced either no change or a drop in accuracy from the respective allfeature classifiersfor example we train a timbl neutralpolar classifier using all the features except for those in the precededpos intensify structure cursentcounts and topic feature sets and then compare the performance of this new classifier to the timbl allfeature classifieralthough removing the nonperforming features has little effect for boostexter performance does drop for both timbl and ripperthe primary source of this performance drop is a decrease in polar recall 2 for timbl and 32 for ripperalthough no feature sets stand out in table 13 as far as giving an overall high performance there are some features that consistently improve performance across the different algorithmsthe reliability class of the clue instance improves accuracy over the baseline for all four algorithmsit is the only feature that does sothe relclassmod features give improvements for all metrics for boostexter ripper and timbl as well as improving polar fmeasure for svmthe partsofspeech features are also fairly consistent improving performance for all the algorithms except for svmthere are also a couple of feature sets that consistently do not improve performance for any of the algorithms the intensify and precededpos featuresfor the second step of recognizing contextual polarity we classify the polarity of all clue instances identified as polar in step onethe features for polarity classification were listed in table 9 and described in section 72we investigate the performance of the polarity features under two 
conditions perfect neutralpolar recognition and automatic neutralpolar recognitionfor condition 1 we identify the polar instances according to the goldstandard manual contextualpolarity annotationsin the test data 9835 instances of the clues from the lexicon are polar in context according to the manual annotationsexperiments under condition 1 classify these instances as having positive negative or both polarityfor condition 2 we take the best performing neutralpolar classifier for each algorithm and use the output from those algorithms to identify the polar instancesbecause polar instances now are being identified automatically there will be noise in the form of misclassified neutral instancestherefore for experiments under condition 2 we include the neutral class and perform fourway classification instead of threewaycondition 1 allows us to investigate the performance of the different polarity features without the noise of misclassified neutral instancesalso because the set of polar instances being classified is the same for all the algorithms condition 1 allows us to compare the performance of the polarity features across the different algorithmshowever condition 2 is the more natural oneit allows us to see how the noise of neutral instances affects the performance of the polarity featuresthe following sections describe three sets of experimentsfirst we investigate the performance of the polarity features used together for polarity classification under condition 1as before the word and wordpriorpol classifiers provide our baselinesin the second set of experiments we explore the performance of different sets of features for polarity classification again assuming perfect recognition of the polar instancesfinally we experiment with polarity classification using all the polarity features under condition 2 automatic recognition of the polar instancesas before we use the development data to select the parameter settings for each algorithmthe settings for polarity classification are given in table 14they were selected based on the performance of the wordpriorpol baseline classifier under condition 2821 classification results condition 1the results for polarity classification using all the polarity features assuming perfect neutralpolar recognition for step one are given in table 15for each algorithm we give the results for the two baseline classifiers followed by the results for the classifier trained using all the polarity featuresfor the metrics where the polarity features perform statistically better than both baselines the results are given in boldhow well do the polarity features perform working all togetherfor all algorithms the polarity classifier using all the features significantly outperforms both baselines in terms of accuracy positive fmeasure and negative fmeasurethese consistent improvements in performance across all four algorithms show that these features are quite useful for polarity classificationone interesting thing that table 15 reveals is that negative polarity words are much more straightforward to recognize than positive polarity words at least in this corpusfor the negative class precisions and recalls for the wordpriorpol baseline range from 822 to 872for the positive class precisions and recalls for the wordpriorpol baseline range from 637 to 767however it is with the positive class that polarity features seem to help the mostwith the addition of the polarity features positive fmeasure improves by 5 points on average improvements in negative fmeasures average only 275 
points822 feature set evaluationto evaluate the performance of the various features for polarity classification we again perform a series of ablation experimentsas before we start with the wordpriorpol baseline classifier add different sets of polarity features train new classifiers and compare the results of the new classifiers to the baselineincreases and decreases for a given metric as compared to the wordpriorpol baseline are indicated by or respectively or indicates the change is significant at the p 01 level or indicates significance at the p 005 leveltable 16 lists the sets of features tested in each experiment and table 17 shows the results of the experimentsresults are reported as they were previously in section 812 with increases and decreases compared to the baseline for a given metric indicated by or respectivelylooking at table 17 we see that all three sets of polarity features help to increase performance as measured by accuracy and positive and negative fmeasuresthis is true for all the classification algorithmsas we might expect including the negation features has the most marked effect on the performance of polarity classification with statistically significant improvements for most metrics across all the algorithms9 the polaritymodification features also seem to be important for polarity classification in particular for disambiguating the positive instancesfor all the algorithms except timbl including the polaritymodification features results in significant improvements for at least one of the positive metricsthe polarity shifters also help classification but they seem to be the weakest of the features including them does not result in significant improvements for any algorithmanother question that is interesting to consider is how much the word token feature contributes to polarity classification given all the other polarity featuresis it enough to know the prior polarity of a word whether it is being negated and how it is related to other polarity influencersto answer this question we train classifiers using all the polarity features except for word tokentable 18 gives the results for these classifiers for comparison the results for the allfeature polarity classifiers are also giveninterestingly excluding the word token feature produces only small changes in the overall resultsthe results for boostexter and ripper are slightly lower and the results for svm are practically unchangedtimbl actually shows a slight improvement with the exception of the both classthis provides further evidence of the strength of the polarity featuresalso a classifier not tied to actual word tokens may potentially be a more domainindependent classifier823 classification results condition 2the experiments in section 821 show that the polarity features perform well under the ideal condition of perfect recognition of polar instancesthe next question to consider is how well the polarity features perform under the more natural but lessthanperfect condition of automatic recognition of polar instancesto investigate this the polarity classifiers for each algorithm in these experiments start with the polar instances identified by the best performing neutralpolar classifier for that algorithm the results for these experiments are given in table 19as before statistically significant improvements over both baselines are given in boldhow well do the polarity features perform in the presence of noise from misclassified neutral instancesour first observation comes from comparing table 15 with table 19 
polarity classification results are much lower for all classifiers with the noise of neutral instancesyet in spite of this the polarity features still produce classifiers that outperform the baselinesfor three of the four algorithms the classifier using all the polarity features has the highest accuracyfor boostexter and timbl the improvements in accuracy over both baselines are significantalso for all algorithms using the polarity features gives the highest positive and negative fmeasuresbecause the set of polarity instances being classified by each algorithm is different we cannot directly compare the results from one algorithm to the nextalthough the twostep approach to recognizing contextual polarity allows us to focus our investigation on the performance of features for both neutralpolar classification and polarity classification the question remains how does the twostep approach compare to recognizing contextual polarity in a single classification stepthe results shown in table 20 help to answer this questionthe first row in table 20 for each algorithm shows the combined result for the two stages of classificationfor boostexter timbl and ripper this is the combination of results from using all the neutralpolar features for step one together with the results from using all of the polarity features for step two10 for svm this is the combination of results from the wordpriorpol baseline from step one together with results for using all the polarity features for step tworecall that the wordpriorpol classifier was the best neutralpolar classifier for svm the second rows for boostexter timbl and ripper show the results of a single classifier trained to recognize contextual polarity using all the neutralpolar and polarity features togetherfor svm the second row shows the results of classifying the contextual polarity using just the word token featurethis classifier outperformed all others for svmin the table the best result for each metric for each algorithm is highlighted in boldwhen comparing the twostep and onestep approaches contrary to our expectations we see that the onestep approach performs about as well or better than the twostep approach for recognizing contextual polarityfor svm the improvement in accuracy achieved by the twostep approach is significant but this is not true for the other algorithmsone fairly consistent difference between the two approaches is that the twostep approach scores slightly higher for neutral fmeasure and the onestep approach achieves higher fmeasures for the polarity classesthe difference in negative fmeasure is significant for boostexter timbl and ripperthe exception to this is svmfor svm the twostep approach achieves significantly higher positive and negative fmeasuresone last question we consider is how much the neutralpolar features contribute to the performance of the onestep classifiersthe third line in table 20 for boostexter timbl and ripper gives the results for a onestep classifier trained without the neutral polar featuresalthough the differences are not always large excluding the neutral polar features consistently degrades performance in terms of accuracy and positive negative and neutral fmeasuresthe drop in negative fmeasure is significant for all three algorithms the drop in neutral fmeasure is significant for boostexter and timbl and the drop in accuracy is significant for timbl and ripper the modest drop in performance that we see when excluding the neutralpolar features in the onestep approach seems to suggest that discriminating 
between neutral and polar instances is helpful but not necessarily crucialhowever consider figure 3in this figure we show the fmeasures for the positive negative and both classes for the boostexter polarity classifier that uses the goldstandard neutralpolar instances and for the boostexter onestep polarity classifier that uses all features plotting the same sets of results for the other three algorithms produces very similar figuresthe difference when the classifiers have to contend with the noise from neutral instances is dramaticalthough table 20 shows that there is room for improvement across all the contextual polarity classes figure 3 shows us that perhaps the best way to achieve these improvements is to improve the ability to discriminate the neutral class from the othersother researchers who have worked on classifying the contextual polarity of sentiment expressions are yi et al popescu and etzioni and suzuki takamura and okumura yi et al use a lexicon and manually developed patterns to classify contextual polaritytheir patterns are highquality yielding quite high precision over the set of expressions that they evaluatepopescu and etzioni use an unsupervised classification technique called relaxation labeling to recognize the contextual polarity of words that are at the heads of select opinion phrasesthey take an iterative approach using relaxation labeling first to determine the contextual polarities of the words then again to label the polarities of the words with respect to their targetsa third stage of relaxation labeling then is used to assign final polarities to the words taking into consideration the presence of other polarity terms and negationas we do popescu and etzioni use features that represent conjunctions and dependency relations between polarity wordssuzuki et al use a bootstrapping approach to classify the polarity of tuples of adjectives and their target nouns in japanese blogsincluded in the features that they use are the words that modify the adjectives and the word that the adjective modifiesthey consider the effect of a single negation term the japanese equivalent of notour work in recognizing contextual polarity differs from this research on expressionlevel sentiment analysis in several waysfirst the set of expressions they evaluate is limited either to those that target specific items of interest such as products and product features or to tuples of adjectives and nounsin contrast we seek to classify the contextual polarity of all instances of words from a large lexicon of subjectivity clues that appear in the corpusincluded in the lexicon are not only adjectives but nouns verbs adverbs and even modalsour work also differs from other research in the variety of features that we useas other researchers do we consider negation and the words that directly modify or are modified by the expression being classifiedhowever with negation we have features for both local and longerdistance types of negation and we take care to count negation terms only when they are actually being used to negate excluding for example negation terms when they are used in phrases that intensify we also include contextual features to capture the presence of other clue instances in the surrounding sentences and features that represent the reliability of clues from the lexiconfinally a unique aspect of the work presented in this article is the evaluation of different features for recognizing contextual polaritywe first presented the features explored in this research in wilson wiebe and 
hoffman but this work significantly extends that initial evaluationwe explore the performance of features across different learning algorithms and we evaluate not only features for discriminating between positive and negative polarity but features for determining when a word is or is not expressing a sentiment in the first place this is also the first work to evaluate the effect of neutral instances on the performance of features for discriminating between positive and negative contextual polarityrecognizing contextual polarity is just one facet of the research in automatic sentiment analysisresearch ranges from work on learning the prior polarity of words and phrases to characterizing the sentiment of documents such as recognizing inflammatory messages tracking sentiment over time in online discussions and classifying the sentiment of online messages customer feedback data or product and movie reviews identifying prior polarity is a different task than recognizing contextual polarity although the two tasks are complementarythe goal of identifying prior polarity is to automatically acquire the polarity of words or phrases for listing in a lexiconour work on recognizing contextual polarity begins with a lexicon of words with established prior polarities and then disambiguates in the corpus the polarity being expressed by the phrases in which instances of those words appearto make the relationship between that task and ours clearer some word lists that are used to evaluate methods for recognizing prior polarity are included in the priorpolarity lexicon used in our experimentsfor the most part the features explored in this work differ from the ones used to identify prior polarity with just a few exceptionsusing a feature to capture conjunctions between clue instances was motivated in part by the work of hatzivassiloglou and mckeown they use constraints on the cooccurrence in conjunctions of words with similar or opposite polarity to predict the prior polarity of adjectivesesuli and sebastiani consider negation in some of their experiments involving wordnet glossestakamura et al use negation words and phrases including phrases such as lack of that are members in our lists of polarity shifters and conjunctive expressions that they collect from corporaesuli and sebastiani is the only work in priorpolarity identification to include a neutral category and to consider a threeway classification between positive negative and neutral wordsalthough identifying prior polarity is a different task they report a finding similar to ours namely that accuracy is lower when neutral words are includedsome research in sentiment analysis classifies the sentiments of sentencesmorinaga et al yu and hatzivassiloglou kim and hovy hu and liu and grefenstette et al11 all begin by first creating priorpolarity lexiconsyu and hatzivassiloglou then assign a sentiment to a sentence by averaging the prior semantic orientations of instances of lexicon words in the sentencethus they do not identify the contextual polarity of individual phrases containing clue instances which is the focus of this workmorinaga et al only consider the positive or negative clue instance in each sentence that is closest to some target reference kim and hovy hu and liu and grefenstette et al multiply or count the prior polarities of clue instances in the sentencethese researchers also consider local negation to reverse polarity with morinaga et al also taking into account the negating effect of words like insufficienthowever they do not use the 
other types of features that we consider in our experimentskaji and kitsuregawa take a different approach to recognizing positive and negative sentencesthey bootstrap from information easily obtained in pro and con html tables and lists and from one highprecision linguistic pattern to automatically construct a large corpus of positive and negative sentencesthey then use this corpus to train a naive bayes sentence classifierin contrast to our work sentiment classification in all of this research is restricted to identifying only positive and negative sentences in addition only one sentiment is assigned per sentence our system assigns contextual polarity to individual expressions which would allow for a sentence to be assigned to multiple sentiment categoriesas we saw when exploring the contextual polarity annotations it is not uncommon for sentences to contain more than one sentiment expressionclassifying the sentiment of documents is a very different task than recognizing the contextual polarity of words and phraseshowever some researchers have reported findings about documentlevel classification that are similar to our findings about phraselevel classificationbai et al argue that dependencies among key sentiment terms are important for classifying document sentimentsimilarly we show that features for capturing when clue instances modify each other are important for phraselevel classification in particular for identifying positive expressionsgamon achieves his best results for document classification using a wide variety of features including rich linguistic features such as features that capture constituent structure features that combine partofspeech and semantic relations and features that capture tense informationwe also achieve our best results for phraselevel classification using a wide variety of features many of which are linguistically richkennedy and inkpen report consistently higher results for document sentiment classification when select polarity influencers including negators and intensifiers are included12 koppel and schler demonstrate the importance of neutral examples for documentlevel classificationin this work we show that being able to correctly identify neutral instances is also very important for phraselevel sentiment analysisbeing able to determine automatically the contextual polarity of words and phrases is an important problem in sentiment analysisin the research presented in this article we tackle this problem and show that it is much more complex than simply determining whether a word or phrase is positive or negativein our analysis of a corpus with annotations of subjective expressions and their contextual polarity we find that positive and negative words from a lexicon are used in neutral contexts much more often than they are used in expressions of the opposite polaritythe importance of identifying when contextual polarity is neutral is further revealed in our classification experiments when neutral instances are excluded the performance of features for distinguishing between positive and negative polarity greatly improvesa focus of this research is on understanding which features are important for recognizing contextual polaritywe experiment with a wide variety of linguistically motivated features and we evaluate the performance of these features using several different machine learning algorithmsfeatures for distinguishing between neutral and polar instances are evaluated as well as features for distinguishing between positive and negative contextual 
polarityfor classifying neutral and polar instances we find that although some features produce significant improvements over the baseline in terms of polar or neutral recall or precision it is the combination of features together that is needed to achieve significant improvements in accuracyfor classifying positive and negative contextual polarity features for capturing negation prove to be the most importanthowever we find that features that also perform well are those that capture when a word is modifying or being modified by other polarity termsthis suggests that identifying features that represent more complex interdependencies between polarity clues will be an important avenue for future researchanother direction for future work will be to expand our lexicon using existing techniques for acquiring the prior polarity of words and phrasesit follows that a larger lexicon will have a greater coverage of sentiment expressionshowever expanding the lexicon with automatically acquired priorpolarity tags may result in an even greater proportion of neutral instances to contend withgiven the degradation in performance created by the neutral instances whether expanding the lexicon automatically will result in improved performance for recognizing contextual polarity is an empirical questionfinally the overall goal of our research is to use phraselevel sentiment analysis in higherlevel nlp tasks such as opinion question answering and summarizationwe would like to thank the anonymous reviewers for their valuable comments and suggestionsthis work was supported in part by an andrew mellow predoctoral fellowship by the nsf under grant iis0208798 by the advanced research and development activity and by the european ist programme through the amida integrated project fp60033812
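to make the two-step formulation above concrete, the sketch below first separates neutral from polar instances and then assigns positive or negative polarity only to the instances judged polar. it is a minimal illustration, not the authors' system: the tiny prior-polarity lexicon, the handful of features (word, prior polarity, local negation, adjacency to another clue), the toy training examples, and the logistic-regression classifiers are all hypothetical stand-ins for the large subjectivity lexicon, the richer feature set, and the learners (such as boostexter) used in the article.

```python
# Hedged sketch of a two-step contextual-polarity pipeline in the spirit of the
# approach described above (neutral vs. polar, then positive vs. negative).
# Lexicon, features, data, and classifiers are illustrative stand-ins only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical prior-polarity lexicon: word -> prior polarity.
PRIOR = {"brilliant": "positive", "hate": "negative", "trust": "positive"}
NEGATORS = {"not", "never", "no"}

def features(tokens, i):
    """Features for the clue instance at position i (greatly simplified)."""
    word = tokens[i].lower()
    window = {t.lower() for t in tokens[max(0, i - 4):i]}
    return {
        "word": word,
        "prior": PRIOR.get(word, "neutral"),
        "negated": bool(window & NEGATORS),  # local negation only
        # crude stand-in for the modification features discussed above
        "next_word_is_clue": i + 1 < len(tokens) and tokens[i + 1].lower() in PRIOR,
    }

# Toy training data: (tokens, index of the clue instance, gold contextual polarity).
train = [
    ("I do not trust him".split(), 3, "negative"),
    ("A brilliant performance".split(), 1, "positive"),
    ("They hate the new policy".split(), 1, "negative"),
    ("Trust funds were audited".split(), 0, "neutral"),
]
X = [features(toks, i) for toks, i, _ in train]
y = [label for _, _, label in train]

# Step 1: neutral vs. polar.
step1 = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
step1.fit(X, ["neutral" if l == "neutral" else "polar" for l in y])

# Step 2: positive vs. negative, trained only on the polar instances.
polar = [(x, l) for x, l in zip(X, y) if l != "neutral"]
step2 = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
step2.fit([x for x, _ in polar], [l for _, l in polar])

def classify(tokens, i):
    x = features(tokens, i)
    if step1.predict([x])[0] == "neutral":
        return "neutral"
    return step2.predict([x])[0]

print(classify("I do not trust this report".split(), 3))
```

a one-step variant can be obtained by training a single three-way classifier on the same features, which is the contrast drawn in the experiments above.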
J09-3003
articles recognizing contextual polarity an exploration of features for phraselevel sentiment analysismany approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity however the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the words prior polaritypositive words are used in phrases expressing negative sentiments or vice versaalso quite often words that are positive or negative out of context are neutral in context meaning they are not even being used to express a sentimentthe goal of this work is to automatically distinguish between prior and contextual polarity with a focus on understanding which features are important for this taskbecause an important aspect of the problem is identifying when polar terms are being used in neutral contexts features for distinguishing between neutral and polar instances are evaluated as well as features for distinguishing between positive and negative contextual polaritythe evaluation includes assessing the performance of features across multiple machine learning algorithmsfor all learning algorithms except one the combination of all features together gives the best performanceanother facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polaritythese experiments show that the presence of neutral instances greatly degrades the performance of these features and that perhaps the best way to improve performance across all polarity classes is to improve the systems ability to identify when an instance is neutralwe explore the difference between prior and contextual polarity words that lose polarity in context or whose polarity is reversed because of context
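the degradation caused by neutral instances can be illustrated with a deliberately artificial experiment: train the same classifier with and without a neutral class mixed into the data and compare the f-measure over the positive and negative labels. the one-dimensional synthetic data, the single feature, and the logistic-regression classifier below are fabricated for illustration and bear no relation to the article's corpus, features, or learners; the sketch only mimics the shape of the reported finding.

```python
# Hedged sketch: how mixing in a neutral class blurs positive/negative
# discrimination. All data here is synthetic and purely illustrative.
import random
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

random.seed(0)

def toy_instance(label):
    # One informative feature plus noise; neutral instances sit in between.
    center = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}[label]
    return [center + random.gauss(0, 0.8)], label

def run(labels):
    data = [toy_instance(random.choice(labels)) for _ in range(2000)]
    X, y = [x for x, _ in data], [l for _, l in data]
    clf = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
    pred = clf.predict(X[1500:])
    # Macro F1 computed over the positive and negative labels only.
    return f1_score(y[1500:], pred, labels=["positive", "negative"],
                    average="macro")

print("polar only  :", round(run(["positive", "negative"]), 3))
print("with neutral:", round(run(["positive", "negative", "neutral"]), 3))
```

on most runs of this toy setup the score in the second condition is noticeably lower, mirroring the pattern reported above.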
generating phrasal and sentential paraphrases a survey of datadriven methods the task of paraphrasing is inherently familiar to speakers of all languages moreover the task of automatically generating or extracting semantic equivalences for the various units of language words phrases and sentencesis an important part of natural language processing and is being increasingly employed to improve the performance of several nlp applications in this article we attempt to conduct a comprehensive and applicationindependent survey of datadriven phrasal and sentential paraphrase generation methods while also conveying an appreciation for the importance and potential use of paraphrases in the field of nlp research recent work done in manual and automatic construction of paraphrase corpora is also examined we also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation the task of paraphrasing is inherently familiar to speakers of all languagesmoreover the task of automatically generating or extracting semantic equivalences for the various units of language words phrases and sentencesis an important part of natural language processing and is being increasingly employed to improve the performance of several nlp applicationsin this article we attempt to conduct a comprehensive and applicationindependent survey of datadriven phrasal and sentential paraphrase generation methods while also conveying an appreciation for the importance and potential use of paraphrases in the field of nlp researchrecent work done in manual and automatic construction of paraphrase corpora is also examinedwe also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generationalthough everyone may be familiar with the notion of paraphrase in its most fundamental sense there is still room for elaboration on how paraphrases may be automatically generated or elicited for use in language processing applicationsin this survey we make an attempt at such an elaborationan important outcome of this survey is the discovery that there are a large variety of paraphrase generation methods each with widely differing sets of characteristics in terms of performance as well as ease of deploymentwe also find that although many paraphrase methods are developed with a particular application in mind all methods share the potential for more general applicabilityfinally we observe that the choice of the most appropriate method for an application depends on proper matching of the characteristics of the produced paraphrases with an appropriate methodit could be argued that it is premature to survey an area of research that has shown promise but has not yet been tested for a long enough period however we believe this argument actually strengthens the motivation for a survey that can encourage the community to use paraphrases by providing an applicationindependent cohesive and condensed discussion of datadriven paraphrase generation techniqueswe should also acknowledge related work that has been done on furthering the communitys understanding of paraphraseshirst presents a comprehensive survey of paraphrasing focused on a deep analysis of the nature of a paraphrasethe current survey focuses instead on delineating the salient characteristics of the various paraphrase generation methods with an emphasis on describing how they could be used in several different nlp applicationsboth these treatments 
provide different but valuable perspectives on paraphrasingthe remainder of this section formalizes the concept of a paraphrase scopes out the coverage of this surveys discussion and provides broader context and motivation by discussing applications in which paraphrase generation has proven useful along with examplessection 2 briefly describes the tasks of paraphrase recognition and textual entailment and their relationship to paraphrase generation and extractionsection 3 forms the major contribution of this survey by examining various corporabased techniques for paraphrase generation organized by corpus typesection 4 examines recent work done to construct various types of paraphrase corpora and to elicit human judgments for such corporasection 5 considers the task of evaluating the performance of paraphrase generation and extraction techniquesfinally section 6 provides a brief glimpse of the future trends in paraphrase generation and section 7 concludes the survey with a summarythe concept of paraphrasing is most generally defined on the basis of the principle of semantic equivalence a paraphrase is an alternative surface form in the same language expressing the same semantic content as the original formparaphrases may occur at several levelsindividual lexical items having the same meaning are usually referred to as lexical paraphrases or more commonly synonyms for example and however lexical paraphrasing cannot be restricted strictly to the concept of synonymythere are several other forms such as hyperonymy where one of the words in the paraphrastic relationship is either more general or more specific than the other for example and the term phrasal paraphrase refers to phrasal fragments sharing the same semantic contentalthough these fragments most commonly take the form of syntactic phrases and they may also be patterns with linked variables for example two sentences that represent the same semantic content are termed sentential paraphrases for example although it is possible to generate very simple sentential paraphrases by simply substituting words and phrases in the original sentence with their respective semantic equivalents it is significantly more difficult to generate more interesting ones such as culicover describes some common forms of sentential paraphrasesthe idea of paraphrasing has been explored in conjunction with and employed in a large number of natural language processing applicationsgiven the difficulty inherent in surveying such a diverse task an unfortunate but necessary remedy is to impose certain limits on the scope of our discussionin this survey we will be restricting our discussion to only automatic acquisition of phrasal paraphrases and on generation of sentential paraphrasesmore specifically this entails the exclusion of certain categories of paraphrasing workhowever as a compromise for the interested reader we do include a relatively comprehensive list of references in this section for the work that is excluded from the surveyfor one we do not discuss paraphrasing techniques that rely primarily on knowledgebased resources such as dictionaries handwritten rules and formal grammars we also refrain from discussing work on purely lexical paraphrasing which usually comprises various ways to cluster words occurring in similar contexts 1 exclusion of general lexical paraphrasing methods obviously implies that other lexical methods developed just for specific applications are also excluded methods at the other end of the spectrum that paraphrase suprasentential units 
such as paragraphs and entire documents are also omitted from discussion finally we also do not discuss the notion of nearsynonymy before describing the techniques used for paraphrasing it is essential to examine the broader context of the application of paraphrasesfor some of the applications we discuss subsequently the use of paraphrases in the manner described may not yet be the normhowever wherever applicable we cite recent research that promises gains in performance by using paraphrases for these applicationsalso note that we only discuss those paraphrasing techniques that can generate the types of paraphrases under examination in this survey phrasal and sentential131 query and pattern expansionone of the most common applications of paraphrasing is the automatic generation of query variants for submission to information retrieval systems or of patterns for submission to information extraction systemsculicover describes one of the earliest theoretical frameworks for query keyword expansion using paraphrasesone of the earliest works to implement this approach generates several simple variants for compound nouns in queries submitted to a technical information retrieval systemfor example original circuit details variant 1 details about the circuit variant 2 the details of circuits 1 inferring words to be similar based on similar contexts can be thought of as the most common instance of employing distributional similaritythe concept of distributional similarity also turns out to be quite important for phrasal paraphrase generation and is discussed in more detail in section 31 these techniques is usually effected by utilizing the query log to determine semantic similarityjacquemin generates morphological syntactic and semantic variants for phrases in the agricultural domainfor example original simultaneous measurements variant concurrent measures original development area variant area of growth ravichandran and hovy use semisupervised learning to induce several paraphrastic patterns for each question type and use them in an opendomain question answering systemfor example for the inventor question type they generate riezler et al expand a query by generating nbest paraphrases for the query and then using any new words introduced therein as additional query termsfor example for the query how to live with cat allergies they may generate the following two paraphrasesthe novel words in the two paraphrases are highlighted in bold and are used to expand the original query finally paraphrases have also been used to improve the task of relation extraction most recently bhagat and ravichandran collect paraphrastic patterns for relation extraction by applying semisupervised paraphrase induction to a very large monolingual corpusfor example for the relation of acquisition they collect task for a given set of data and using the output so created as a reference against which to measure the performance of the systemthe two applications where comparison against humanauthored reference output has become the norm are machine translation and document summarizationin machine translation evaluation the translation hypotheses output by a machine translation system are evaluated against reference translations created by human translators by measuring the ngram overlap between the two however it is impossible for a single reference translation to capture all possible verbalizations that can convey the same semantic contentthis may unfairly penalize translation hypotheses that have the same meaning but use ngrams 
that are not present in the referencefor example the given system output s will not have a high score against the reference r even though it conveys precisely the same semantic content s we must consider the entire communityr we must bear in mind the community as a wholeone solution is to use multiple reference translations which is expensivean alternative solution tried in a number of recent approaches is to address this issue by allowing the evaluation process to take into account paraphrases of phrases in the reference translation so as to award credit to parts of the translation hypothesis that are semantically even if not lexically correct in evaluation of document summarization automatically generated summaries are also evaluated against reference summaries created by human authors zhou et al propose a new metric called paraeval that leverages an automatically extracted database of phrasal paraphrases to inform the computation of ngram overlap between peer summaries and multiple model summaries133 machine translationbesides being used in evaluation of machine translation systems paraphrasing has also been applied to directly improve the translation processcallisonburch koehn and osborne use automatically induced paraphrases to improve a statistical phrasebased machine translation systemsuch a system works by dividing the given sentence into phrases and translating each phrase individually by looking up its translation in a tablethe coverage of the translation system is improved by allowing any source phrase that does not have a translation in the table to use the translation of one of its paraphrasesfor example if a given spanish sentence contains the phrase presidente de brazil but the system does not have a translation for it another spanish phrase such as presidente brasileno may be automatically detected as a paraphrase of presidente de brazil then if the translation table contains a translation for the paraphrase the system can use the same translation for the given phrasetherefore paraphrasing allows the translation system to properly handle phrases that it does not otherwise know how to translateanother important issue for statistical machine translation systems is that of reference sparsitythe fundamental problem that translation systems have to face is that there is no such thing as the correct translation for any sentencein fact any given source sentence can often be translated into the target language in many valid waysbecause there can be many correct answers almost all models employed by smt systems require in addition to a large bitext a heldout development set comprising multiple highquality humanauthored reference translations in the target language in order to tune their parameters relative to a translation quality metrichowever given the time and cost implications of such a process most such data sets usually have only a single reference translationmadnani et al generate sentential paraphrases and use them to expand the available reference translations for such sets so that the machine translation system can learn a better set of system parametersa problem closely related to and as important as generating paraphrases is that of assigning a quantitative measurement to the semantic similarity of two phrases or even two given pieces of text a more complex formulation of the task would be to detect or recognize which sentences in the two texts are paraphrases of each other both of these task formulations fall under the category of paraphrase detection or recognitionthe 
latter formulation of the task has become popular in recent years and paraphrase generation techniques that require monolingual parallel or comparable corpora can benefit immensely from this taskin general paraphrase recognition can be very helpful for several nlp applicationstwo examples of such applications are texttotext generation and information extractiontexttotext generation applications rely heavily on paraphrase recognitionfor a multidocument summarization system detecting redundancy is a very important concern because two sentences from different documents may convey the same semantic content and it is important not to repeat the same information in the summaryon this note barzilay and mckeown exploit the redundancy present in a given set of sentences by detecting paraphrastic parts and fusing them into a single coherent sentencerecognizing similar semantic content is also critical for text simplification systems information extraction enables the detection of regularities of information structureevents which are reported many times about different individuals and in different formsand making them explicit so that they can be processed and used in other wayssekine shows how to use paraphrase recognition to cluster together extraction patterns to improve the cohesion of the extracted informationanother recently proposed natural language processing task is that of recognizing textual entailment a piece of text t is said to entail a hypothesis h if humans reading t will infer that h is most likely truethe observant reader will notice that in this sense the task of paraphrase recognition can simply be formulated as bidirectional entailment recognitionthe task of recognizing entailment is an applicationindependent task and has important ramifications for almost all other language processing tasks that can derive benefit from some form of applied semantic inferencefor this reason the task has received noticeable attention in the research community and annual communitywide evaluations of entailment systems have been held in the form of the recognizing textual entailment challenges looking towards the future dagan suggests that the textual entailment task provides a comprehensive framework for semantic inference and argues for building a concrete inference engine that not only recognizes entailment but also searches for all entailing texts given an entailment hypothesis h and conversely generates all entailed statements given a text t given such an engine dagan claims that paraphrase generation is simply a matter of generating all entailed statements given any sentencealthough this is a very attractive proposition that defines both paraphrase generation and recognition in terms of textual entailment there are some important caveatsfor example textual entailment cannot guarantee that the entailed hypothesis h captures all of the same meaning as the given text t consider the following example although both h1 and h2 are entailed by t they are not strictly paraphrases of t because some of the semantic content has not been carried overthis must be an important consideration when building the proposed entailment engineof course even these approximately semantically equivalent constructions may prove useful in an appropriate downstream applicationthe relationship between paraphrasing and entailment is more tightly entwined than it might appearentailment recognition systems sometimes rely on the use of paraphrastic templates or patterns as inputs and might even use paraphrase recognition to 
improve their performance in fact examination of some rte data sets in an attempt to quantitatively determine the presence of paraphrases has shown that a large percentage of the set consists of paraphrases rather than typical entailments it has also been observed that in the entailment challenges it is relatively easy for submitted systems to recognize constructions that partially overlap in meaning from those that are actually bound by an entailment relationon the flip side work has also been done to extend entailment recognition techniques for the purpose of paraphrase recognition detection of semantic similarity and to some extent that of bidirectional entailment is usually an implicit part of paraphrase generationhowever given the interesting and diverse work that has been done in both these areas we feel that any significant discussion beyond the treatment above merits a separate detailed surveyin this section we explore in detail the datadriven paraphrase generation approaches that have emerged and have become extremely popular in the last decade or sothese corpusbased methods have the potential of covering a much wider range of paraphrasing phenomena and the advantage of widespread availability of corporawe organize this section by the type of corpora used to generate the paraphrases a single monolingual corpus monolingual comparable corpora monolingual parallel corpora and bilingual parallel corporathis form of organization in our opinion is the most instructive because most of the algorithmic decisions made for paraphrase generation will depend heavily on the type of corpus usedfor instance it is reasonable to assume that a different set of considerations will be paramount when using a large single monolingual corpus than when using bilingual parallel corporahowever before delving into the actual paraphrasing methods we believe that it would be very useful to explain the motivation behind distributional similarity an extremely popular technique that can be used for paraphrase generation with several different types of corporawe devote the following section to such an explanationthe idea that a language possesses distributional structure was first discussed at length by harris the term represents the notion that one can describe a language in terms of relationships between the occurrences of its elements relative to the occurrence of other elementsthe name for the phenomenon is derived from an elements distributionsets of elements in particular positions that the element occurs with to produce an utterance or a sentencemore specifically harris presents several empirical observations to support the hypothesis that such a structure exists naturally for languagehere we closely quote these observations given these observations it is relatively easy to characterize the concept of distributional similarity words or phrases that share the same distributionthe same set of words in the same context in a corpustend to have similar meaningsfigure 1 shows the basic idea behind phrasal paraphrase generation techniques that leverage distributional similaritythe input corpus is usually a single or set of monolingual corpora after preprocessingwhich may include tagging the parts of speech generating parse trees and other transformationsthe next step is to extract pairs of words or phrases that occur in the same context in the corpora and hence may be considered semantically equivalentthis extraction may be accomplished by several means although it is possible to stop at this point and consider 
this list as the final output the list usually contains a lot of noise and may require additional filtering based on other criteria such as collocations counts from another corpus finally some techniques may go even further and attempt to generalize the filtered list of paraphrase pairs into templates or rules which may then be applied to other sentences to generate their paraphrasesnote that generalization as a postprocessing step may not be necessary if the induction process can extract distributionally similar patterns directlyone potential disadvantage of relying on distributional similarity is that items that are distributionally similar may not necessarily end up being paraphrastic both a general architecture for paraphrasing approaches leveraging the distributional similarity hypothesis elements of the pairs can occur in similar contexts but are not semantically equivalentin this section we concentrate on paraphrase generation methods that operate on a single monolingual corpusmost if not all such methods usually perform paraphrase induction by employing the idea of distributional similarity as outlined in the previous sectionbesides the obvious caveat discussed previously regarding distributional similarity we find that the other most important factor affecting the performance of these methods is the choice of distributional ingredientsthat is the features used to formulate the distribution of the extracted unitswe consider three commonly used techniques that generate phrasal paraphrases from a single monolingual corpus but use very different distributional features in terms of complexitythe first uses only surfacelevel features and the other two use features derived from additional semantic knowledgealthough the latter two methods are able to generate more sophisticated paraphrases by virtue of more specific and more informative ingredients we find that doing so usually has an adverse effect on their coveragepasca and dienes use as their input corpus a very large collection of web documents taken from the repository of documents crawled by googlealthough using web documents as input data does require a nontrivial preprocessing phase since such documents tend to be noisier there are certainly advantages to the use of web documents as the input corpus it does not require parallel documents and can allow leveraging of even larger document collectionsin addition the extracted paraphrases are not tied to any specific domain and are suitable for general applicationalgorithm 1 shows the details of the induction processsteps 36 extract all ngrams of a specific kind from each sentence each ngram has lc words at the beginning between m1 to m2 words in the middle and another lc words at the endsteps 713 can intuitively be interpreted as constructing a textual anchor aby concatenating a fixed number of words from the left and the rightfor each candidate paraphrase c and storing the tuple in h these anchors are taken to constitute the distribution of the words and phrases under inspectionfinally each occurrence of a pair of potential paraphrases that is a pair sharing one or more anchors is countedthe final set of phrasal paraphrastic pairs is returnedthis algorithm embodies the spirit of the hypothesis of distributional similarity it considers all words and phrases that are distributionally similarthose that occur with the same sets of anchors to be paraphrases of each otheradditionally the larger the set of shared anchors for two candidate phrases the stronger the likelihood that they are 
paraphrases of each other. after extracting the list of paraphrases, less likely phrasal paraphrases are filtered out by using an appropriate count threshold. pasca and dienes attempt to make their anchors even more informative by attempting variants where they extract the ngrams only from sentences that include specific additional information to be added to the anchor. for example, in one variant they only use sentences where the candidate phrase is surrounded by named entities on both sides, and they attach the nearest pair of entities to the anchor. as expected, the paraphrases do improve in quality as the anchors become more specific. however, they also report that as anchors are made more specific by attaching additional information, the likelihood of finding a candidate pair with the same anchor is reduced. the ingredients for measuring distributional similarity in a single corpus can certainly be more complex than the simple phrases used by pasca and dienes. lin and pantel discuss how to measure distributional similarity over dependency tree paths in order to induce generalized paraphrase templates such as the pattern x finds answer to y discussed below. whereas single links between nodes in a dependency tree represent direct semantic relationships, a sequence of links, or a path, can be understood to represent an indirect relationship. here a path is named by concatenating the dependency relationships and lexical items along the way but excluding the lexical items at the ends. in this way a path can actually be thought of as a pattern with variables at either end. consider the first dependency tree in figure 2. one dependency path that we could extract would be between the node john and the node problem. we start at john and see that the first item in the tree is the dependency relation subject that connects a noun to a verb, and so we append that information to the path. the next item in the tree is the word found, and we append its lemma to the path. next is the semantic relation object connecting a verb to a noun, and we append that. the process continues until we reach the other slot, at which point we stop. the extracted path is shown below the tree. similarly, we can extract a path for the second dependency tree. let us briefly mention the terminology associated with such paths. intuitively, one can imagine a path to be a complex representation of the pattern x finds answer to y, where x and y are variables. this representation for a path is a perfect fit for an extended version of the distributional similarity hypothesis: if similar sets of words fill the same variables for two different patterns, then the patterns may be considered to have similar meaning, which is indeed the case for the paths in figure 2 (the figure also shows the implied meaning of each dependency path). lin and pantel use newspaper text as their input corpus and create dependency parses for all the sentences in the corpus in the preprocessing step. algorithm 2 provides the details of the rest of the process: steps 1 and 2 extract the paths and compute their distributional properties, and steps 3-14 extract pairs of paths, that is, two different dependency tree paths that are considered paraphrastic because the same words are used to fill the corresponding slots in both the paths.
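before moving on, a compact sketch of the slot-filler bookkeeping behind this kind of path-based induction may be useful. the paths and filler words below are toy examples, and plain jaccard overlap between filler sets stands in for the mutual-information-weighted similarity measure that lin and pantel actually use; only the overall shape of the computation is faithful.

```python
# Hedged sketch of slot-filler bookkeeping for dependency-path similarity.
# Paths and fillers are toy data; Jaccard overlap replaces the real measure.
from collections import defaultdict
from itertools import combinations

# path -> {"X": words seen in SlotX, "Y": words seen in SlotY}
slot_fillers = defaultdict(lambda: {"X": set(), "Y": set()})

# Triples as they might be harvested from dependency parses (toy data).
occurrences = [
    ("X finds a solution to Y", "john", "problem"),
    ("X finds a solution to Y", "committee", "crisis"),
    ("X solves Y", "john", "problem"),
    ("X solves Y", "committee", "crisis"),
    ("X causes Y", "virus", "outbreak"),
]
for path, x, y in occurrences:
    slot_fillers[path]["X"].add(x)
    slot_fillers[path]["Y"].add(y)

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def path_similarity(p1, p2):
    # Both slots must be filled by similar word sets for the paths to count
    # as distributionally similar.
    sx = jaccard(slot_fillers[p1]["X"], slot_fillers[p2]["X"])
    sy = jaccard(slot_fillers[p1]["Y"], slot_fillers[p2]["Y"])
    return min(sx, sy)

THRESHOLD = 0.5  # arbitrary cutoff for this sketch
for p1, p2 in combinations(slot_fillers, 2):
    if path_similarity(p1, p2) >= THRESHOLD:
        print(f"paraphrastic pair: {p1!r} <-> {p2!r}")
```

lin and pantel's actual measure additionally weights each filler word by how strongly it is associated with the slot it fills, but the intuition is unchanged: two paths are judged paraphrastic when the distributions of their slot fillers are similar insofar as such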
properties are concerned5 at the end we have sets of paths that are considered to have similar meanings by the algorithmthe performance of their dependencypath based algorithm depends heavily on the root of the extracted pathfor example whereas verbs frequently tend to have several modifiers nouns tend to have no more than onehowever if a word has any fewer than two modifiers no path can go through it as the roottherefore the algorithm tends to perform better for paths with verbal rootsanother issue is that this algorithm despite the use of more informative distributional features can generate several incorrect or implausible paraphrase patterns recent work has shown how to filter out incorrect inferences when using them in a downstream application finally there is no reason for the distributional features to be in the same language as the one in which the paraphrases are desiredwu and zhou describe a bilingual approach to extract english relationbased paraphrastic patterns of the form where w1 and w2 are english words connected by a dependency link with the semantic relation r figure 3 shows a simple example based on their approachfirst instances of one type of pattern are extracted from a parsed monolingual corpusin the figure for example a single instance of the pattern has been extractedseveral new potentially paraphrastic english candidate patterns are then induced by replacing each of the english words with its synonyms in wordnet one at a timethe figure shows the list of induced patterns for the given examplenext each of the english words in each candidate pattern is translated to chinese via a bilingual dictionary6 using chinese translations as the distributional elements to extract a set of english paraphrastic patterns from a large english corpusgiven that the bilingual dictionary may contain multiple chinese translations for a given english word several chinese patterns may be created for each english candidate patterneach chinese pattern is assigned a probability value via a simple bagofwords translation model and a language model all translated patterns along with their probability values are then considered to be features of the particular english candidate patternany english pattern can subsequently be compared to another by computing cosine similarity over their shared featuresenglish collocation pairs whose similarity value exceeds some threshold are construed to be paraphrasticthe theme of a tradeoff between the precision of the generated paraphrase setby virtue of the increased informativeness of the distributional featuresand its coverage is seen in this work as wellwhen using translations from the bilingual dictionary a knowledgerich resource the authors report significantly higher precision than comparable methods that rely only on monolingual information to compute the distributional similaritypredictably they also find that recall values obtained with their dictionarybased method are lower than those obtained by other methodsparaphrase generation techniques using a single monolingual corpus have to rely on some form of distributional similarity because there are no explicit clues available that indicate semantic equivalencein the next section we look at paraphrasing methods operating over data that does contain such explicit cluesit is also possible to generate paraphrastic phrase pairs from a parallel corpus where each component of the corpus is in the same languageobviously the biggest advantage of parallel corpora is that the sentence pairs are paraphrases almost 
by definition: they represent different renderings of the same meaning created by different translators making different lexical choices. in effect, they contain pairs of sentences that are either semantically equivalent or have significant semantic overlap. extraction of phrasal paraphrases can then be effected by extracting phrasal correspondences from a set of sentences that represent the same semantic content. we present four techniques in this section that generate paraphrases by finding such correspondences. the first two techniques attempt to do so by relying again on the paradigm of distributional similarity, one by positing a bootstrapping distributional similarity algorithm and the other by simply adapting the previously described dependency path similarity algorithm to work with a parallel corpus. the next two techniques rely on more direct, nondistributional methods to compute the required correspondences. barzilay and mckeown align phrasal correspondences by attempting to move beyond a singlepass distributional similarity method. they propose a bootstrapping algorithm that allows for the gradual refinement of the features used to determine similarity and yields improved paraphrase pairs. as their input corpus they use multiple humanwritten english translations of literary texts such as madame bovary and twenty thousand leagues under the sea that are expected to be rich in paraphrastic expressions, because different translators would use their own words while still preserving the meaning of the original text. the parallel components are obtained by performing sentence alignment on the corpora to obtain sets of parallel sentences that are then lemmatized, partofspeech tagged, and chunked in order to identify all the verb and noun phrases. the bootstrapping algorithm is then employed to incrementally learn better and better contextual features that are then leveraged to generate semantically similar phrasal correspondences. figure 4 shows the basic steps of the bootstrapping algorithm for extracting phrasal paraphrase pairs from monolingual parallel corpora. to seed the algorithm, some fake paraphrase examples are extracted by using identical words from either side of the aligned sentence pair. for example, given the following sentence pair: s1 emma burst into tears and he tried to comfort her. s2 emma cried and he tried to console her. identical word pairs such as (tried, tried) may be extracted as positive examples, and certain other word pairings may be extracted as negative examples. once the seeding examples are extracted, the next step is to extract contextual features for both the positive and the negative examples. these features take the form of aligned partofspeech sequences of a given length from the left and the right of the example. for instance, we can extract the contextual feature of length 1 for the positive example (tried, tried). this particular contextual feature contains two tuples, one for each sentence. the first tuple indicates that in the first sentence the pos tag sequence to the left of the word tried is a personal pronoun and the pos tag sequence to the right of tried is the preposition to. the second tuple is identical for this case. note that the tags of identical tokens are indicated as such by subscripts on the pos tags. all such features are extracted for both the positive and the negative examples for all lengths less than or equal to some specified length. in addition, a strength value is calculated for each positive contextual feature f using maximum likelihood estimation as follows: strength(f) = (number of positive examples surrounded by f) / (total occurrences of f).
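the strength computation just defined is simple enough to sketch directly. in the fragment below the part-of-speech context features and their counts are invented for illustration; in the actual algorithm they are accumulated from aligned, tagged, and chunked sentence pairs at every context length up to the stipulated maximum.

```python
# Hedged sketch of the maximum-likelihood strength score for contextual
# features. Feature keys and counts are toy values for illustration only.
from collections import Counter

# feature -> number of positive paraphrase examples it surrounds
positive_count = Counter({
    (("PRP", "TO"), ("PRP", "TO")): 12,
    (("DT", "IN"), ("DT", "IN")): 3,
})
# feature -> total number of examples (positive or negative) it surrounds
total_count = Counter({
    (("PRP", "TO"), ("PRP", "TO")): 14,
    (("DT", "IN"), ("DT", "IN")): 11,
})

def strength(feature):
    total = total_count[feature]
    return positive_count[feature] / total if total else 0.0

STRENGTH_THRESHOLD = 0.7  # illustrative cutoff, not the paper's setting
strong_features = [f for f in total_count if strength(f) >= STRENGTH_THRESHOLD]
print(strong_features)
```

after this scoring step, the extracted list of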
contextual features is thresholded on the basis of this strength valuethe remaining contextual rules are then applied to the corpora to obtain additional positive and negative paraphrase examples that in turn lead to more refined contextual rules and so onthe process is repeated for a fixed number of iterations or until no new paraphrase examples are producedthe list of extracted paraphrases at the end of the final iteration represents the final output of the algorithmin total about 9 000 phrasal paraphrases are extracted from 11 translations of five works of classic literaturefurthermore the extracted paraphrase pairs are also generalized into about 25 patterns by extracting partofspeech tag sequences corresponding to the tokens of the paraphrase pairsbarzilay and mckeown also perform an interesting comparison with another technique that was originally developed for compiling translation lexicons from bilingual parallel corpora this technique first compiles an initial lexicon using simple cooccurrence statistics and then uses a competitive linking algorithm to improve the quality of the lexiconthe authors apply this technique to their monolingual parallel data and observe that the extracted paraphrase pairs are of much lower quality than the pairs extracted by their own methodwe present similar observations in section 35 and highlight that although more recent translation techniques specifically ones that use phrases as units of translationare better suited to the task of generating paraphrases than the competitive linking approach they continue to suffer from the same problem of low precisionon the other hand such techniques can take good advantage of large bilingual corpora and capture a much larger variety of paraphrastic phenomenaibrahim katz and lin propose an approach that applies a modified version of the dependency path distributional similarity algorithm proposed by lin and pantel to the same monolingual parallel corpus used by barzilay and mckeown the authors claim that their technique is more tractable than lin and pantels approach since the sentencealigned nature of the input parallel corpus obviates the need to compute similarity over tree paths drawn from sentences that have zero semantic overlapfurthermore they also claim that their technique exploits the parallel nature of a corpus more effectively than barzilay and mckeowns approach simply because their technique uses tree paths and not just lexical informationspecifically they propose the following modifications to lin and pantels algorithm despite the authors claims they offer no quantitative evaluation comparing their paraphrases with those from lin and pantel or from barzilay and mckeown it is also possible to find correspondences between the parallel sentences using a more direct approach instead of relying on distributional similaritypang knight and marcu propose an algorithm to align sets of parallel sentences driven entirely by the syntactic representations of the sentencesthe alignment algorithm outputs a merged lattice from which lexical phrasal and sentential paraphrases can simply be read offmore specifically they use the multipletranslation chinese corpus that was originally developed for machine translation evaluation and contains 11 humanwritten english translations for each sentence in a news documentusing several sentences explicitly equivalent in semantic content has the advantage of being a richer source for paraphrase inductionas a preprocessing step any group that contains sentences longer than 45 
words is discardednext each sentence in each of the groups is parsedall the parse trees are then iteratively merged into a shared forestthe merging algorithm proceeds topdown and continues to recursively merge constituent nodes that are expanded identicallyit stops upon reaching the leaves or upon encountering the same constituent node expanded using different grammar rulesfigure 5 shows how the merging algorithm would work on two simple parse treesin the figure it is apparent that the leaves of the forest encode paraphrasing informationhowever the merging only allows identical constituents to be considered as paraphrasesin addition keywordbased heuristics need to be employed to prevent inaccurate merging of constituent nodes due to say alternations of active and passive voices among the the merging algorithm how the merging algorithm works for two simple parse trees to produce a shared forestnote that for clarity not all constituents are expanded fullyleaf nodes with two entries represent paraphrases the word lattice generated by linearizing the forest in sentences in the grouponce the forest is created it is linearized to create the word lattice by traversing the nodes in the forest topdown and producing an alternative path in the lattice for each merged nodefigure 5 shows the word lattice generated for the simple twotree forestthe lattices also require some postprocessing to remove redundant edges and nodes that may have arisen due to parsing errors or limitations in the merging algorithmthe final output of the paraphrasing algorithm is a set of word lattices one for each sentence groupthese lattices can be used as sources of lexical as well as phrasal paraphrasesall alternative paths between any pair of nodes can be considered to be paraphrases of each otherfor example besides the obvious lexical paraphrases the paraphrase pair can also be extracted from the lattice in figure 5in addition each path between the start and the end nodes in the lattice represents a sentential paraphrase of the original 11 sentences used to create the latticethe direct alignment approach is able to leverage the sheer width of the input corpus to do away entirely with any need for measuring distributional similarityin general it has several advantagesit can capture a very large number of paraphrases each lattice has on the order of hundreds or thousands of paths depending on the average length of the sentence group that it was generated fromin addition the paraphrases produced are of better quality than other approaches employing parallel corpora for paraphrase induction discussed so farhowever the approach does have a couple of drawbacks the lattices described is built using 11 manually written translations of the same sentence each by a different translatorthere are very few corpora that provide such a large number of human translationsin recent years most mt corpora have had no more than four references which would certainly lead to much sparser word lattices and smaller numbers of paraphrases that can be extractedin fact given the cost and amount of effort required for humans to translate a relatively large corpus it is common to encounter corpora with only a single human translationwith such a corpus of course this technique would be unable to produce any paraphrasesone solution might be to augment the relatively few human translations with translations obtained from automatic machine translation systemsin fact the corpus used also contains besides the 11 human translations 6 translations of the same 
sentence by machine translation systems available on the internet at the time. however, no experiments are performed with the automatic translations. finally, an even more direct method to align equivalences in parallel sentence pairs can be effected by building on word alignment techniques from the field of statistical machine translation. current stateoftheart smt methods rely on unsupervised induction of word alignment between two bilingual parallel sentences to extract translation equivalences that can then be used to translate a given sentence in one language into another language. the same methods can be applied to monolingual parallel sentences without any loss of generality. quirk, brockett, and dolan use one such method to extract phrasal paraphrase pairs. furthermore, they use these extracted phrasal pairs to construct sentential paraphrases for new sentences. mathematically, quirk, brockett, and dolan's approach to sentential paraphrase generation may be expressed in terms of the typical channel model equation for statistical machine translation: $\hat{e}_p = \arg\max_{e_p} P(e_p \mid e)$. the equation denotes the search for the optimal paraphrase $\hat{e}_p$ for a given sentence $e$. we may use bayes theorem to rewrite this as $\hat{e}_p = \arg\max_{e_p} P(e \mid e_p)\,P(e_p)$, where $P(e_p)$ is an ngram language model providing a probabilistic estimate of the fluency of a hypothesis $e_p$, and $P(e \mid e_p)$ is the translation model, or more appropriately for paraphrasing the replacement model, providing a probabilistic estimate of what is essentially the semantic adequacy of the hypothesis paraphrase. therefore, the optimal sentential paraphrase may loosely be described as one that fluently captures most if not all of the meaning contained in the input sentence. it is important to provide a brief description of the parallel corpus used here, because unsupervised induction of word alignments typically requires a relatively large number of parallel sentence pairs. the monolingual parallel corpus is constructed by scraping online news sites for clusters of articles on the same topic. such clusters contain the full text of each article and the dates and times of publication. after removing the markup, the authors discard any pair of sentences in a cluster where the difference in the lengths or the edit distance is larger than some stipulated value. this method yields a corpus containing approximately 140,000 quasiparallel sentence pairs $(e^1, e^2)$, where $e^1 = e^1_1 e^1_2 \ldots e^1_m$ and $e^2 = e^2_1 e^2_2 \ldots e^2_n$. examples in the original presentation show that the proposed method can work well. for more details on the creation of this corpus we refer the reader to dolan, quirk, and brockett, and more specifically to section 4. algorithm 3 shows how to generate a set of phrasal paraphrase pairs and compute a probability value for each such pair (algorithm 3, in summary: estimate a simple english-to-english phrase translation model from the monolingual parallel corpus c using word alignments, then use this model to create sentential paraphrases as explained later). in step 2, a simple parameter estimation technique is used to compute, for later use, the probability of replacing any given word with another. step 3 computes a word alignment between each pair of sentences. this alignment indicates, for each word $e_i$ in one string, that word $e_j$ in the other string from which it was most likely produced. steps 4-7 extract from each pair of sentences pairs of short contiguous phrases that are aligned with each other according to this alignment.
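the phrase-pair extraction of steps 4-7 can be sketched as follows. the sentence pair and its word alignment are hard-coded toy inputs and the consistency check is a simplified version of standard smt phrase extraction; in the actual system the alignment is induced automatically in step 3 and the extracted pairs are then scored with the lexical replacement probabilities estimated in step 2.

```python
# Hedged sketch of phrase-pair extraction from a word-aligned monolingual
# sentence pair: keep contiguous spans whose words align only to each other.
def extract_phrase_pairs(src, tgt, alignment, max_len=3):
    pairs = set()
    for i in range(len(src)):
        for j in range(i, min(i + max_len, len(src))):
            # target positions linked to the source span [i, j]
            tgt_pos = [t for s, t in alignment if i <= s <= j]
            if not tgt_pos:
                continue
            lo, hi = min(tgt_pos), max(tgt_pos)
            if hi - lo >= max_len:
                continue
            # consistency: no alignment link may cross the phrase boundary
            if any((i <= s <= j) != (lo <= t <= hi) for s, t in alignment):
                continue
            pairs.add((" ".join(src[i:j + 1]), " ".join(tgt[lo:hi + 1])))
    return pairs

s1 = "the army denied the charges".split()
s2 = "the military rejected the accusations".split()
alignment = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]  # toy one-to-one links

for pair in sorted(extract_phrase_pairs(s1, s2, alignment)):
    print(pair)
```

returning to algorithm 3, note that each such extracted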
pair is essentially a phrasal paraphrasefinally a probability value is computed for each such pair by assuming that each word of the first phrase can be replaced with each word of the second phrasethis computation uses the lexical replacement probabilities computed in step 2now that a set of scored phrasal pairs has been extracted these pairs can be used to generate paraphrases for any unseen sentencegeneration proceeds by creating a lattice for the given sentencegiven a sentence e the lattice is populated as follows figure 6 shows an example latticeonce the lattice has been constructed it is straightforward to extract the 1best paraphrase by using a dynamic programming algorithm such as viterbi decoding and extracting the optimal path from the lattice as scored by the product of an ngram language model and the replacement modelin addition as with smt decoding it is also possible to extract a list of nbest paraphrases from the lattice by using the appropriate algorithms quirk brockett and dolan borrow from the statistical machine translation literature so as to align phrasal equivalences as well as to utilize the aligned phrasal equivalences to rewrite new sentencesthe biggest advantage of this method is its smt inheritance it is possible to produce multiple sentential paraphrases for any new a paraphrase generation lattice for the sentence he ate lunch at a cafe near parisalternate paths between various nodes represent phrasal replacementsthe probability values associated with each edge are not shown for the sake of clarity sentence and there is no limit on the number of sentences that can be paraphrased7 however there are certain limitations all of these limitations combined lead to paraphrases that although grammatically sound contain very little varietymost sentential paraphrases that are generated involve little more than simple substitutions of words and short phrasesin section 35 we will discuss other approaches that also find inspiration from statistical machine translation and attempt to circumvent the above limitations by using a bilingual parallel corpus instead of a monolingual parallel corpuswhereas it is clearly to our advantage to have monolingual parallel corpora such corpora are usually not very readily availablethe corpora usually found in the real world are comparable instead of being truly parallel parallelism between sentences is replaced by just partial semantic and topical overlap at the level of documentstherefore for monolingual comparable corpora the task of finding phrasal correspondences becomes harder because the two corpora may only be related by way of describing events under the same topicin such a scenario possible paraphrasing methods either forgo any attempts at directly finding such correspondences and fall back to the distributional similarity workhorse or attempt to directly induce a form of coarsegrained alignment between the two corpora and leverage this alignmentin this section we describe three methods that generate paraphrases from such comparable corporathe first method falls under category here the elements whose distributional similarity is being measured are paraphrastic patterns and the distributions themselves are the named entities with which the elements occur in various sentencesin contrast the next two methods fall under category and attempt to directly discover correspondences between two comparable corpora by leveraging a novel alignment algorithm combined with some similarity heuristicsthe difference between the two latter methods 
lies only in the efficacy of the alignment algorithmshinyama et al use two sets of 300 news articles from two different japanese newspapers from the same day as their source of paraphrasesthe comparable nature of the articles is ensured because both sets are from the same dayduring preprocessing all named entities in each article are tagged and dependency parses are created for each sentence in each articlethe distributional similarity driven algorithm then proceeds as follows at the end the output is a list of generalized paraphrase patterns with named entity types as variablesfor example the algorithm may generate the following two patterns as paraphrases is promoted to the promotion of to is decided as a later refinement sekine makes a similar attempt at using distributional similarity over named entity pairs in order to produce a list of fully lexicalized phrasal paraphrases for specific concepts represented by keywordsthe idea of enlisting named entities as proxies for detecting semantic equivalence is interesting and has certainly been explored before however it has some obvious disadvantagesthe authors manually evaluate the technique by generating paraphrases for two specific domains and find that while the precision is reasonably good the coverage is very low primarily due to restrictions on the patterns that may be extracted in step 2in addition if the average number of entities in sentences is low the likelihood of creating incorrect paraphrases is confirmed to be higherlet us now consider the altogether separate idea of deriving coarsegrained correspondences by leveraging the comparable nature of the corporabarzilay and lee attempt to do so by generating compact sentence clusters in template form separately from each corpora and then pairing up templates from one corpus with those from the otheronce the templates are paired up a new incoming sentence that matches one member of a template pair gets rendered as the other member thereby generating a paraphrasethey use as input a pair of corpora the first consisting of clusters of news articles published by agence france presse and the second consisting of those published by reutersthe two corpora may be considered comparable since the articles in each are related to the same topic and were published during the same time framealgorithm 4 shows some details of how their technique workssteps 318 show how to cluster topically related sentences construct a word lattice from such a cluster and convert that into a slotted latticebasically a word lattice with certain nodes recast as variables or empty slotsthe clustering is done so as to bring together sentences pertaining to the same topics and having similar structurethe word lattice is the product of an algorithm that computes a multiplesequence alignment for a cluster of sentences a very brief outline of such an algorithm originally developed to compute an alignment for a set of three or more protein or dna sequences is as follows9 the word lattice so generated now needs to be converted into a slotted lattice to allow its use as a paraphrase templateslotting is performed based on the following intuition areas of high variability between backbone nodes that is several distinct parallel paths may correspond to template arguments and can be collapsed into one slot that can be filled by these argumentshowever multiple parallel paths may also appear in the lattice because of simple synonymy and those paths must be retained for paraphrase generation to be usefulto differentiate between the 
two cases a synonymy threshold s of 30 is used as shown in steps 814the basic idea behind the threshold is that as the number of sentences increases the number of different arguments will increase faster than the number of synonymsfigure 7 shows how a very simple word lattice may be generalized into a slotted latticeonce all the slotted lattices have been constructed for each corpus steps 1924 try to match the slotted lattices extracted from one corpus to those extracted from the other by referring back to the sentence clusters from which the original lattices were algorithm 4 generate set m of matching lattice pairs given a pair of comparable corpora c1 and c2summarygather topically related sentences from c1 into clustersdo the same for c2convert each sentence cluster into a slotted lattice using a multiplesequence alignment algorithmcompare all lattice pairs and output those likely to be paraphrastic generated comparing the sentences that were written on the same day and computing a comparison score based on overlap between the sets of arguments that fill the slotsif this computed score is greater than some fixed threshold value b then the two lattices are considered to be paraphrases of each otherbesides generating pairs of paraphrastic patterns the authors go one step further and actually use the patterns to generate paraphrases for new sentencesgiven such a sentence s the first step is to find an existing slotted lattice from either corpus that aligns best with s in terms of the previously mentioned alignment scoring functionif some lattice is found as a match then all that remains is to take all corresponding lattices from the other corpus that are paired with this lattice and use them to create an example showing the generalization of the word lattice into a slotted lattice the word lattice is produced by aligning seven sentencesnodes having indegrees 1 occur in more than one sentencenodes with thick incoming edges occur in all sentences multiple rewritings for s rewriting in this context is a simple matter of substitution for each slot in the matching lattice we know not only the argument from the sentence that fills it but also the slot in the corresponding rewriting latticeas far as the quality of acquired paraphrases is concerned this approach easily outperforms almost all other sentential paraphrasing approaches surveyed in this articlehowever a paraphrase is produced only if the incoming sentence matches some existing template which leads to a strong bias favoring quality over coveragein addition the construction and generalization of lattices may become computationally expensive when dealing with much larger corporawe can also compare and contrast barzilay and lees work and the work from section 33 that seems most closely related that of pang knight and marcu both take sentences grouped together in a cluster and align them into a lattice using a particular algorithmpang knight and marcu have a predefined size for all clusters since the input corpus is an 11way parallel corpushowever barzilay and lee have to construct the clusters from scratch because their input corpus has no predefined notion of parallelism at the sentence levelboth approaches use word lattices to represent and induce paraphrases since a lattice can efficiently and compactly encode ngram similarities between a large number of sentenceshowever the two approaches are also different in that pang knight and marcu use the parse trees of all sentences in a cluster to compute the alignment whereas barzilay and lee 
use only surface level informationfurthermore barzilay and lee can use their slotted lattice pairs to generate paraphrases for novel and unseen sentences whereas pang knight and marcu cannot paraphrase new sentences at allshen et al attempt to improve barzilay and lees technique by trying to include syntactic constraints in the cluster alignment algorithmin that way it is doing something similar to what pang knight and marcu do but with a comparable corpus instead of a parallel onemore precisely whereas barzilay and lee use a relatively simple alignment scoring function based on purely lexical features shen et al try to bring syntactic features into the mixthe motivation is to constrain the relatively free nature of the alignment generated by the msa algorithmwhich may lead to the generation of grammatically incorrect sentencesby using informative syntactic featuresin their approach even if two words are a lexical matchas defined by barzilay and lee they are further inspected in terms of certain predefined syntactic featurestherefore when computing the alignment similarity score two lexically matched words across a sentence pair are not considered to fully match unless their score on syntactic features also exceeds a preset thresholdthe syntactic features constituting the additional constraints are defined in terms of the output of a chunk parsersuch a parser takes as input the syntactic trees of the sentences in a topic cluster and provides the following information for each word with this information and a heuristic to compute the similarity between two words in terms of their pos and iob tags the alignment similarity score can be calculated as the sum of the heuristic similarity value for the given two words and the heuristic similarity values for each corresponding node in the two iob chainsif this score is higher than some threshold and the two words have similar positions in their respective sentences then the words are considered to be a match and can be alignedgiven this alignment algorithm the word lattice representing the global alignment is constructed in an iterative manner similar to the msa approachshen et al present evidence from a manual evaluation that sentences sampled from lattices constructed via the syntactically informed alignment method receive higher grammaticality scores as compared to sentences from the lattices constructed via the purely lexical methodhowever they present no analysis of the actual paraphrasing capacity of their presumably better aligned latticesindeed they explicitly mention that their primary goal is to measure the correlation between the syntaxaugmented scoring function and the correctness of the sentences being generated from such lattices even if the sentences do not bear a paraphrastic relationship to the inputeven if one were to assume that the syntaxbased alignment method would result in better paraphrases it still would not address the primary weakness of barzilay and lees method paraphrases are only generated for new sentences that match an already existing lattice leading to lower coveragein the last decade there has been a resurgence in research on statistical machine translationthere has also been an accompanying dramatic increase in the number of available bilingual parallel corpora due to the strong interest in smt from both the public and private sectorsrecent research in paraphrase generation has attempted to leverage these very large bilingual corporain this section we look at such approaches that rely on the preservation of 
meaning across languages and try to recover said meaning by using cues from the second languageusing bilingual parallel corpora for paraphrasing has the inherent advantage that sentences in the other language are exactly semantically equivalent to sentences in the intended paraphrasing languagetherefore the most common way to generate paraphrases with such a corpus exploits both its parallel and bilingual natures align phrases across the two languages and consider all coaligned phrases in the intended language to be paraphrasesthe bilingual phrasal alignments can simply be generated by using the automatic techniques developed for the same task in the smt literaturetherefore arguably the most important factor affecting the performance of these techniques is usually the quality of the automatic bilingual phrasal alignment techniquesone of the most popular methods leveraging bilingual parallel corpora is that proposed by bannard and callisonburch this technique operates exactly as described above by attempting to infer semantic equivalence between phrases in the same language indirectly with the second language as a bridgetheir approach builds on one of the initial steps used to train a phrasebased statistical machine translation system such systems rely on phrase tablesa tabulation of correspondences between phrases in the source language and phrases in the target languagethese tables are usually extracted by inducing word alignments between sentence pairs in a training corpus and then incrementally building longer phrasal correspondences from individual words and shorter phrasesonce such a tabulation of bilingual phrasal correspondences is available correspondences between phrases in one language may be inferred simply by using the phrases in the other language as pivotsalgorithm 5 shows how monolingual phrasal correspondences are extracted from a bilingual corpus c by using word alignmentssteps 37 extract bilingual phrasal correspondences from each sentence pair in the corpus by using heuristically induced bidirectional word alignmentsfigure 8 illustrates this extraction process for two example sentence pairsfor each pair a matrix shows the alignment between the chinese and the english wordselement of the matrix is filled if there is an alignment link between the ith chinese word and the jth english word ejall phrase pairs consistent with the word alignment are then extracteda consistent phrase pair can intuitively be thought of as a submatrix where all alignment points for its rows and columns are inside it and never outsidenext steps 811 take all english phrases that correspond to the same foreign phrase and infer them all to be paraphrases of each other10 for example the english paraphrase pair is obtained from figure 8 by pivoting on the chinese phrase shown underlined for both matricesalgorithm 5 generate set m of monolingual paraphrase pairs given a bilingual parallel corpus c summaryextract bilingual phrase pairs from c using word alignments and standard smt heuristicspivot all pairs of english phrases on any shared foreign phrases and consider them paraphrasesthe alignment notation from algorithm 3 is employed where both p and p can be computed using maximum likelihood estimation as part of the bilingual phrasal extraction process number of times f is extracted with ej number of times f is extracted with any e p once the probability values are obtained the most likely paraphrase can be chosen for any phrasebannard and callisonburch are able to extract millions of phrasal 
paraphrases from a bilingual parallel corpussuch an approach is able to capture a large variety of paraphrastic phenomena in the inferred paraphrase pairs but is seriously limited by the bilingual word alignment techniqueeven stateoftheart alignment methods from smt are known to be notoriously unreliable when used for aligning phrase pairsthe authors find via manual evaluation that the quality of the phrasal paraphrases obtained via manually constructed word alignments is significantly better than that of the paraphrases obtained from automatic alignmentsit has been widely reported that the existing bilingual word alignment techniques are not ideal for use in translation and furthermore improving these techniques does not always lead to an improvement in translation performancemore details on the relationship between word alignment and smt can be found in the comprehensive smt survey recently published by lopez paraphrasing done via bilingual corpora relies on the word alignments in the same way as a translation system would and therefore would be equally susceptible to the shortcomings of the word alignment techniquesto determine how noisy automatic word alignments affect paraphrasing done via bilingual corpora we inspected a sample of paraphrase pairs that were extracted when using arabica language significantly different from englishas the pivot language11 in this study we found that the paraphrase pairs in the sample set could be grouped into the following three broad categories form of one of the words in the phrases and cannot really be considered paraphrasesexamples besides there being obvious linguistic differences between arabic and english the primary reason for the generation of phrase pairs that lie in categories and is incorrectly induced alignments between the english and arabic words and hence phrasestherefore a good portion of subsequent work on paraphrasing using bilingual corpora as discussed below focuses on using additional machinery or evidence to cope with the noisy alignment processbefore we continue we believe it would be useful to draw a connection between bannard and callisonburchs work and that of wu and zhou as discussed in section 32note that both of these techniques rely on a secondary language to provide the cues for generating paraphrases in the primary languagehowever wu and zhou rely on a precompiled bilingual dictionary to discover these cues whereas bannard and callisonburch have an entirely datadriven discovery processin an attempt to address some of the noisy alignment issues callisonburch recently proposed an improvement that places an additional syntactic constraint on the phrasal paraphrases extracted via the pivotbased method from bilingual corpora and showed that using such a constraint leads to a significant improvement in the quality of the extracted paraphrases12 the syntactic constraint requires that the extracted paraphrase be of the same syntactic type as the original phrasewith this constraint estimating the paraphrase probability now requires the incorporation of syntactic type into the equation where s denotes the syntactic type of the english phrase e as before maximum likelihood estimation is employed to compute the two component probabilities number of times f is extracted with ej and type s number of times f is extracted with any e and type s p if the syntactic types are restricted to be simple constituents then using this constraint will actually exclude some of the paraphrase pairs that could have been extracted in the unconstrained 
approachthis leads to the familiar precisionrecall tradeoff it only extracts paraphrases that are of higher quality but the approach has a significantly lower coverage of paraphrastic phenomena that are not necessarily syntactically motivatedto increase the coverage complex syntactic types such as those used in combinatory categorial grammars are employed which can help denote a syntactic constituent with children missing on the left andor right hand sidesan example would be the complex type vp which denotes a verb phrase missing a noun phrase to its right which in turn is missing a plural noun to its rightthe primary benefit of using complex types is that less useful paraphrastic phrase pairs from different syntactic categories such as that would have been allowed in the unconstrained pivotbased approach are now disallowedthe biggest advantage of this approach is the use of syntactic knowledge as one form of additional evidence in order to filter out phrase pairs from categories and as defined in the context of our manual inspection of pivotbased paraphrases aboveindeed the authors conduct a manual evaluation to show that the syntactically constrained paraphrase pairs are better than those produced without such constraintshowever there are two additional benefits of this technique we must also note that requiring syntactic constraints for pivotbased paraphrase extraction restricts the approach to those languages where a reasonably good parser is availablean obvious extension of the callisonburch style approach is to use the collection of pivoted englishtoenglish phrase pairs to generate sentential paraphrases for new sentencesmadnani et al combine the pivotbased approach to paraphrase acquisition with a welldefined englishtoenglish translation model that is then used in an smt system yielding sentential paraphrases by means of translating input english sentenceshowever instead of fully lexicalized phrasal correspondences as in the fundamental units of translation are hierarchical phrase pairsthe latter can be extracted from the former by replacing aligned subphrases with nonterminal symbolsfor example given the initial phrase pair growth rate has been effectively contained the hierarchical phrase pair can be formed13 each hierarchical phrase pair can also have certain features associated with it that are estimated via maximum likelihood estimation during the extraction processsuch phrase pairs can formally be considered the rules of a bilingual synchronous contextfree grammar translation with scfgs is equivalent to parsing the string in the source language using these rules to generate the highestscoring tree and then reading off the tree in target orderfor the purposes of this survey it is sufficient to state that efficient methods to extract such rules to estimate their features and to translate with them are now well establishedfor more details on building scfgbased models and translating with them we refer the reader to once a set of bilingual hierarchical rules has been extracted along with associated features the pivoting trick can be applied to infer monolingual hierarchical paraphrase pairs however the patterns are not the final output and are actually used as rules from a monolingual scfg grammar in order to define an englishtoenglish translation modelfeatures for each monolingual rule are estimated in terms of the features of the bilingual pairs that the rule was inferred froma sentential paraphrase can then be generated for any given sentence by using this model along with an 
ngram language model and a regular smt decoder to paraphrase any sentence just as one would translate bilinguallythe primary advantage of this approach is the ability to produce good quality sentential paraphrases by leveraging the smt machinery to address the noise issuehowever although the decoder and the language model do serve to counter the noisy word alignment process they do so only to a degree and not entirelyagain we must draw a connection between this work and that of quirk brockett and dolan because both treat paraphrasing as monolingual translationhowever as outlined in the discussion of that work quirk brockett and dolan use a relatively simplistic translation model and decoder which leads to paraphrases with little or no lexical varietyin contrast madnani et al use a more complex translation model and an unmodified stateoftheart smt decoder to produce paraphrases that are much more diverseof course the reliance of the latter approach on automatic word alignments does inevitably lead to much noisier sentential paraphrases than those produced by quirk brockett and dolankok and brockett present a novel take on generating phrasal paraphrases with bilingual corporaas with most approaches based on parallel corpora they also start with phrase tables extracted from such corpora along with the corresponding phrasal translation probabilitieshowever instead of performing the usual pivoting step with the bilingual phrases in the table they take a graphical approach and represent each phrase in the table as a node leading to a bipartite graphtwo nodes in the graph are connected to each other if they are aligned to each otherin order to extract paraphrases they sample random paths in the graph from any english node to anothernote that the traditional pivot step is equivalent to a path of length two one english phrase to the foreign pivot phrase and then to the potentially paraphrastic english phraseby allowing paths of lengths longer than two this graphical approach can find more paraphrases for any given english phrasefurthermore instead of restricting themselves to a single bilingual phrase table they take as input a number of phrase tables each corresponding to a different pair of six languagessimilar to the singletable case each phrase in each table is represented as a node in a graph that is no longer bipartite in natureby allowing edges to exist between nodes of all the languages if they are aligned the pivot can now even be a set of nodes rather than a single node in another languagefor example one could easily find the following path in such a graph ate lunch aßen zu ittag aten een hapje had a bite in general each edge is associated with a weight corresponding to the bilingual phrase translation probabilityrandom walks are then sampled from the graph in such a way that only paths of high probability end up contributing to the extracted paraphrasesobviously the alignment errors discussed in the context of simple pivoting will also have an adverse effect on this approachin order to prevent this the authors add special feature nodes to the graph in addition to regular nodesthese feature nodes represent domainspecific knowledge of what would make good paraphrasesfor example nodes representing syntactic equivalence classes of the start and end words of the english phrases are addedthis indicates that phrases that start and end with the same kind of words are likely to be paraphrasesastute readers will make the following observations about the syntactic feature nodes used by the authors 
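To make the basic pivot computation described above concrete, a minimal sketch follows. It is illustrative only: the two phrase tables are hypothetical, stored as nested dictionaries with invented probability values, and only the unconstrained estimate p(e2 | e1) = sum_f p(f | e1) * p(e2 | f) is implemented; the syntactic constraint of Callison-Burch or the feature nodes of Kok and Brockett would correspond to additional filters or edges layered on top of this basic step.

    from collections import defaultdict

    def pivot_paraphrases(e2f, f2e):
        """Infer English-English paraphrase probabilities from a bilingual
        phrase table by pivoting on shared foreign phrases:
            p(e2 | e1) = sum_f p(f | e1) * p(e2 | f)
        e2f maps an English phrase to {foreign phrase: p(f | e)};
        f2e maps a foreign phrase to {English phrase: p(e | f)}.
        Both tables are assumed to hold relative-frequency (MLE) estimates."""
        paraphrases = defaultdict(lambda: defaultdict(float))
        for e1, foreign_dist in e2f.items():
            for f, p_f_given_e1 in foreign_dist.items():
                for e2, p_e2_given_f in f2e.get(f, {}).items():
                    if e2 != e1:  # do not pair a phrase with itself
                        paraphrases[e1][e2] += p_f_given_e1 * p_e2_given_f
        return paraphrases

    # Toy phrase tables with invented probability values (purely illustrative).
    e2f = {
        "under control": {"unter kontrolle": 1.0},
        "in check":      {"unter kontrolle": 0.8, "im zaum": 0.2},
    }
    f2e = {
        "unter kontrolle": {"under control": 0.7, "in check": 0.3},
        "im zaum":         {"in check": 1.0},
    }
    for e1, candidates in pivot_paraphrases(e2f, f2e).items():
        best = max(candidates, key=candidates.get)
        print(f"{e1!r} -> {best!r} ({candidates[best]:.2f})")

Running this prints, for each input phrase, its highest-scoring pivoted paraphrase together with the summed pivot probability; in a real system the tables would be the output of bidirectional word alignment and phrase extraction over a bilingual parallel corpus rather than hand-written dictionaries.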
the authors extract paraphrases for a small set of input english paraphrases and show that they are able to generate a larger percentage of correct paraphrases compared to the syntactically constrained approach proposed by callisonburch they conduct no formal evaluation of the coverage of their approach but show that in a limited setting it is higher than that for the syntactically constrained pivotbased approachhowever they perform no comparisons of their coverage with the original pivotbased approach before we present some specific techniques from the literature that have been employed to evaluate paraphrase generation methods it is important to examine some recent work that has been done on constructing paraphrase corporaas part of this work human subjects are generally asked to judge whether two given sentences are paraphrases of each otherwe believe that a detailed examination of this manual evaluation task provides an illuminating insight into the nature of a paraphrase in a practical rather than a theoretical contextin addition it has obvious implications for any method whether manual or automatic that is used to evaluate the performance of a paraphrase generatordolan and brockett were the first to attempt to build a paraphrase corpus on a large scalethe microsoft research paraphrase corpus is a collection of 5801 sentence pairs each manually labeled with a binary judgment as to whether it constitutes a paraphrase or notas a first step the corpus was created using a heuristic extraction method in conjunction with an svmbased classifier that was trained to select likely sentential paraphrases from a large monolingual corpus containing news article clustershowever the more interesting aspects of the task were the subsequent evaluation of these extracted sentence pairs by human annotators and the set of issues encountered when defining the evaluation guidelines for these annotatorsit was observed that if the human annotators were instructed to mark only the sentence pairs that were strictly semantically equivalent or that exhibited bidirectional entailment as paraphrases then the results were limited to uninteresting sentence pairs such as the following s1 the euro rose above us118 the highest price since its january 1999 launchs2 the euro rose above 118 the highest level since its launch in january 1999s1 however without a carefully controlled study there was little clear proof that the operation actually improves peoples livess2 but without a carefully controlled study there was little clear proof that the operation improves peoples livesinstead they discovered that most of the complex paraphrasesones with alternations more interesting than simple lexical synonymy and local syntactic changes exhibited varying degrees of semantic divergencefor example therefore in order to be able to create a richer paraphrase corpus one with many complex alternations the instructions to the annotators had to be relaxed the degree of mismatch accepted before a sentence pair was judged to be fully semantically divergent was left to the human subjectsit is also reported that given the idiosyncratic nature of each sentence pair only a few formal guidelines were generalizable enough to take precedence over the subjective judgments of the human annotatorsdespite the somewhat loosely defined guidelines the interannotator agreement for the task was 84however a kappa score of 62 indicated that the task was overall a difficult one at the end 67 of the sentence pairs were judged to be paraphrases of each other 
and the rest were judged to be nonequivalent14 although the msrp corpus is a valuable resource and its creation provided valuable insight into what constitutes a paraphrase in the practical sense it does have some shortcomingsfor example one of the heuristics used in the extraction process was that the two sentences in a pair must share at least three wordsusing this constraint rules out any paraphrase pairs that are fully lexically divergent but still semantically equivalentthe small size of the corpus when combined with this and other such constraints precludes the use of the corpus as training data for a paraphrase generation or extraction systemhowever it is fairly useful as a freely available test set to evaluate paraphrase recognition methodson a related note fujita and inui take a more knowledgeintensive approach to building a japanese corpus containing sentence pairs with binary paraphrase judgments and attempt to focus on variety and on minimizing the human annotation costthe corpus contains 2031 sentence pairs each with a human judgment indicating whether the paraphrase is correct or notto build the corpus they first stipulate a typology of paraphrastic phenomena and then manually create a set of morphosyntactic paraphrasing rules and patterns describing each type of paraphrasing phenomenona paraphrase generation system using these rules is then applied to a corpus containing japanese news articles and example paraphrases are generated for the sentences in the corpusthese paraphrase pairs are then handed to two human annotators who create binary judgments for each pair indicating whether or not the paraphrase is correctusing a classoriented approach is claimed to have a twofold advantage the biggest disadvantage of this approach is that only two types of paraphrastic phenomena are used lightverb constructions and transitivity alternations the corpus indeed captures almost all examples of both types of paraphrastic phenomena and any that are absent can be easily covered by adding one or two more patterns to the classthe claim of reduced annotation cost is not necessarily borne out by the observationsdespite partitioning the annotation task by types it was still difficult to provide accurate annotation guidelinesthis led to a significant difference in annotation timewith some annotations taking almost twice as long as othersgiven the small size of the corpus it is unlikely that it may be used as training data for corpusbased paraphrase generation methods and like the msrp corpus would be best suited to the evaluation of paraphrase recognition techniquesmost recently cohn callisonburch and lapata describe a different take on the creation of a monolingual parallel corpus containing 900 sentence pairs with paraphrase annotations that can be used for both development and evaluation of paraphrase systemsthese paraphrase annotations take the form of alignments between the words and sequences of words in each sentence pair these alignments are analogous to the word and phrasalalignments induced in smt systems that were illustrated in section 35as is the case with smt alignments the paraphrase annotations can be of different forms onewordtooneword onewordtomanywords as well as fully phrasal alignments15 the authors start from a sentencealigned paraphrase corpus compiled from three corpora that we have already described elsewhere in this survey the sentence pairs judged equivalent from the msrp corpus the multiple translation chinese corpus of multiple humanwritten translations of chinese 
news stories used by pang knight and marcu and two english translations of the french novel twenty thousand leagues under the sea a subset of the monolingual parallel corpus used by barzilay and mckeown the words in each sentence pair from this corpus are then aligned automatically to produce the initial paraphrase annotations that are then refined by two human annotatorsthe annotation guidelines required that the annotators judge which parts of a given sentence pair were in correspondence and to indicate this by creating an alignment between those parts two parts were said to correspond if they could be substituted for each other within the specific context provided by the respective sentence pairin addition the annotators were instructed to classify the created alignments as either sure or possible for example given the following paraphrastic sentence pair the phrase pair will be aligned as a sure correspondence whereas the phrase pair will be aligned as a possible correspondenceother examples of possible correspondences could include the same stem expressed as different partsofspeech or two nonsynonymous verbs for more details on the alignment guidelines that were provided to the annotators we refer the reader to extensive experiments are conducted to measure interannotator agreements and obtain good agreement values but they are still low enough to confirm that it is difficult for humans to recognize paraphrases even when the task is formulated differentlyoverall such a paraphrase corpus with detailed paraphrase annotations is much more informative than a corpus containing binary judgments at the sentence level such as the msrp corpusas an example because the corpus contains paraphrase annotations at the word as well as phrasal levels it can be used to build systems that can learn from these annotations and generate not only fully lexicalized phrasal paraphrases but also syntactically motivated paraphrastic patternsto demonstrate the viability of the corpus for this purpose a grammar induction algorithm is appliedoriginally developed for sentence compressionto the parsed version of their paraphrase corpus and the authors show that they can learn paraphrastic patterns such as those shown in figure 9in general building paraphrase corpora whether it is done at the sentence level or at the subsentential level is extremely useful for the fostering of further research and development in the area of paraphrase generationwhereas other language processing tasks such as machine translation and document summarization usually have multiple annual communitywide evaluations using an example of syntactically motivated paraphrastic patterns that can be extracted from the paraphrase corpus constructed by cohn callisonburch and lapata standard test sets and manual as well as automated metrics the task of automated paraphrasing does notan obvious reason for this disparity could be that paraphrasing is not an application in and of itselfhowever the existence of similar evaluations for other tasks that are not applications such as dependency parsing and word sense disambiguation suggests otherwisewe believe that the primary reason is that over the years paraphrasing has been employed in an extremely fragmented fashionparaphrase extraction and generation are used in different forms and with different names in the context of different applications this usage pattern does not allow researchers in one community to share the lessons learned with those from other communitiesin fact it may even lead to research 
being duplicated across communitieshowever more recent worksome of it discussed in this surveyon extracting phrasal paraphrases does include direct evaluation of the paraphrasing itself the original phrase and its paraphrase are presented to multiple human judges along with the contexts in which the phrase occurs in the original sentence who are asked to determine whether the relationship between the two phrases is indeed paraphrastic a more direct approach is to substitute the paraphrase in place of the original phrase in its sentence and present both sentences to judges who are then asked to judge not only their semantic equivalence but also the grammaticality of the new sentence motivation for such substitutionbased evaluation is discussed in callisonburch the basic idea being that items deemed to be paraphrases may behave as such only in some contexts and not othersszpektor shnarch and dagan posit a similar form of evaluation for textual entailment wherein the human judges are not only presented with the entailment rule but also with a sample of sentences that match its lefthand side and then asked to assess whether the rule holds under each specific instancesentential paraphrases may be evaluated in a similar fashion without the need for any surrounding context an intrinsic evaluation of this form must employ the usual methods for avoiding any bias and for maximizing interjudge agreementin addition we believe that given the difficulty of this task even for human annotators adherence to strict semantic equivalence may not always be a suitable guideline and intrinsic evaluations must be very carefully designeda number of these approaches also perform extrinsic evaluations in addition to the intrinsic one by utilizing the extracted or generated paraphrases to improve other applications such as machine translation and others as described in section 1another option when evaluating the quality of a paraphrase generation method is that of using automatic measuresthe traditional automatic evaluation measures of precision and recall are not particularly suited to this task because in order to use them a list of reference paraphrases has to be constructed against which these measures may be computedgiven that it is extremely unlikely that any such list will be exhaustive any precision and recall measurements will not be accuratetherefore other alternatives are neededsince the evaluation of paraphrases is essentially the task of measuring semantic similarity or of paraphrase recognition all of those metrics including the ones discussed in section 2 can be employed heremost recently callisonburch cohn and lapata discuss parametric another automatic measure that may be used to evaluate paraphrase extraction methodsthis work follows directly from the work done by the authors to create the paraphraseannotated corpus described in the previous sectionrecall that this corpus contains paraphrastic sentence pairs with annotations in the form of alignments between their respective words and phrasesit is posited that to evaluate any paraphrase generation method one could simply have it produce its own set of alignments for the sentence pairs in the corpus and precision and recall could then be computed over alignments instead of phrase pairsthese alignmentoriented precision and recall measures are computed as follows where denotes a sentence pair nm denotes the phrases extracted via the manual alignments for the pair and np denotes the phrases extracted via the automatic alignments induced using the 
paraphrase method p that is to be evaluatedthe phrase extraction heuristic used to compute np and nm from the respective alignments is the same as that employed by bannard and callisonburch and illustrated in figure 8although using alignments as the basis for computing precision and recall is a clever trick it does require that the paraphrase generation method be capable of producing alignments between sentence pairsfor example the methods proposed by pang knight and marcu and quirk brockett and dolan for generating sentential paraphrases from monolingual parallel corpora and described in section 33 do produce alignments as part of their respective algorithmsindeed callisonburch et al provide a comparison of their pivotbased approachoperating on bilingual parallel corporawith the two monolingual approaches just mentioned in terms of parametric since all three methods are capable of producing alignmentshowever for other approaches that do not necessarily operate at the level of sentences and cannot produce any alignments falling back on estimates of traditional formulations of precision and recall is suggestedthere has also been some preliminary progress toward using standardized test sets for intrinsic evaluationsa test set containing 20 afp articles about violence in the middle east that was used for evaluating the latticebased paraphrase technique in has been made freely available16 in addition to the original sentences for which the paraphrases were generated the set also contains the paraphrases themselves and the judgments assigned by human judges to these paraphrasesthe paraphraseannotated corpus discussed in the previous section would also fall under this category of resourcesas with many other fields in nlp paraphrase generation also lacks serious extrinsic evaluation as described herein many paraphrase generation techniques are developed in the context of a host nlp application and this application usually serves as one form of extrinsic evaluation for the quality of the paraphrases generated by that techniquehowever as yet there is no widely agreedupon method of extrinsically evaluating paraphrase generationaddressing this deficiency should be a crucial consideration for any future communitywide evaluation effortan important dimension for any area of research is the availability of fora where members of the community may share their ideas with their colleagues and receive valuable feedbackin recent years a number of such fora have been made available to the automatic paraphrasing community which represents an extremely important step toward countering the fragmented usage pattern described previouslyit is important for any survey to provide a look to the future of the surveyed task and general trends for the corresponding research methodswe identify several such trends in the area of paraphrase generation that are gathering momentumthe influence of the webthe web is rapidly becoming one of the most important sources of data for natural language processing applications which should not be surprising given its phenomenal rate of growththe freely available web data massive in scale has already had a definite influence over dataintensive techniques such as those employed for paraphrase generation however the availability of such massive amounts of web data comes with serious concerns for efficiency and has led to the development of efficient methods that can cope with such large amounts of databhagat and ravichandran extract phrasal paraphrases by measuring distributional similarity 
over a 150gb monolingual corpus via locality sensitive hashing a randomized algorithm that involves the creation of fingerprints for vectors in space because vectors that are more similar are more likely to have similar fingerprints vectors can simply be compared by comparing their fingerprints leading to a more efficient distributional similarity algorithm we also believe that the influence of the web will extend to other avenues of paraphrase generation such as the aforementioned extrinsic evaluation or lack thereoffor example fujita and sato propose evaluating phrasal paraphrase pairs automatically generated from a monolingual corpus by querying the web for snippets related to the pairs and using them as features to compute the pairs paraphrasabilitycombining multiple sources of informationanother important trend in paraphrase generation is that of leveraging multiple sources of information to determine whether two units are paraphrasticfor example zhao et al improve the sentential paraphrases that can be generated via the pivot method by leveraging five other sources in addition to the bilingual parallel corpus itself a corpus of web queries similar to the phrase definitions from the encarta dictionary a monolingual parallel corpus a monolingual comparable corpus and an automatically constructed thesaurusphrasal paraphrase pairs are extracted separately from all six models and then combined in a loglinear paraphrasingastranslation model proposed by madnani et al a manual inspection reveals that using multiple sources of information yields paraphrases with much higher accuracywe believe that such exploitation of multiple types of resources and their combinations is an important developmentzhao et al further increase the utility of this combination approach by incorporating application specific constraints on the pivoted paraphrasesfor example if the output paraphrases need to be simplified versions of the input sentences then only those phrasal paraphrase pairs are used where the output is shorter than the inputuse of smt machineryin theory statistical machine translation is very closely related to paraphrase generation since the former also relies on finding semantic equivalence albeit in a second languagehence there have been numerous paraphrasing approaches that have relied on different components of an smt pipeline as we saw in the preceding pages of this surveydespite the obvious convenience of using smt components for the purpose of monolingual translation we must consider that doing so usually requires additional work to deal with the added noise due to the nature of such componentswe believe that smt research will continue to influence research in paraphrasing both by providing readytouse building blocks and by necessitating development of methods to effectively use such components for the unintended task of paraphrase generationdomainspecific paraphrasingrecently work has been done to generate phrasal paraphrases in specialized domainsfor example in the field of health literacy it is well known that documents for health consumers are not very welltargeted to their purported audiencerecent research has shown how to generate a lexicon of semantically equivalent phrasal pairs of technical and lay medical terms from monolingual parallel corpora as well as monolingual comparable corpora examples include pairs such as and in another domain max proposes an adaptation of the pivotbased method to generate rephrasings of short text spans that could help a writer revise a textbecause the 
goal is to assist a writer in making revisions the rephrasings do not always need to bear a perfect paraphrastic relationship to the original a scenario suited for the pivotbased methodseveral variants of such adaptations are developed that generate candidate rephrasings driven by fluency semantic equivalence and authoring value respectivelywe also believe that a largescale annual communitywide evaluation should become a trend since it is required to foster further research in and use of paraphrase extraction and generationalthough there have been recent workshops and tasks on paraphrasing and entailment as discussed in section 5 this evaluation would be much more focused providing sets of shared guidelines and resources in the spirit of the recent nist mt evaluation workshops over the last two decades there has been much research on paraphrase extraction and generation within a number of research communities in natural language processing in order to improve the specific application with which that community is concernedhowever a large portion of this research can be easily adapted for more widespread use outside its particular host and can provide significant benefits to the whole fieldonly recently have there been serious efforts to conduct research on the topic of paraphrasing by treating it as an important natural language processing task independent of a host applicationin this article we have presented a comprehensive survey of the task of paraphrase extraction and generation motivated by the fact that paraphrases can help in a multitude of applications such as machine translation text summarization and information extractionthe aim was to provide an applicationindependent overview of paraphrase generation while also conveying an appreciation for the importance and potential use of paraphrasing in the field of nlp researchwe show that there are a large variety of paraphrase generation methods and each such method has a very different set of characteristics in terms of both its performance and its ease of deploymentwe also observe that whereas most of the methods in this survey can be used in multiple applications the choice of the most appropriate method depends on how well the characteristics of the produced paraphrases match the requirements of the downstream application in which the paraphrases are being utilized
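For reference, one plausible rendering of the alignment-oriented precision and recall measures discussed in the evaluation section above is the following; it is a reconstruction consistent with the surrounding description, and the exact formulation in the cited work may differ in detail. Here N_P(s1, s2) and N_M(s1, s2) are assumed to denote the sets of phrase pairs extracted, respectively, from the automatic alignments produced by the paraphrase method P under evaluation and from the manual alignments, for a sentence pair <s1, s2>:

\[
\mathrm{Precision}(P) = \frac{\sum_{\langle s_1, s_2 \rangle} \left| N_P(s_1, s_2) \cap N_M(s_1, s_2) \right|}{\sum_{\langle s_1, s_2 \rangle} \left| N_P(s_1, s_2) \right|}
\qquad
\mathrm{Recall}(P) = \frac{\sum_{\langle s_1, s_2 \rangle} \left| N_P(s_1, s_2) \cap N_M(s_1, s_2) \right|}{\sum_{\langle s_1, s_2 \rangle} \left| N_M(s_1, s_2) \right|}
\]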
J10-3003
generating phrasal and sentential paraphrases a survey of datadriven methodsthe task of paraphrasing is inherently familiar to speakers of all languagesmoreover the task of automatically generating or extracting semantic equivalences for the various units of language words phrases and sentences is an important part of natural language processing and is being increasingly employed to improve the performance of several nlp applicationsin this article we attempt to conduct a comprehensive and applicationindependent survey of datadriven phrasal and sentential paraphrase generation methods while also conveying an appreciation for the importance and potential use of paraphrases in the field of nlp researchrecent work done in manual and automatic construction of paraphrase corpora is also examinedwe also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generationwe survey a variety of data driven paraphrasing techniques categorizing them based on the type of data that they use
distributional memory a general framework for corpusbased semantics research into corpusbased semantics has focused on the development of ad hoc models that treat single tasks or sets of closely related tasks as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus as an alternative to this one task one model approach the distributional memory framework extracts distributional information once and for all from the corpus in the form of a set of weighted wordlinkword tuples arranged into a thirdorder tensor different matrices are then generated from the tensor and their rows and columns constitute natural spaces to deal with different semantic problems in this way the same distributional information can be shared across tasks such as modeling word similarity judgments discovering synonyms concept categorization predicting selectional preferences of verbs solving analogy problems classifying relations between word pairs harvesting qualia structures with patterns or example pairs predicting the typical properties of concepts and classifying verbs into alternation classes extensive empirical testing in all these domains shows that a distributional memory implementation performs competitively against taskspecific algorithms recently reported in the literature for the same tasks and against our implementations of several stateoftheart methods the distributional memory approach is thus shown to be tenable despite the constraints imposed by its multipurpose nature single tasks or sets of closely related tasks as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpusas an alternative to this one task one model approach the distributional memory framework extracts distributional information once and for all from the corpus in the form of a set of weighted wordlinkword tuples arranged into a thirdorder tensordifferent matrices are then generated from the tensor and their rows and columns constitute natural spaces to deal with different semantic problemsin this way the same distributional information can be shared across tasks such as modeling word similarity judgments discovering synonyms concept categorization predicting selectional preferences of verbs solving analogy problems classifying relations between word pairs harvesting qualia structures with patterns or example pairs predicting the typical properties of concepts and classifying verbs into alternation classesextensive empirical testing in all these domains shows that a distributional memory implementation performs competitively against taskspecific algorithms recently reported in the literature for the same tasks and against our implementations of several stateoftheart methodsthe distributional memory approach is thus shown to be tenable despite the constraints imposed by its multipurpose naturethe last two decades have seen a rising wave of interest among computational linguists and cognitive scientists in corpusbased models of semantic representation these models variously known as vector spaces semantic spaces word spaces corpusbased semantic models or using the term we will adopt distributional semantic models all rely on some version of the distributional hypothesis stating that the degree of semantic similarity between two words can be modeled as a function of the degree of overlap among their linguistic contextsconversely the format of distributional representations greatly varies depending on the specific aspects of 
meaning they are designed to modelthe most straightforward phenomenon tackled by dsms is what turney calls attributional similarity which encompasses standard taxonomic semantic relations such as synonymy cohyponymy and hypernymywords like dog and puppy for example are attributionally similar in the sense that their meanings share a large number of attributes they are animals they bark and so onattributional similarity is typically addressed by dsms based on word collocates these collocates are seen as proxies for various attributes of the concepts that the words denotewords that share many collocates denote concepts that share many attributesboth dog and puppy may occur near owner leash and bark because these words denote properties that are shared by dogs and puppiesthe attributional similarity between dog and puppy as approximated by their contextual similarity will be very highdsms succeed in tasks like synonym detection or concept categorization because such tasks require a measure of attributional similarity that favors concepts that share many properties such as synonyms and cohyponymshowever many other tasks require detecting different kinds of semantic similarityturney defines relational similarity as the property shared by pairs of words linked by similar semantic relations despite the fact that the words in one pair might not be attributionally similar to those in the other pair turney generalizes dsms to tackle relational similarity and represents pairs of words in the space of the patterns that connect them in the corpuspairs of words that are connected by similar patterns probably hold similar relations that is they are relationally similarfor example we can hypothesize that dogtail is more similar to carwheel than to doganimal because the patterns connecting dog and tail are more like those of carwheel than like those of doganimal turney uses the relational space to implement tasks such as solving analogies and harvesting instances of relationsalthough they are not explicitly expressed in these terms relation extraction algorithms also rely on relational similarity and focus on learning one relation type at a time although semantic similarity either attributional or relational has the lions share in dsms similarity is not the only aspect of meaning that is addressed by distributional approachesfor instance the notion of property plays a key role in cognitive science and linguistics which both typically represent concepts as clusters of properties in this case the task is not to find out that dog is similar to puppy or cat but that it has a tail it is used for hunting and so onalmuhareb baroni and lenci and baroni et al use the words cooccurring with a noun to approximate its most prototypical properties and correlate distributionally derived data with the properties produced by human subjectscimiano and wenderoth instead focus on that subset of noun properties known in lexical semantics as qualia roles and use lexical patterns to identify for example the constitutive parts of a concept or its function the distributional semantics methodology also extends to more complex aspects of word meaning addressing issues such as verb selectional preferences argument alternations event types and so forthfinally some dsms capture a sort of topical relatedness between words they might find for example a relation between dog and fidelitytopical relatedness addressed by dsms based on document distributions such as lsa and topic models is not further discussed in this articledsms have found 
wide applications in computational lexicography especially for automatic thesaurus construction corpusbased semantic models have also attracted the attention of lexical semanticists as a way to provide the notion of synonymy with a more robust empirical foundation moreover dsms for attributional and relational similarity are widely used for the semiautomatic bootstrapping or extension of terminological repositories computational lexicons and ontologies innovative applications of corpusbased semantics are also being explored in linguistics for instance in the study of semantic change lexical variation and for the analysis of multiword expressions the wealth and variety of semantic issues that dsms are able to tackle confirms the importance of looking at distributional data to explore meaning as well as the maturity of this research fieldhowever if we looked from a distance at the whole field of dsms we would see that besides the general assumption shared by all models that information about the context of a word is an important key in grasping its meaning the elements of difference overcome the commonalitiesfor instance dsms geared towards attributional similarity represent words in the contexts of other words thereby looking very different from models that represent word pairs in terms of patterns linking themin turn both these models differ from those used to explore concept properties or argument alternationsthe typical approach in the field has been a local one in which each semantic task is treated as a separate problem that requires its own corpusderived model and algorithm both optimized to achieve the best performance in a given task but lacking generality since they resort to taskspecific distributional representations often complemented by additional taskspecific resourcesas a consequence the landscape of dsms looks more like a jigsaw puzzle in which different parts have been completed and the whole figure starts to emerge from the fragments but it is not clear yet how to put everything together and compose a coherent picturewe argue that the one semantic task one distributional model approach represents a great limit of the current state of the artfrom a theoretical perspective corpusbased models hold promise as largescale simulations of how humans acquire and use conceptual and linguistic information from their environment however existing dsms lack exactly the multipurpose nature that is a hallmark of human semantic competencethe common view in cognitive science is that humans resort to a single semantic memory a relatively stable longterm knowledge database adapting the information stored there to the various tasks at hand the fact that dsms need to go back to their environment to collect ad hoc statistics for each semantic task and the fact that different aspects of meaning require highly different distributional representations cast many shadows on the plausibility of dsms as general models of semantic memoryfrom a practical perspective going back to the corpus to train a different model for each application is inefficient and it runs the risk of overfitting the model to a specific task while losing sight of its adaptivitya highly desirable feature for any intelligent systemthink by contrast of wordnet a single general purpose network of semantic information that has been adapted to all sorts of tasks many of them certainly not envisaged by the resource creatorswe think that it is not by chance that no comparable resource has emerged from dsm developmentin this article we 
want to show that a unified approach is not only a desirable goal but it is also a feasible onewith this aim in mind we introduce distributional memory a generalized framework for distributional semanticsdifferently from other current proposals that share similar aims we believe that the lack of generalization in corpusbased semantics stems from the choice of representing cooccurrence statistics directly as matricesgeometrical objects that model distributional data in terms of binary relations between target items and their contexts this results in the development of ad hoc models that lose sight of the fact that different semantic spaces actually rely on the same kind of underlying distributional informationdm instead represents corpusextracted cooccurrences as a thirdorder tensor a ternary geometrical object that models distributional data in terms of word linkword tuplesmatrices are then generated from the tensor in order to perform semantic tasks in the spaces they definecrucially these ondemand matrices are derived from the same underlying resource and correspond to different views of the same data extracted once and for all from a corpusdm is tested here on what we believe to be the most varied array of semantic tasks ever addressed by a single distributional modelin all cases we compare the performance of several dm implementations to stateoftheart resultswhile some of the ad hoc models that were developed to tackle specific tasks do outperform our most successful dm implementation the latter is never too far from the top without any taskspecific tuningwe think that the advantage of having a general model that does not need to be retrained for each new task outweighs the performance advantage of the taskspecific modelsthe article is structured as followsafter framing our proposal within the general debate on cooccurrence modeling in distributional semantics we introduce the dm framework in section 3 and compare it to other unified approaches in section 4section 5 pertains to the specific implementations of the dm framework we will test experimentallythe experiments are reported in section 6section 7 concludes by summarizing what we have achieved and discussing the implications of these results for corpusbased distributional semanticscorpusbased semantics aims at characterizing the meaning of linguistic expressions in terms of their distributional propertiesthe standard view models such properties in terms of twoway structures that is matrices coupling target elements and contextsin fact the formal definition of semantic space provided by pado and lapata is built around the notion of a matrix mbt with b the set of basis elements representing the contexts used to compare the distributional similarity of the target elements t this binary structure is inherently suitable for approaches that represent distributional data in terms of unstructured cooccurrence relations between an element and a contextthe latter can be either documents or lexical collocates within a certain distance from the target we will refer to such models as unstructured dsms because they do not use the linguistic structure of texts to compute cooccurrences and only record whether the target occurs in or close to the context element without considering the type of this relationfor instance an unstructured dsm might derive from a sentence like the teacher eats a red apple that eat is a feature shared by apple and red just because they appear in the same context window without considering the fact that there is no real 
linguistic relation linking eat and red besides that of linear proximityin structured dsms cooccurrence statistics are collected instead in the form of corpusderived triples typically word pairs and the parserextracted syntactic relation or lexicosyntactic pattern that links them under the assumption that the surface connection between two words should cue their semantic relation distributional triples are also used in computational lexicography to identify the grammatical and collocational behavior of a word and to define its semantic similarity spacesfor instance the sketch engine1 builds word sketches consisting of triples extracted from parsed corpora and formed by two words linked by a grammatical relation the number of shared triples is then used to measure the attributional similarity between word pairsstructured models take into account the crucial role played by syntactic structures in shaping the distributional properties of wordsto qualify as context of a target item a word must be linked to it by some lexicosyntactic relation which is also typically used to distinguish the type of this cooccurrencegiven the sentence the teacher eats a red apple structured dsms would not consider eat as a legitimate context for red and would distinguish the object relation connecting eat and apple as a different type of cooccurrence from the modifier relation linking red and appleon the other hand structured models require more preliminary corpus processing and tend to be more sparse what little systematic comparison of the two approaches has been carried out suggests that structured models have a slight edgein our experiments in section 61 herein the performance of unstructured and structured models trained on the same corpus is in general comparableit seems safe to conclude that structured models are at least not worse than unstructured modelsan important conclusion for us as dm is built upon the structured dsm ideastructured dsms extract a much richer array of distributional information from linguistic input but they still represent it in the same way as unstructured modelsthe corpusderived ternary data are mapped directly onto a twoway matrix either by dropping one element from the tuple or more commonly by concatenating two elementsthe two words can be concatenated treating the links as basis elements in order to model relational similarity alternatively pairs formed by the link and one word are concatenated as basis elements to measure attributional similarity among the other words treated as target elements in this way typed dsms obtain finergrained features to compute distributional similarity but by couching distributional information as twoway matrices they lose the high expressive power of corpusderived tripleswe believe that falling short of fully exploiting the potential of ternary distributional structures is the major reason for the lack of unification in corpusbased semanticsthe debate in dsms has so far mostly focused on the context choicefor example lexical collocates vs documents or on the costs and benefits of having structured contexts although we see the importance of these issues we believe that a real breakthrough in dsms can only be achieved by overcoming the limits of current twoway models of distributional datawe propose here the alternative dm approach in which the core geometrical structure of a distributional model is a threeway object namely a thirdorder tensoras in structured dsms we adopt wordlinkword tuples as the most suitable way to capture distributional 
factshowever we extend and generalize this assumption by proposing that once they are formalized as a threeway tensor tuples can become the backbone of a unified model for distributional semanticsdifferent semantic spaces are then generated on demand through the independently motivated operation of tensor matricization mapping the thirdorder tensor onto twoway matricesthe matricization of the tuple tensor produces both familiar spaces similar to those commonly used for attributional or relational similarity and other less known distributional spaces which will yet prove useful for capturing some interesting semantic phenomenathe crucial fact is that all these different semantic spaces are now alternative views of the same underlying distributional objectapparently unrelated semantic tasks can be addressed in terms of the same distributional memory harvested only once from the source corpusthus thanks to the tensorbased representation distributional data can be turned into a general purpose resource for semantic modelingas a further advantage the thirdorder tensor formalization of corpusbased tuples allows distributional information to be represented in a similar way to other types of knowledgein linguistics cognitive science and ai semantic and conceptual knowledge is represented in terms of symbolic structures built around typed relations between elements such as synsets concepts properties and so forththis is customary in lexical networks like wordnet commonsense resources like conceptnet and cognitive models of semantic memory the tensor representation of corpusbased distributional data promises to build new bridges between existing approaches to semantic representation that still appear distant in many respectsthis may indeed contribute to the ongoing efforts to combine distributional and symbolic approaches to meaning we first introduce the notion of a weighted tuple structure the format in which dm expects the distributional data extracted from the corpus to be arranged we then show how a weighted tuple structure can be represented in linear algebraic terms as a labeled thirdorder tensorfinally we derive different semantic vector spaces from the tensor by the operation of labeled tensor matricizationrelations among entities can be represented by ternary tuples or tripleslet o1 and o2 be two sets of objects and are o1 x o2 a set of relations between these objectsa triple expresses the fact that o1 is linked to o2 through the relation r dm includes tuples of a particular type namely weighted distributional tuples that encode distributional facts in terms of typed cooccurrence relations among wordslet w1 and w2 be sets of strings representing content words and l a set of strings representing syntagmatic cooccurrence links between words in a textt c_ w1 x l x w2 is a set of corpusderived tuples t such that w1 cooccurs with w2 and l represents the type of this cooccurrence relationfor instance the tuple in the toy example reported in table 1 encodes the piece of distributional information that marine cooccurs with bomb in the corpus and use specifies the type of the syntagmatic link between these wordseach tuple t has a weight a realvalued score vt assigned by a scoring function 6 w1 x l x w2 4ra weighted tuple structure consists of the set tw of weighted distributional tuples tw for all t e t and 6 vtthe 6 function encapsulates all the operations performed to score the tuples for example by processing an input corpus with a dependency parser counting the occurrences of tuples and 
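As a purely illustrative sketch of the weighted tuple structure just defined, the following Python snippet stores ⟨(w1, link, w2), score⟩ pairs in a small container class. The class name, method names, and the two toy scores (taken from Table 1) are our own choices for illustration, not part of any released DM code.

```python
# A minimal, illustrative container for a weighted tuple structure:
# a set of <(w1, link, w2), score> pairs with real-valued weights.
class WeightedTupleStructure:
    def __init__(self):
        # maps (w1, link, w2) string triples to real-valued scores
        self._weights = {}

    def add(self, w1, link, w2, score):
        self._weights[(w1, link, w2)] = score

    def score(self, w1, link, w2):
        # tuples never extracted from the corpus are treated as weight 0
        return self._weights.get((w1, link, w2), 0.0)

    def items(self):
        return self._weights.items()

# a fragment of the toy structure in Table 1
tw = WeightedTupleStructure()
tw.add("marine", "use", "bomb", 821)
tw.add("teacher", "own", "book", 484)
print(tw.score("marine", "use", "bomb"))   # 821
print(tw.score("marine", "use", "gun"))    # 0.0 (not added in this fragment)
```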
weighting the raw counts by mutual information. Because our focus is on how tuples, once they are harvested, should be represented geometrically, we gloss over the important challenges of choosing the appropriate W1, L, and W2 string sets, as well as specifying σ. In this article we make the further assumption that W1 = W2. This is a natural assumption when the tuples represent co-occurrences of word pairs. Moreover, we enforce an inverse link constraint, such that for any link l in L there is an inverse link l⁻¹ in L such that, for each tuple ⟨⟨w1, l, w2⟩, vt⟩ in the weighted tuple structure TW, the tuple ⟨⟨w2, l⁻¹, w1⟩, vt⟩ is also in TW. Again, this seems reasonable in our context: if we extract a tuple and assign it a certain score, we might as well add the inverse tuple with the same score. The two assumptions combined lead the matricization process described in Section 3.3 to generate exactly four distinct vector spaces that, as we discuss there, are needed for the semantic analyses we conduct (see Section 6.6 of Turney for a discussion of similar assumptions). Still, it is worth emphasizing that the general formalism we are proposing, where corpus-extracted weighted tuple structures are represented as labeled tensors, does not strictly require these assumptions. For example, W2 could be a larger set of relata, including not only words but also documents, morphological features, or even visual features; the inverse link constraint might not be appropriate, for example, if we use an asymmetric association measure, or if we are only interested in one direction of certain grammatical relations. We leave the investigation of all these possibilities to further studies.

Table 1: A toy weighted tuple structure.

word      link  word  weight      word      link  word  weight
marine    own   bomb  400         sergeant  use   gun   519
marine    use   bomb  821         sergeant  own   book  80
marine    own   gun   853         sergeant  use   book  101
marine    use   gun   448         teacher   own   bomb  52
marine    own   book  32          teacher   use   bomb  70
marine    use   book  33          teacher   own   gun   93
sergeant  own   bomb  167         teacher   use   gun   47
sergeant  use   bomb  695         teacher   own   book  484
sergeant  own   gun   734         teacher   use   book  536

DSMs adopting a binary model of distributional information are represented by matrices containing corpus-derived co-occurrence statistics, with rows and columns labeled by the target elements and their contexts. In DM, we formalize the weighted tuple structure as a labeled third-order tensor, from which semantic spaces are then derived through the operation of labeled matricization. Tensors are multi-way arrays, conventionally denoted by boldface Euler script letters (X). The order of a tensor is the number of indices needed to identify its elements. Tensors are a generalization of vectors and matrices. The entries in a vector can be denoted by a single index; vectors are thus first-order tensors, often indicated by a bold lowercase letter v, and the ith element of a vector v is indicated by vi. Matrices are second-order tensors and are indicated with bold capital letters (A); the entry in the ith row and jth column of a matrix A is denoted by aij. An array with three indices is a third-order tensor; the element (i, j, k) of a third-order tensor X is denoted by xijk. A convenient way to display third-order tensors is via nested tables, such as Table 2, where the first index is in the header column, the second index in the first header row, and the third index in the second header row. The entry x321 of the tensor in the table is 70, and the entry x112 is 853. An index has dimensionality I if it ranges over the integers from 1 to I. The dimensionality of a third-order tensor is the product of the dimensionalities of its indices, I x J x K.
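To make the mapping from weighted tuples to a labeled third-order tensor concrete, here is a minimal numpy sketch that builds the toy tensor from the Table 1 tuples. The helper name is our own, and the index labels are simply sorted alphabetically, which is an assumption of this sketch and need not coincide with the label ordering used in Table 2; entries are therefore looked up by label rather than by raw index.

```python
import numpy as np

# Illustrative only: turn a weighted tuple structure (here, a plain dict from
# (w1, link, w2) triples to scores) into a labeled third-order tensor.
def build_labeled_tensor(weights):
    w1_labels = sorted({w1 for w1, l, w2 in weights})
    l_labels = sorted({l for w1, l, w2 in weights})
    w2_labels = sorted({w2 for w1, l, w2 in weights})
    X = np.zeros((len(w1_labels), len(l_labels), len(w2_labels)))
    for (w1, l, w2), v in weights.items():
        X[w1_labels.index(w1), l_labels.index(l), w2_labels.index(w2)] = v
    return X, (w1_labels, l_labels, w2_labels)

# the toy weighted tuple structure of Table 1
toy = {
    ("marine", "own", "bomb"): 400, ("marine", "use", "bomb"): 821,
    ("marine", "own", "gun"): 853, ("marine", "use", "gun"): 448,
    ("marine", "own", "book"): 32, ("marine", "use", "book"): 33,
    ("sergeant", "own", "bomb"): 167, ("sergeant", "use", "bomb"): 695,
    ("sergeant", "own", "gun"): 734, ("sergeant", "use", "gun"): 519,
    ("sergeant", "own", "book"): 80, ("sergeant", "use", "book"): 101,
    ("teacher", "own", "bomb"): 52, ("teacher", "use", "bomb"): 70,
    ("teacher", "own", "gun"): 93, ("teacher", "use", "gun"): 47,
    ("teacher", "own", "book"): 484, ("teacher", "use", "book"): 536,
}
X, (rows, links, cols) = build_labeled_tensor(toy)
print(X.shape)                                                            # (3, 2, 3)
print(X[rows.index("teacher"), links.index("use"), cols.index("bomb")])  # 70.0
```

For example, the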
thirdorder tensor in table 2 has dimensionality 3 x 2 x 3if we fix the integer i as the value of the first index of a matrix a and take the entries corresponding to the full range of values of the other index j we obtain a row vector similarly by fixing the second index to j we obtain the column vector ajgeneralizing a fiber is equivalent to rows and columns in higher order tensors and it is obtained by fixing the values of all indices but onea moden fiber is a fiber where only the nth index has not been fixedfor example in the tensor x of table 2 x11 is a mode1 fiber x23 is a mode2 fiber and x32 is a mode3 fibera weighted tuple structure can be represented as a thirdorder tensor whose entries contain the tuple scoresas for the twoway matrices of classic dsms in order to make tensors linguistically meaningful we need to assign linguistic labels to the elements of the tensor indiceswe define a labeled tensor xλ as a tensor such that for each of its indices there is a onetoone mapping of the integers from 1 to i to i distinct strings that we call the labels of the indexwe will refer herein to the string a uniquely associated to index element i as the label of i their correspondence a labeled thirdorder tensor of dimensionality 3 x 2 x 3 representing the weighted tuple structure of table 1 being indicated by i aa simple way to perform the mappingthe one we apply in the running example of this sectionis by sorting the i items in the string set alphabetically and mapping increasing integers from 1 to i to the sorted stringsa weighted tuple structure tw built from w1 l and w2 can be represented by a labeled thirdorder tensor xλ with its three indices labeled by w1 l and w2 respectively and such that for each weighted tuple t e tw vt there is a tensor entry vtin other terms a weighted tuple structure corresponds to a tensor whose indices are labeled with the string sets forming the triples and whose entries are the tuple weightsgiven the toy weighted tuple structure in table 1 the object in table 2 is the corresponding labeled thirdorder tensormatricization rearranges a higher order tensor into a matrix the simplest case is moden matricization which arranges the moden fibers to be the columns of the resulting dn x dj matrix moden matricization of a thirdorder tensor can be intuitively understood as the process of making vertical horizontal or depthwise slices of a threeway object like the tensor in table 2 and arranging these slices sequentially to obtain a matrix matricization unfolds the tensor into a matrix with the nth index indexing the rows of the matrix and a column for each pair of elements from the other two tensor indicesfor example the mode1 matricization of the tensor in table 2 results in a matrix with the entries vertically arranged as they are in the table but replacing the second and third indices with a single index ranging from 1 to 6 more explicitly in moden matricization we map each tensor entry to matrix entry where j is computed as in equation adapted from kolda and bader for example if we apply mode1 matricization to the tensor of dimensionality 3 x 2 x 3 in table 2 we obtain the matrix a3x6 in table 3 the tensor entry x311 is mapped to the matrix cell a31 x323 is mapped to a36 and x122 is mapped to a14observe that each column of the matrix is a mode1 fiber of the tensor the first column is the x11 fiber the second column is the x21 fiber and so onmatricization has various mathematically interesting properties and practical applications in computations involving tensors in 
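The mode-n matricization just described can be reproduced with a one-line numpy helper: reshaping with Fortran ('F') ordering yields the same column ordering as the Kolda-and-Bader-style index mapping referred to in the text, so that the mode-n fibers become the columns of the unfolded matrix. This is a sketch, not the authors' implementation; the stand-in tensor below is arbitrary.

```python
import numpy as np

def mode_n_matricization(X, n):
    # move the nth index to the front and unfold the rest column-major,
    # so that the mode-n fibers become the columns of the result
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1, order="F")

# any 3 x 2 x 3 tensor will do for illustration (e.g., the toy tensor above)
X = np.arange(1, 19, dtype=float).reshape(3, 2, 3)

A = mode_n_matricization(X, 0)   # 3 x 6 matrix (mode-1 unfolding)
B = mode_n_matricization(X, 1)   # 2 x 9 matrix (mode-2 unfolding)
C = mode_n_matricization(X, 2)   # 3 x 6 matrix (mode-3 unfolding)

# the text's example x122 -> a14, restated in 0-indexed terms:
assert A[0, 3] == X[0, 1, 1]
```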
dm matricization is applied to labeled tensors and it is the fundamental operation for turning the thirdorder tensor representing the weighted tuple structure into matrices whose row and column vector spaces correspond to the linguistic objects we want to study that is the outcome of matricization must be labeled matricestherefore we must define an operation of labeled moden matricizationrecall from earlier discussion that when moden matricization is applied the nth index becomes the row index of the resulting matrix and the corresponding labels do not need to be updatedthe problem is to determine the labels of the column index of the resulting matrixwe saw that the columns of the matrix produced by moden matricization are the moden fibers of the original tensorwe must therefore assign a proper label to moden tensor fibersa moden fiber is obtained by fixing the values of two indices and by taking the tensor entries corresponding to the full range of values of the third indexthus the natural choice for labeling a moden fiber is to use the pair formed by the labels of the two index elements that are fixedspecifically each moden fiber of a tensor xλ is labeled with the binary tuple whose elements are the labels of the corresponding fixed index elementsfor instance given the labeled tensor in table 2 the mode1 fiber x11 is labeled with the pair the mode2 fiber x21 is labeled with the pair and the mode3 fiber x32 is labeled with the pair because moden fibers are the columns of the matrices obtained through moden matricization we define the operation of labeled moden matricization that given a labeled thirdorder tensor xλ maps each entry to the labeled entry such that j is obtained according to equation and λj is the binary tuple obtained from the triple by removing λnfor instance in mode1 matricization the entry in the tensor in table 2 is mapped onto the entry table 3 reports the matrices a b and c respectively obtained by applying labeled mode1 mode2 and mode3 matricization to the labeled tensor in table 2the columns of each matrix are labeled with pairs according to the definition of labeled matricization we just gavefrom now on when we refer to moden matricization we always assume we are performing labeled moden matricizationthe rows and columns of the three matrices resulting from nmode matricization of a thirdorder tensor are vectors in spaces whose dimensions are the corresponding column and row elementssuch vectors can be used to perform all standard linear algebra operations applied in vectorbased semantics measuring the cosine of the angle between vectors applying singular value decomposition to the whole matrix and so onunder the assumption that w1 w2 and the inverse link constraint it follows that for each column of the matrix resulting from mode1 matricization and labeled by there will be a column in the matrix resulting from mode3 matricization that is labeled by and that is identical to the former except possibly for the order of the dimensions similarly for any row w2 in the matrix resulting from mode3 matricization there will be an identical row w1 in the mode1 matricizationtherefore given a weighted tuple structure tw extracted from a corpus and subject to the constraints we just mentioned by matricizing the corresponding labeled thirdorder tensor xλ we obtain the following four distinct semantic vector spaces word by linkword vectors are labeled with words w1 and vector dimensions are labeled with tuples of type wordword by link vectors are labeled with tuples of type and 
vector dimensions are labeled with links l wordlink by word vectors are labeled with tuples of type and vector dimensions are labeled with words w2 link by wordword vectors are labeled with links l and vector dimensions are labeled with tuples of type words like marine and teacher are represented in the w1xlw2 space by vectors whose dimensions correspond to features such as or in this space we can measure the similarity of words to each other in order to tackle attributional similarity tasks such as synonym detection or concept categorizationthe w1w2xl vectors represent instead word pairs in a space whose dimensions are links and it can be used to measure relational similarity among different pairsfor example one could notice that the link vector of is highly similar to that of crucially as can be seen in table 3 the corpusderived scores that populate the vectors in these two spaces are exactly the same just arranged in different waysin dm attributional and relational similarity spaces are different views of the same underlying tuple structurethe other two distinct spaces generated by tensor matricization look less familiar and yet we argue that they allow us to subsume under the same general dm framework other interesting semantic phenomenawe will show in section 63 how the w1lxw2 space can be used to capture different verb classes based on the argument alternations they displayfor instance this space can be used to find out that the object slot of kill is more similar to the subject slot of die than to the subject slot of kill the lxw1w2 space displays similarities among linksthe usefulness of this will of course depend on what the links arewe will illustrate in section 64 one function of this space namely to perform feature selection picking links that can then be used to determine a meaningful subspace of the w1w2xl spacedirect matricization is just one of the possible uses we can make of the labeled tensorin section 65 we illustrate another use of the tensor formalism by performing smoothing through tensor decompositionother possibilities such as graphbased algorithms operating directly on the graph defined by the tensor or deriving unstructured semantic spaces from the tensor by removing one of the indices are left to future workbefore we move on it is worth emphasizing that from a computational point of view there is virtually no additional cost in the tensor approach with respect to traditional structured dsmsthe labeled tensor is nothing other than a formalization of distributional data extracted in the wordlinkwordscore format which is customary in many structured dsmslabeled matricization can then simply be obtained by concatenating two elements in the original triple to build the corresponding matrixagain a common step in building a structured dsmin spite of being costfree in terms of implementation the mathematical formalism of labeled tensors highlights the common core shared by different views of the semantic space thereby making distributional semantics more generalas will be clear in the next sections the ways in which we tackle specific tasks are by themselves mostly not originalthe main element of novelty is the fact that methods originally developed to resort to ad hoc distributional spaces are now adapted to fit into the unified dm frameworkwe will point out connections to related research specific to the various tasks in the sections devoted to describing their reinterpretation in dmwe omit discussion of our own work that the dm framework is an extension and 
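As a concrete rendering of the observation that labeled matricization amounts to concatenating two elements of each triple, the following sketch builds all four labeled spaces directly from the tuple store. The sparse row-to-{column: value} dictionary format and the variable names are our own choices, not anything prescribed by DM.

```python
from collections import defaultdict

def derive_spaces(weights):
    # weights: dict mapping (w1, link, w2) triples to scores
    w1_by_lw2 = defaultdict(dict)   # word       by  link-word
    w1w2_by_l = defaultdict(dict)   # word-word  by  link
    w1l_by_w2 = defaultdict(dict)   # word-link  by  word
    l_by_w1w2 = defaultdict(dict)   # link       by  word-word
    for (w1, l, w2), v in weights.items():
        w1_by_lw2[w1][(l, w2)] = v
        w1w2_by_l[(w1, w2)][l] = v
        w1l_by_w2[(w1, l)][w2] = v
        l_by_w1w2[l][(w1, w2)] = v
    return w1_by_lw2, w1w2_by_l, w1l_by_w2, l_by_w1w2
```

With the toy structure of Table 1, for instance, w1_by_lw2["marine"] is the attributional vector of marine, with dimensions such as ("use", "bomb"), whereas w1w2_by_l[("marine", "bomb")] lives in the relational space whose dimensions are links.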
generalization of baroni et al and baroni and lenci instead we briefly discuss two other studies that explicitly advocate a uniform approach to corpusbased semantic tasks and one article that like us proposes a tensorbased formalization of corpusextracted triplessee turney and pantel for a very recent general survey of dsmspado and lapata partly inspired by lowe have proposed an interesting general formalization of dsmsin their approach a corpusbased semantic model is characterized by a set of functions to extract statistics from the corpus construction of the basisbytargetelements cooccurrence matrix and a similarity function operating on the matrixour focus is entirely on the second aspecta dm according to the characterization in section 3 is a labeled tensor based on a source weighted tuple structure and coupled with matricization operationshow the tuple structure was built is not part of the dm formalizationat the other end dm provides sets of vectors in different vector spaces but it is agnostic about how they are used of course much of the interesting progress in distributional semantics will occur at the two ends of our tensor with better tuple extraction and weighting techniques on one side and better matrix manipulation and similarity measurement on the otheras long as the former operations result in data that can be arranged into a weighted tuple structure and the latter procedures act on vectors such innovations fit into the dm framework and can be used to improve performance on tasks defined on any space derivable from the dm tensorwhereas the model proposed by pado and lapata is designed only to address tasks involving the measurement of the attributional similarity between words turney shares with dm the goal of unifying attributional and relational similarity under the same distributional modelhe observes that tasks that are traditionally solved with an attributional similarity approach can be recast as relational similarity tasksinstead of determining whether two words are for example synonymous by looking at the features they share we can learn what the typical patterns are that connect synonym pairs when they cooccur and make a decision about a potential synonym pair based on their occurrence in similar contextsgiven a list of pairs instantiating an arbitrary relation turneys pairclass algorithm extracts patterns that are correlated with the relation and can be used to discover new pairs instantiating itturney tests his system in a variety of tasks obtaining good results across the boardin the dm approach we collect one set of statistics from the corpus and then exploit different views of the extracted data and different algorithms to tackle different tasksturney on the contrary uses a single generic algorithm but must go back to the corpus to obtain new training data for each new taskwe compare dm with some of turneys results in section 6 but independently of performance we find the dm approach more appealingas corpora grow in size and are enriched with further levels of annotation extracting ad hoc data from them becomes a very timeconsuming operationalthough we did not carry out any systematic experiments we observe that the extraction of tuple counts from corpora in order to train our sample dm models took days whereas even the most timeconsuming operations to adapt dm to a task took on the order of 1 to 2 hours on the same machines similar considerations apply to space compressed our source corpora take about 21 gb our best dm tensor 11 gb perhaps more importantly 
extracting features from the corpus requires a considerable amount of nlp knowhow whereas the dm representation of distributional data as weighted triples is more akin to other standard knowledge representation formats based on typed relations which are familiar to most computer and cognitive scientiststhus a trained dm can become a generalpurpose resource and be used by researchers beyond the realms of the nlp community whereas applying pairclass requires a good understanding of various aspects of computational linguisticsthis severely limits its interdisciplinary appealat a more abstract level dm and pairclass differ in the basic strategy with which unification in distributional semantics is pursuedturneys approach amounts to picking a task and reinterpreting other tasks as its particular instancesthus attributional and relational similarity are unified by considering the former as a subtype of the latterconversely dm assumes that each semantic task may keep its specificity and unification is achieved by designing a sufficiently general distributional structure populating a specific instance of the structure and generating semantic spaces on demand from the latterthis way dm is able to address a wider range of semantic tasks than turneys modelfor instance language is full of productive semantic phenomena such as the selectional preferences of verbs with respect to unseen arguments predicting the plausibility of unseen pairs cannot by definition be tackled by the current version of pairclass which will have to be expanded to deal with such cases perhaps adopting ideas similar to those we present a first step in this direction within a framework similar to turneys was taken by herdaˇgdelen and baroni turney explicitly formalizes the set of corpusextracted wordlinkword triples as a tensor and was our primary source of inspiration in formalizing dm in these termsthe focus of turneys article however is on dimensionality reduction techniques applied to tensors and the application to corpora is only briefly discussedmoreover turney only derives the w1lw2 space from the tensor and does not discuss the possibility of using the tensorbased formalization to unify different views of semantic data which is instead our main pointthe higherorder tensor dimensionality reduction techniques tested on language data by turney and van de cruys can be applied to the dm tensors before matricizationwe present a pilot study in this direction in section 65in order to make our proposal concrete we experiment with three different dm models corresponding to different ways to construct the underlying weighted tuple structure all models are based on the natural idea of extracting wordlinkword tuples from a dependency parse of a corpus but this is not a requirement for dm the links could for example be based on frequent ngrams as in turney and baroni et al or even on very different kinds of relation such as cooccurring within the same documentthe current models are trained on the concatenation of the webderived ukwac corpus2 about 1915 billion tokens a mid2009 dump of the english wikipedia3 about 820 million tokens and the british national corpus4 about 95 million tokensthe resulting concatenated corpus was tokenized postagged and lemmatized with the treetagger5 and dependencyparsed with the maltparser6 it contains about 283 billion tokensthe ukwac and wikipedia sections can be freely downloaded with full annotation from the ukwac corpus sitefor all our models the label sets w1 w2 contain 30693 lemmas these terms were 
selected based on their frequency in the corpus augmenting the list with lemmas that we found in various standard test sets such as the toefl and sat listsin all models the words are stored in possuffixed lemma formthe weighted tuple structures differ for the choice of links in l andor for the scoring function σ depdmour first dm model relies on the classic intuition that dependency paths are a good approximation to semantic relations between words depdm is also the model with the least degree of link lexicalization among the three dm instances we have built ldepdm includes the following nounverb nounnoun and adjectivenoun links sbj intr subject of a verb that has no direct object the teacher is singing 4 the soldier talked with his sergeant 4 sbj tr subject of a verb that occurs with a direct object the soldier is reading a book 4 obj direct object the soldier is reading a book 4 iobj indirect object in a double object construction the soldier gave the woman a book 4 nmod noun modifier good teacher 4 school teacher 4 coord noun coordination teachers and soldiers 4 prd predicate noun the soldier became sergeant 4 verb an underspecified link between a subject noun and a complement noun of the same verb the soldier talked with his sergeant 4 the soldier is reading a book 4 preposition every preposition linking the noun head of a prepositional phrase to its noun or verb head i saw a soldier with the gun 4 the soldier talked with his sergeant 4 for each link we also extract its inverse for example there is a sbj intr1 link between an intransitive verb and its subject the cardinality of ldepdm is 796 including direct and inverse linksthe weights assigned to the tuples by the scoring function σ are given by local mutual information computed on the raw corpusderived wordlinkword cooccurrence countsgiven the cooccurrence count oijk of three elements of interest and the corresponding expected count under independence eijk lmi oijk log oijk eijk lmi is an approximation to the loglikelihood ratio measure that has been shown to be a very effective weighting scheme for sparse frequency counts the measure can also be interpreted as the dominant term of average mi or as a heuristic variant of pointwise mi to avoid its bias towards overestimating the significance of low frequency events and it is nearly identical to the poissonstirling measure lmi has considerable computational advantages in cases like ours in which we measure the association of three elements because it does not require keeping track of the full 2 x 2 x 2 contingency table which is the case for the loglikelihood ratiofollowing standard practice negative weights are raised to 0the number of nonzero tuples in the depdm tensor is about 110m including tuples with direct links and their inversesdepdm is a 30693 x 796 x 30693 tensor with density 00149 lexdmthe second model is inspired by the idea that the lexical material connecting two words is very informative about their relation llexdm contains complex links each with the structure patternsuffixthe suffix is in turn formed by two substrings separated by a each respectively encoding the following features of w1 and w2 their pos and morphological features the presence of an article and of adjectives for nouns the presence of adverbs for adjectives and the presence of adverbs modals and auxiliaries for verbs together with their diatheses if the adjective modifying w1 or w2 belongs to a list of 10 high frequency adjectives the suffix string contains the adjective itself otherwise only its 
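As a brief aside on the scoring function σ, here is a minimal sketch of the LMI weighting described above. The formula LMI = O log(O/E) with negative values raised to 0 follows the text; estimating the expected count E under independence from the marginal counts of the three elements is a modelling assumption of this sketch.

```python
import math
from collections import defaultdict

def lmi_weights(counts):
    """Turn raw (w1, link, w2) co-occurrence counts into Local Mutual
    Information scores: LMI = O * log(O / E), negative values raised to 0.
    E is estimated here from the marginal counts (an assumption)."""
    total = sum(counts.values())
    f_w1, f_l, f_w2 = defaultdict(float), defaultdict(float), defaultdict(float)
    for (w1, l, w2), o in counts.items():
        f_w1[w1] += o
        f_l[l] += o
        f_w2[w2] += o
    weights = {}
    for (w1, l, w2), o in counts.items():
        expected = f_w1[w1] * f_l[l] * f_w2[w2] / (total * total)
        lmi = o * math.log(o / expected)
        weights[(w1, l, w2)] = max(lmi, 0.0)
    return weights
```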
posfor instance from the sentence the tall soldier has already shot we extract the tuple its complex link contains the pattern sbj intr and the suffix nthejvnauxalreadythe suffix substring nthej encodes the information that w1 is a singular noun is definite and has an adjective that does not belong to the list of high frequency adjectivesthe substring vnauxalready specifies that w2 is a pastparticiple has an auxiliary and is modified by already belonging to the preselected list of high frequency adverbsthe patterns in the lexdm links include verb if the verb link between a subject noun and a complement noun belongs to a list of 52 high frequency verbs the underspecified verb link of depdm is replaced by the verb itself the soldier used a gun 4 the soldier read the yellow book 4 is copulative structures with an adjectival predicate the soldier is tall 4 prepositionlink nounpreposition this schema captures connecting expressions such as of a number of in a kind of link noun is one of 48 semimanually selected nouns such as number variety or kind the arrival of a number of soldiers 4 attribute noun one of 127 nouns extracted from wordnet and expressing attributes of concepts such as size color or heightthis pattern connects adjectives and nouns that occur in the templates attribute noun of noun is adj and adj attribute noun of noun the color of strawberries is red 4 the autumnal color of the forest 4 as adj as this pattern links an adjective and a noun that match the template as adj as noun as sharp as a knife 4 such as links two nouns occurring in the templates noun such as noun and such noun as noun animals such as cats4 such vehicles as cars 4 lexdm links have a double degree of lexicalizationfirst the suffixes encode a wide array of surface features of the tuple elementssecondly the link patterns themselves besides including standard syntactic relations extend to lexicalized dependency relations and lexicosyntactic shallow templatesthe latter include patterns adopted in the literature to extract specific pieces of semantic knowledgefor instance noun such as noun and such noun as noun were first proposed by hearst as highly reliable patterns for hypernym identification whereas attribute noun of noun is adj and adj attribute noun of noun were successfully used to identify typical values of concept attributes therefore the lexdm distributional memory is a repository of partially heterogeneous types of corpusderived information differing in their level of abstractness which ranges from fairly abstract syntactic relations to shallow lexicalized patternsllexdm contains 3352148 links including inversesthe scoring function 6 is the same as that in depdm and the number of nonzero tuples is about 355m including direct and inverse linkslexdm is a 30693 x 3352148 x 30693 tensor with density 000001typedmthis model is based on the idea motivated and tested by baroni et al but see also davidov and rappoport for a related methodthat what matters is not so much the frequency of a link but the variety of surface forms that express itfor example if we just look at frequency of cooccurrence the triple is much more common than the semantically more informative however if we count the different surface realizations of the former pattern in our corpus we find that there are only three of them whereas has nine distinct realizations typedm formalizes this intuition by adopting as links the patterns inside the lexdm links while the suffixes of these patterns are used to count their number of distinct surface 
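The type-based counting scheme that typeDM builds on, counting distinct suffix types rather than token occurrences for each word-pattern-word triple, can be sketched as follows; the input format (4-tuples with an explicit suffix field) and the function name are illustrative assumptions.

```python
from collections import defaultdict

def suffix_type_counts(lexdm_tuples):
    """For every (w1, pattern, w2) triple, count the number of *distinct*
    suffix types observed with that pattern; these type counts can then be
    weighted with the LMI function sketched earlier."""
    suffixes = defaultdict(set)
    for w1, pattern, suffix, w2 in lexdm_tuples:
        suffixes[(w1, pattern, w2)].add(suffix)
    return {triple: len(types) for triple, types in suffixes.items()}
```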
realizationswe call the model typedm because it counts types of realizations not tokensfor instance the two lexdm links of1nanthe and of1nsjnthe are counted as two occurrences of the same typedm link of1 corresponding to the pattern in the two original linksthe scoring function 6 computes lmi not on the raw wordlinkword cooccurrence counts but on the number of distinct suffix types displayed by a link when it cooccurs with the relevant wordsfor instance a typedm link derived from a lexdm pattern that occurs with nine different suffix types in the corpus is assigned a frequency of 9 for the purpose of the computation of lmithe distinct typedm links are 25336the number of nonzero tuples in the typedm tensor is about 130m including direct and inverse linkstypedm is a 30693 x 25336 x 30693 tensor with density 00005to sum up the three dm instance models herein differ in the degree of lexicalization of the link set andor in the scoring functionlexdm is a heavily lexicalized model contrasting with depdm which has a minimum degree of lexicalization and consequently the smallest set of linkstypedm represents a sort of middle level both for the kind and the number of linksthese consist of syntactic and lexicalized patterns as in lexdmthe lexical information encoded in the lexdm suffixes however is not used to generate different links but to implement a different counting scheme as part of a different scoring functiona weighted tuple structure is intended as a longterm semantic resource that can be used in different projects for different tasks analogously to traditional handcoded resources such as wordnetcoherent with this approach we make our best dm model publicly available from httpcliccimecunitnitdmthe site also contains a set of perl scripts that perform the basic operations on the tensor and its derived vectors we are about to describethe dm framework provides via matricization a set of matrices with associated labeled row and column vectorsthese labeled matrices can simply be derived from the tuple tensor by concatenating two elements in the original triplesany operation that can be performed on the resulting matrices and that might help in tackling a semantic task is fair gamehowever in the experiments reported in this article we will work with a limited number of simple operations that are wellmotivated in terms of the geometric framework we adopt and suffice to face all the tasks we will deal with vector length and normalizationthe length of a vector v with dimensions v1 v2 vn is a vector is normalized to have length 1 by dividing each dimension by the original vector lengthcosinewe measure the similarity of two vectors x and y by the cosine of the angle they form the cosine ranges from 11 for vectors pointing in the same direction to 0 for orthogonal vectorsother similarity measures such as lins measure work better than the cosine in some tasks however the cosine is the most natural similarity measure in the geometric formalism we are adopting and we stick to it as the default approach to measuring similarityvector sumtwo or more vectors are summed in the obvious way by adding their values on each dimensionwe always normalize the vectors before summingthe resulting vector points in the same direction as the average of the summed normalized vectorswe refer to it as the centroid of the vectorsprojection onto a subspaceit is sometimes useful to measure length or compare vectors by taking only some of their dimensions into accountfor example one way to find nouns that are typical objects of 
the verb to sing is to measure the length of nouns in a w1xlw2 subspace in which only dimensions such as have non0 valueswe project a vector onto a subspace of this kind through multiplication of the vector by a square diagonal matrix with 1s in the diagonal cells corresponding to the dimensions we want to preserve and 0s elsewherea matrix of this sort performs an orthogonal projection of the vector it multiplies as we saw in section 3 labeled matricization generates four distinct semantic spaces from the thirdorder tensorfor each space we have selected a set of semantic experiments that we model by applying some combination of the vector manipulation operations of section 52the experiments correspond to key semantic tasks in computational linguistics andor cognitive science typically addressed by distinct dsms so farwe have also aimed at maximizing the variety of aspects of meaning covered by the experiments ranging from synonymy detection to argument structure and concept properties and encompassing all the major lexical classesboth these facts support the view of dm as a generalized model that is able to overtake stateoftheart dsms in the number and types of semantic issues addressed while being competitive in each specific taskthe choice of the dm semantic space to tackle a particular task is essentially based on the naturalness with which the task can be modeled in that spacehowever alternatives are conceivable both with respect to space selection and to the operations performed on the spacefor instance turney models synonymy detection with a dsm that closely resembles our w1w2l space whereas we tackle this task under the more standard w1lw2 viewit is an open question whether there are principled ways to select the optimal space configuration for a given semantic taskin this article we limit ourselves to proving that each space derived through tensor matricization is semantically interesting in the sense that it provides the proper ground to address some semantic taskfeature selectionreweighting and dimensionality reduction have been shown to improve dsm performancefor instance the feature bootstrapping method proposed by zhitomirskygeffet and dagan boosts the precision of a dsm in lexical entailment recognitioneven if these methods can be applied to dm as well we did not use them in our experimentsthe results presented subsequently should be regarded as a baseline performance that could be enhanced in future work by exploring various taskspecific parameters this is consistent with our current aim of focusing on the generality and adaptivity of dm rather than on taskspecific optimizationas a first important step in this latter direction however we conclude the empirical evaluation in section 65 by replicating one experiment using tensordecompositionbased smoothing a form of optimization that can only be performed within the tensorbased approach to dsmsin order to maximize coverage of the experimental test sets they are preprocessed with a mixture of manual and heuristic procedures to assign a pos to the words they contain lemmatize convert some multiword forms to single words and turn some adverbs into adjectives nevertheless some words are unrecoverable and in such cases we make a random guess in many of the experiments herein dm is not only compared to the results available in the literature but also to our implementation of stateoftheart dsmsthese alternative models have been trained on the same corpus used to build the dm tuple tensorsthis way we aim at achieving a fairer 
comparison with alternative approaches in distributional semantics abstracting away from the effects induced by differences in the training datamost experiments report global test set accuracy to assess the performance of the algorithmsthe number of correctly classified items among all test elements can be seen as a binomially distributed random variable and we follow the acl wiki stateoftheart site7 in reporting also clopperpearson binomial 95 confidence intervals around the accuracies the binomial confidence intervals give a sense of the spread of plausible population values around the testsetbased point estimates of accuracywhere appropriate and interesting we compare the accuracy of two specific models statistically with an exact fisher test on the contingency table of correct and wrong responses given by the two modelsthis approach to significance testing is problematic in many respects the most important being that we ignore dependencies in correct and wrong counts due to the fact that the algorithms are evaluated on the same test set more appropriate tests however would require access to the fully itemized results from the compared algorithms whereas in most cases we only know the point estimate reported in the earlier literaturefor similar reasons we do not make significance claims regarding other performance measures such as macroaveraged f other forms of statistical analysis of the results are introduced herein when they are used they are mostly limited to the models for which we have full access to the resultsnote that we are interested in whether dm performance is overall within stateoftheart range and not on making precise claims about the models it outperformsin this respect we think that our general results are clear even where they are not supported by statistical inference or interpretation of the latter is problematicthe vectors of this space are labeled with words w1 and their dimensions are labeled with binary tuples of type the dimensions represent the attributes of words in terms of lexicosyntactic relations with lexical collocates such as or consistently all the semantic tasks that we address with this space involve the measurement of the attributional similarity between wordsthe w1xlw2 matrix is a structured semantic space similar to those used by curran and moens grefenstette and lin among othersto test if the use of links detracts from performance on attributional similarity tasks we trained on our concatenated corpus two alternative modelswin and dvwhose features only include lexical collocates of the targetwin is an unstructured dsm that does not rely on syntactic structure to select the collocates but just on their linear proximity to the targets its matrix is based on cooccurrences of the same 30k words we used for the other models within a window of maximally five content words before or after the targetdv is an implementation of the dependency vectors approach of pado and lapata it is a structured dsm but dependency paths are used to pick collocates without being part of the attributesthe dv model is obtained from the same cooccurrence data as depdm frequencies are summed across dependency path links for wordlinkword triples with the same first and second wordssuppose that soldier and gun occur in the tuples and in depdm this results in two features for soldier and in dv we would derive a single gun feature with frequency 40as for the dm models the win and dv counts are converted to lmi weights and negative lmi values are raised to 0win is a 30693 x 30693 
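The DV construction just described amounts to summing depDM frequencies over the links. A sketch follows; note that the link names and the two counts are invented for illustration (the exact figures behind the 40 of the soldier/gun example are not reported in the text).

```python
from collections import defaultdict

def collapse_links(dep_counts):
    """Collapse (w1, link, w2) dependency counts into a DV-style
    word-by-collocate matrix by summing over the links."""
    dv = defaultdict(float)
    for (w1, link, w2), freq in dep_counts.items():
        dv[(w1, w2)] += freq
    return dv

# hypothetical counts that merely sum to the 40 of the soldier/gun example
dep = {("soldier", "with", "gun"): 15, ("soldier", "obj", "gun"): 25}
print(collapse_links(dep)[("soldier", "gun")])   # 40.0
```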
matrix with about 110 million nonzero entries dv is a 30693 x 30693 matrix with about 38 million nonzero values 611 similarity judgmentsour first challenge comes from the classic data set of rubenstein and goodenough consisting of 65 noun pairs rated by 51 subjects on a 04 similarity scalethe average rating for each pair is taken as an estimate of the perceived similarity between the two words following the earlier literature we use pearsons r to evaluate how well the cosines in the w1lw2 space between the nouns in each pair correlate with the ratingsthe results are presented in table 4 which also reports stateoftheart performance levels of corpusbased systems from the literature one of the dm models namely typedm does very well on this task outperformed only by doublecheck an unstructured system that relies on web queries and for which we report the best result across parameter settingswe also report the best results from a range of experiments with different models and parameter settings from herdaˇgdelen erk and baroni and pado and lapata for the latter we also report the best result they obtain when using cosine as the similarity measure overall the typedm result is in line with the state of the art given the size of the input corpus and the fact that we did not perform any tuningfollowing pado pado and erk we used the approximate test proposed by raghunathan to compare the correlations with the human ratings of sets of models the test suggests that the difference in correlation with human ratings between typedm and our second best model win is significant on the other hand there is no significant difference across win depdm dv and lexdm late quantitative similarity ratingsthe classic toefl synonym detection task focuses on the high end of the similarity scale asking the models to make a discrete decision about which word is the synonym from a set of candidatesthe data set introduced to computational linguistics by landauer and dumais consists of 80 multiplechoice questions each made of a target word and four candidatesfor example given the target levied the candidates are imposed believed requested correlated the first one being the correct choiceour algorithms pick the candidate with the highest cosine to the target item as their guess of the right synonymtable 5 reports results on the toefl set for our models as well as the best model of herdaˇgdelen and baroni and the corpusbased models from the acl wiki toefl stateoftheart table the claims to follow about the relative performance of the models must be interpreted cautiously in light of the spread of the confidence intervals it suffices to note that according to a fisher test the difference between the secondbest model glsa and the twelfth model pmiir01 is not significant at the α 05 level the difference between the bottom model lsa97 and random guessing is on the other hand highly significant the best dm model is again typedm which also outperforms turneys unified pairclass approach as well as his webstatistics based pmiir01 modeltypedm does better than the best pado and lapata model and comparably to our dv implementationits accuracy is more than 10 higher than the average human test taker and the classic lsa model among the approaches that outperform typedm bagpack is supervised and cwo and pmiir03 rely on much larger corporathis leaves us with three unsupervised models from the literature that outperform typedm while being trained on comparable or smaller corpora lsa03 glsa and ppmicin all three cases the authors show that 
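The decision rule used in the TOEFL experiment, picking the candidate with the highest cosine to the target, is trivial to state in code; function and argument names are illustrative, and the vectors are assumed to live in the word by link-word space.

```python
import numpy as np

def choose_synonym(target_vec, candidate_vecs):
    # candidate_vecs: dict mapping candidate words to their vectors
    def cos(x, y):
        return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
    return max(candidate_vecs, key=lambda w: cos(target_vec, candidate_vecs[w]))
```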
parameter tuning is beneficial in attaining the reported best performancefurther work should investigate how we could improve typedm by exploring various parameter settings 613 noun categorizationhumans are able to group words into classes or categories depending on their meaning similaritiescategorization tasks play a prominent role in cognitive research on concepts and meaning as a probe into the semantic organization of the lexicon and the ability to arrange concepts hierarchically into taxonomies research in corpusbased semantics has always been interested in investigating whether distributional similarity could be used to group words into semantically coherent categoriesfrom the computational point of view this is a particularly crucial issue because it concerns the possibility of using distributional information to assign a semantic class or type to wordscategorization requires a discrete decision as in the toefl task but it is based on detecting not only synonyms but also less strictly related words that stand in a coordinatecohyponym relationwe focus here on noun categorization which we operationalize as a clustering taskdistributional categorization has been investigated for other pos as well most notably verbs however verb classifications are notoriously more controversial than nominal ones and deeply interact with argument structure propertiessome experiments on verb classification will be carried out in the w1lw2 space in section 63because the task of clustering conceptswords into superordinates has recently attracted much attention we have three relevant data sets from the literature available for our teststhe almuharebpoesio set includes 402 concepts from wordnet balanced in terms of frequency and ambiguitythe concepts must be clustered into 21 classes each selected from one of the 21 unique wordnet beginners and represented by between 13 and 21 nounsexamples include the vehicle class the motivation class and the social unit class see almuhareb for the full setthe battig test set introduced by baroni et al is based on the expanded battig and montague norms of van overschelde rawson and dunlosky the set comprises 83 concepts from 10 common concrete categories with the concepts selected so that they are rated as highly prototypical of the classclass examples include land mammals tools and fruit see baroni et al for the full listfinally the esslli 2008 set was used for one of the distributional semantic workshop shared tasks it is also based on concrete nouns but it includes fewer prototypical members of categories the 44 target concepts are organized into a hierarchy of classes of increasing abstractionthere are 6 lower level classes with maximally 13 concepts per class at a middle level concepts are grouped into three classes at the most abstract level there is a twoway distinction between living beings and objectssee httpwordspace collocationsde for the full setwe cluster the nouns in each set by computing their similarity matrix based on pairwise cosines and feeding it to the widely used cluto toolkit we use clutos builtin repeated bisections with global optimization method accepting all of clutos default values for this methodcluster quality is evaluated by percentage purity one of the standard clustering quality measures returned by cluto if nir is the number of items from the ith true class that were assigned to the rth cluster n the total number of items and k the number of clusters then expressed in words for each cluster we count the number of items that belong to the 
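The percentage purity measure just introduced sums, over clusters, the count of the best-represented true class and divides by the total number of items. A minimal sketch, assuming the clustering and the gold standard are given as two parallel lists:

```python
from collections import Counter

def purity(cluster_ids, gold_labels):
    """Purity = (1/n) * sum over clusters of the size of the largest
    true-class group within that cluster."""
    n = len(gold_labels)
    total = 0
    for r in set(cluster_ids):
        members = [g for g, c in zip(gold_labels, cluster_ids) if c == r]
        total += Counter(members).most_common(1)[0][1]
    return total / n
```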
true class that is most represented in the cluster and then we sum these counts across clustersthe resulting sum is divided by the total number of items so that in the best case purity will be 1 as cluster quality deteriorates purity approaches 0for the models where we have full access to the results we use a heuristic bootstrap procedure to obtain confidence intervals around the purities we resample with replacement 10k data sets of the original sizeempirical 95 confidence intervals are then computed from the distribution of the purities in the bootstrapped data sets the confidence intervals give a rough idea of how stable purity estimates are across small variations of the items in the data setsthe random models for this task are baselines assigning the concepts randomly to the target clusters with the constraint that each cluster must contain at least one conceptrandom assignment is repeated 10k times and we obtain means and confidence intervals from the distribution of these simulationstable 6 reports purity results for the three data sets comparing our models to those in the literatureagain the typedm model has an excellent performanceon the esslli 2008 set it outperforms the best configuration of the best shared task system among those that did threelevel categorization despite the fact that the latter uses the full web as a corpus and manually crafted patterns to improve feature extractiontypedms performance is equally impressive on the ap set where it outperforms attrvalue05 the best unsupervised model by the data set proponents trained on the full webinterestingly the deppath model of rothenhausler and schutze which is the only one outperforming typedm on the ap set is another structured model with dependencybased linkmediated features which would fit well into the dm frameworktypedms purity is extremely high with the battig set as well although here it is outperformed by the unstructured win modelour top two performances are higher than strudel the best model by the proponents of the taskthe latter was trained on about half of the data we used however 614 selectional preferencesour last pair of data sets for the w1xlw2 space illustrate how the space can be used not only to measure similarity among words but also to work with more abstract notions such as that of a typical filler of an argument slot of a verb we think that these are especially important experiments because they show how the same matrix that has been used for tasks that were entirely bound to lexical items can also be used to generalize to structures that go beyond what is directly observed in the corpusin particular we model here selectional preferences but our method is generalizable to many other semantic tasks that pertain to composition constraints that is they require measuring the goodness of fit of a wordconcept as argument filler of another wordconcept including assigning semantic roles logical metonymy coercion and many other challengesthe selectional preference test sets are based on averages of human judgments on a sevenpoint scale about the plausibility of nouns as arguments of verbsthe mcrae data set consists of 100 nounverb pairs rated by 36 subjectsthe pado set has 211 pairs rated by 20 subjectsfor each verb we first use the w1xlw2 space to select a set of nouns that are highly associated with the verb via a subject or an object linkin this space nouns are represented as vectors with dimensions that are labeled with tuples where the word might be a verb and the link might stand for among other 
things syntactic relations such as obj to find nouns that are highly associated with a verb v when linked by the subject relation we project the w1xlw2 vectors onto a subspace where all dimensions are mapped to 0 except the dimensions that are labeled with where lsbj is a link containing either the string sbj intr or the string sbj tr and v is the verbwe then measure the length of the noun vectors in this subspace and pick the top n longest ones as prototypical subjects of the verbthe same operation is performed for the object relationin our experiments we set n to 20 but this is of course a parameter that should be exploredwe normalize and sum the vectors of the picked nouns to obtain a centroid that represents an abstract subject prototype for the verb the plausibility of an arbitrary noun as the subject of a verb is then measured by the cosine of the noun vector to the subject centroid in w1xlw2 spacecrucially the algorithm can provide plausibility scores for nouns that do not cooccur with the target verb in the corpus by looking at how close they are to the centroid of nouns that do often cooccur with the verbthe corpus may contain neither eat topinambur nor eat sympathy but the topinambur vector will likely be closer to the prototypical eat object vector than the one of sympathy would beit is worth stressing that the whole process relies on a single w1xlw2 matrix this space is first used to identify typical subjects of a verb via subspacing then to construct centroid vectors for the verb subject prototypes and finally to measure the distance of nouns to these centroidsour method is essentially the same save for implementation and parameter choice details as the one proposed by pado pado and erk in turn inspired by erk however they treat the identification of typical argument fillers of a verb as an operation to be carried out using different resources whereas we reinterpret it as a different way to use the same w1lw2 space in which we measure plausibilityfollowing pado and colleagues we measure performance by the spearman p correlation coefficient between the average human ratings and the model predictions considering only verbnoun pairs that are present in the modeltable 7 reports percentage coverage and correlations for the dm models results from pado pado and erk and the performance on the pado data set of the supervised system of herdaˇgdelen and baroni testing for significance of the correlation coefficients with twotailed tests based on a spearmancoefficient derived t statistic we find that the resniks model correlation for the mcrae data is not significantly different from 0 parcos on mcrae is significant at α 05 and all other models on either data set are significant at α 01 and belowtypedm emerges as an excellent model to tackle selectional preferences and as the overall winner on this taskon the pado data set it is as good as pados framenet based model and it is outperformed only by the supervised bagpack approachon the mcrae data set all three dm models do very well and typedm is slightly worse than the other two modelson this data set the dm models are outperformed by pados framenet model in terms of correlation but the latter has a much lower coverage suggesting that for practical purposes the dm models are a better choiceaccording to raghunathans test the difference in correlation with human ratings among the three dm models is not significant on the mcrae data where typedm is below the other models on the pado data set on the other hand where typedm outperforms the 
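A hedged sketch of the prototype-based plausibility score just described, assuming a dense noun-by-⟨link, word⟩ matrix and illustrative link labels such as sbj_intr / sbj_tr; the actual model works on the sparse w1×lw2 matricization, so this only shows the subspace-projection, centroid and cosine steps.

```python
import numpy as np

def subject_prototype_scorer(M, nouns, columns, verb, n=20):
    """M: noun-by-(link, word) co-occurrence matrix (dense here for simplicity);
    nouns: row labels; columns: list of (link, word) column labels.
    Returns a function scoring any noun as a plausible subject of `verb`
    by its cosine to the centroid of the verb's n prototypical subjects."""
    # keep only the <sbj_*, verb> dimensions; all other dimensions are mapped to 0
    keep = np.array([w == verb and link.startswith("sbj") for link, w in columns])
    lengths = np.linalg.norm(M * keep, axis=1)    # length in the subject-of-verb subspace
    top = np.argsort(-lengths)[:n]                # n longest vectors = prototypical subjects
    unit = M[top] / np.linalg.norm(M[top], axis=1, keepdims=True)
    centroid = unit.sum(axis=0)                   # abstract subject prototype for the verb
    centroid /= np.linalg.norm(centroid)

    def score(noun):
        v = M[nouns.index(noun)]
        return float(v @ centroid / np.linalg.norm(v))  # plausibility = cosine to the centroid
    return score
```

The same scorer works for the object relation by changing the link test in `keep`.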
other dm models the same difference is highly significant as a final remark on the w1lw2 space we can notice that dm models perform very well in tasks involving attributional similaritythe performance of unstructured dsms is also high sometimes even better than that of structured dsmshowever our best dm model also achieves brilliant results in capturing selectional preferences a task that is not directly addressable by unstructured dsmsthis fact suggests that the real advantage provided by structured dsmsparticularly when linguistic structure is suitably exploited as with the dm thirdorder tensoractually resides in their versatility in addressing a much larger and various range of semantic tasksthis preliminary conclusion will also be confirmed by the experiments modeled with the other dm spacesthe vectors of this space are labeled with word pair tuples and their dimensions are labeled with links l this arrangement of our tensor reproduces the relational similarity space of turney also implicitly assumed in much relation extraction work where word pairs are compared based on the patterns that link them in the corpus in order to measure the similarity of their relations the links that in w1xlw2 space provide a form of shallow typing of lexical features associated with single words constitute under the w1w2xl view full features associated with word pairs besides exploiting this view of the tensor to solve classic relational tasks we will also show how problems that have not been traditionally defined in terms of a wordpairbylink matrix such as qualia harvesting with patterns or generating lists of characteristic properties can be elegantly recast in the w1w2xl space by measuring the length of vectors in a link space thus bringing a wider range of semantic operations under the umbrella of the natural dm spacesthe w1w2xl space represents pairs of words that cooccur in the corpus within the maximum span determined by the scope of the links connecting them when words do not cooccur or only cooccur very rarely attributional similarity can come to the rescuegiven a target pair we can construct other probably similar pairs by replacing one of the words with an attributional neighborfor example given the pair we might discover in w1xlw2 space that car is a close neighbor of automobilewe can then look for the pair and use relational evidence about this pair as if it pertained to this is essentially the way to deal with w1w2xl data sparseness proposed by turney except that he relies on independently harvested attributional and relational spaces whereas we derive both from the same tensormore precisely in the w1w2xl tasks where we know the set of target pairs in advance we smooth the dm models by combining in turn one of the words of each target pair with the top 20 nearest w1xlw2 neighbors of the other word obtaining a total of 41 pairs the centroid of the w1w2xl vectors of these pairs is then taken to represent a target pair vector is an average of the etc vectorsthe nearest neighbors are efficiently searched in the w1xlw2 matrix by compressing it to 5000 dimensions via random indexing using the parameters suggested by sahlgren smoothing consistently improved performance and we only report the relevant results for the smoothed versions of the models we reimplemented turneys latent relational analysis model training it on our source corpus we chose the parameter values of turneys main model in short for a given set of target pairs we count all the patterns that connect them in either order in the 
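The attributional-neighbor smoothing of pair vectors can be sketched as follows; `pair_vectors` and `nearest_neighbors` are hypothetical helpers standing in for lookups into the w1w2×l and w1×lw2 spaces respectively.

```python
import numpy as np

def smoothed_pair_vector(w1, w2, pair_vectors, nearest_neighbors, k=20):
    """Centroid of the w1w2xl vectors of (w1, w2) plus the pairs obtained by
    replacing one word at a time with its top-k attributional neighbors
    (1 + k + k = 41 candidate pairs for k = 20), as described above."""
    candidates = [(w1, w2)]
    candidates += [(n, w2) for n in nearest_neighbors(w1, k)]
    candidates += [(w1, n) for n in nearest_neighbors(w2, k)]
    vecs = [pair_vectors[p] for p in candidates if p in pair_vectors]
    if not vecs:
        return None          # no relational evidence even after smoothing
    return np.mean(vecs, axis=0)
```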
corpuspatterns are sequences of one to three words occurring between the targets with all none or any subset of the elements replaced by wildcards only the top 4000 most frequent patterns are preserved and a targetpairbypattern matrix is constructed values in the matrix are log and entropytransformed using turneys formulafinally svd is applied reducing the columns to the top 300 latent dimensions for simplicity and to make lra more directly comparable to the dm models we applied our attributionalneighborbased smoothing technique instead of the more sophisticated one used by turneythus our lra implementation differs from turneys original in two aspects the smoothing method and the source corpus neither variation pertains to inherent differences between lra and dmgiven the appropriate resources a dm model could be trained on turneys gigantic corpus and smoothed with his technique621 solving analogy problemsthe sat test set introduced by turney and collaborators contains 374 multiplechoice questions from the sat college entrance exameach question includes one target and five candidate analogies the data set is dominated by nounnoun pairs but all other combinations are also attested the task is to choose the candidate pair most analogous to the target this is essentially the same task as the toefl but applied to word pairs instead of wordsas in the toefl we pick the candidate with the highest cosine with the target as the right analogytable 8 reports our sat results together with those of other corpusbased methods from the acl wiki and other systemstypedm is again emerging as the best among our modelsto put its performance in context statistically according to a fisher test its accuracy is not significantly different from that of vsm whereas it is better than that of pmiir06 typedm is at least as good as lra when the latter is trained on the same data and smoothed with our method suggesting that the excellent performance of turneys version of lra is due to the fact that he used a much larger corpus andor to his more sophisticated smoothing technique and not to the specific way in which lra collects corpusbased statisticsall the algorithms with higher accuracy than typedm are based on much larger input corpora except bagpack which is however supervisedthe lsa system of quesada mangalath and kintsch which performs similarly to typedm is based on a smaller corpus but it relies on handcoded analogy domains that are represented by lists of manually selected characteristic words622 relation classificationjust as the sat is the relational equivalent of the toefl task the test sets we tackle next are a relational analog to attributional concept clustering in that they require grouping pairs of words into classes that instantiate the same relationswhereas we cast attributional categorization as an unsupervised clustering problem the common approach to classifying word pairs by relation is supervised and relies on labeled examples for trainingin this article we exploit training data in a very simple way via a nearest centroid methodin the semeval task we are about to introduce where both positive and negative examples are available for each class we use the positive examples to construct a centroid that represents a target class and negative examples to construct a centroid representing items outside the classwe then decide if a test pair belongs to the target class by measuring its distance from the positive and negative centroids picking the nearest onefor example the becauseeffect relation has 
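Once (smoothed) pair vectors are available, answering a SAT item reduces to a cosine comparison; the short sketch below assumes the vectors are numpy arrays, and the candidate pair names are only illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def answer_analogy(target_vec, candidate_vecs):
    """candidate_vecs: dict mapping a candidate pair, e.g. ('car', 'garage'),
    to its relational vector; returns the pair most analogous to the target,
    i.e. the candidate with the highest cosine to the target pair vector."""
    return max(candidate_vecs, key=lambda pair: cosine(target_vec, candidate_vecs[pair]))
```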
positive training examples such as cyclinghappiness and massagerelief and negative examples such as customersatisfaction and exposureprotectionwe create a positive centroid by summing the w1w2l vectors of the first set of pairs and a negative centroid by summing the latterwe then measure the cosine of a test item such as smilewrinkle with the centroids and decide if it instantiates the becauseeffect relation based on whether it is closer to the positive or negative centroidfor the other tasks we do not have negative examples but positive examples for different classeswe create a centroid for each class and classify test items based on the centroid they are nearest toour first test pertains to the seven relations between nominals in task 4 of semeval 2007 becauseeffect instrumentagency product producer originentity themetool partwhole contentcontainerfor each relation the data set includes 140 training and about 80 test itemseach item consists of a web snippet containing word pairs connected by a certain pattern the retrieved snippets are manually classified by the semeval organizers as positive or negative instances of a certain relation about 50 training and test cases are positive instancesin our experiments we do not make use of the contexts of the target word pairs that are provided with the test setthe second data set comes from nastase and szpakowicz it pertains to the classification of 600 modifiernoun pairs and it is of interest because it proposes a very finegrained categorization into 30 semantic classes such as because purpose locationat locationfrom frequency timeat and so onthe modifiers can be nouns adjectives or adverbsbecause the data set is not split into training and test data we follow turney and perform leaveoneout crossvalidationthe data set also comes with a coarser fiveway classificationour unreported results on it are comparable in terms of relative performance to the ones for the 30way classificationthe last data set contains 1443 nounnoun compounds classified by o seaghdha and copestake into 6 relations be have in actor instrument and about see o seaghdha and copestake and references therewe use the same fiveway crossvalidation splits as the data set proponentstable 9 reports performance of models from our experiments and from the literature on the three supervised relation classification tasksfollowing the relevant earlier studies for semeval we report macroaveraged accuracy whereas for the other two data sets we report global accuracy all other measures are macroaveragedmajority is the performance of a classifier that always guesses the majority class in the test set alltrue always assigns an item to the target class probmatch randomly guesses classes matching their distribution in the test data for semeval the table reports the results of those models that took part in the shared task and like ours did not use the organizerprovided wordnet sense labels nor information about the query used to retrieve the examplesall these models are outperformed by typedm despite the fact that they exploit the training contexts andor specific additional resources an annotated compound database more sophisticated machine learning algorithms to train the relation classifiers web counts and so onfor the ns data set none of the dm models do well although typedm is once more the best among themthe dm models are outperformed by other models from the literature all trained on much larger corpora and also by our implementation of lrathe difference in global accuracy between lra and 
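A minimal version of the nearest-centroid classifier used in these relation classification experiments might look as follows; in the SemEval binary setting the two classes are simply the positive and negative examples of a relation, in the other tasks there is one centroid per relation class.

```python
import numpy as np

def train_centroids(train_vecs, train_labels):
    """train_vecs: 2-D array of relational pair vectors; train_labels: class names.
    Returns one length-normalized centroid per class."""
    centroids = {}
    for label in set(train_labels):
        c = np.asarray([v for v, l in zip(train_vecs, train_labels) if l == label]).sum(axis=0)
        centroids[label] = c / np.linalg.norm(c)
    return centroids

def classify(vec, centroids):
    """Assign a test pair to the class of the nearest centroid by cosine."""
    v = vec / np.linalg.norm(vec)
    return max(centroids, key=lambda label: float(v @ centroids[label]))
```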
typedm is significant typedms accuracy is nevertheless well above the best baseline accuracy the oc results confirm that typedm is the best of our models again outperforming our lra implementationstill our best performance is well below that of occomb the absolute best and ocrel the best purely relational model of o seaghdha and copestake o seaghdha and copestake use sophisticated kernelbased methods and extensive parameter tuning to achieve these resultswe hope that the typedm performance would also improve by improving the machine learning aspects of the procedureas an ad interim summary we observe that typedm achieves competitive results in semantic tasks involving relational similarityin particular in both analogy solving and two out of three relation classification experiments typedm is at least as good as our lra implementationwe now move on to show how this same view of the dm tensor can be successfully applied to aspects of meaning that are not normally addressed by relational dsms623 qualia extractiona popular alternative to the supervised approach to relation extraction is to pick a set of lexicosyntactic patterns that should capture the relation of interest and to harvest pairs they connect in text as famously illustrated by hearst for the hyponymy relationin the dm approach instead of going back to the corpus to harvest the patterns we exploit the information already available in the w1w2l spacewe select promising links as our equivalent of patterns and we measure the length of word pair vectors in the w1w2l subspace defined by these linkswe illustrate this with the data set of cimiano and wenderoth which contains qualia structures for 30 nominal concepts both concrete and abstract cimiano and wenderoth asked 30 subjects to produce qualia for these words obtaining a total of 1487 word quale pairs instantiating the four roles postulated by pustejovsky formal constitutive agentive and telic we approximate the patterns proposed by cimiano and wenderoth by manually selecting links that are already in our dm models as reported in table 10 all qualia roles have links pertaining to nounnoun pairsthe agentive and telic patterns also harvest nounverb pairsfor lexdm we pick all links that begin with one of the strings in table 10for the depdm model the only attested links are n with q n sbj intr q n sbj tr q and q obj n consequently depdm does not harvest formal qualia and is penalized accordingly in the evaluationwe project all w1w2xl vectors that contain a target noun onto each of the four subspaces determined by the qualespecific link sets and we compute their subspace lengthsgiven a target noun n and a potential quale q the length of the vector in the subspace characterized by the links that represent role r is our measure of how good q is as a quale of type r for n in the subspace defined by the telic links is our measure of fitness of read as telic role of bookwe use length in the subspace associated to the qualia role r to rank all pairs relevant to r following cimiano and wenderoths evaluation method for each noun we first compute separately for each role the ranked list precision at 11 equally spaced recall levels from 0 to 100we select the precision recall and f values at the recall level that results in the highest f score we then average across the roles and then across target nounsthe task as framed here cannot be run with the lra model and because of its openended nature we do not smooth the modelstable 11 reports the performance of our models as well as the f scores 
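The core scoring step for qualia harvesting, ranking the candidate qualia of a noun by vector length in the link subspace associated with a role, can be sketched as below; the matrix layout and the link names are assumptions made for illustration only.

```python
import numpy as np

def rank_qualia(pair_matrix, pair_labels, link_labels, role_links, target_noun):
    """pair_matrix: (n_pairs x n_links) w1w2xl matrix; pair_labels: (w1, w2) tuples;
    role_links: the set of links taken to express one qualia role.
    Ranks candidate qualia of target_noun by the length of their vectors in
    the subspace spanned by role_links."""
    keep = np.array([l in role_links for l in link_labels])
    scored = []
    for (w1, w2), row in zip(pair_labels, pair_matrix):
        if w1 != target_noun:
            continue
        scored.append((w2, np.linalg.norm(row * keep)))   # length in the role subspace
    return sorted(scored, key=lambda x: -x[1])

# e.g. rank_qualia(M, pairs, links, {"obj", "sbj_tr"}, "book") might rank
# "read" and "write" high as telic qualia of "book" (illustrative only)
```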
reported by cimiano and wenderothfor our models where we have access to the itemized data we also report the standard deviation of f across the target nounsall the dm models perform well and once more typedm emerges as the best among them with an f value that is also above the best cimiano and wenderoth models despite the large standard deviations the difference in f across concepts between typedm and the secondbest dm model is highly significant suggesting that the large variance is due to different degrees of difficulty of the concepts affecting the models in similar wayslinks approximating the patterns proposed in cimiano and wenderoth automated generation of commonsense concept descriptions in terms of intuitively salient properties a dog is a mammal it barks it has a tail and so forth similar property lists collected from subjects in elicitation tasks are widely used in cognitive science as surrogates of mental features largescale collections of propertybased concept descriptions are also carried out in ai where they are important for commonsense reasoning in the qualia task given a concept we had to extract properties of certain kinds the propertybased description task is less constrained because the most salient relations of a nominal concept might be in all sorts of relations with it still we couch the task of unconstrained property extraction as a challenge in the w1w2xl spacethe approach is similar to the method adopted for qualia roles but now the whole w1w2xl space is used instead of selected subspacesgiven all the pairs that have the target nominal concept as first element we rank them by length in the w1w2xl spacethe longest vectors in this space should correspond to salient properties of the target concept as we expect a concept to often cooccur in texts with its important properties for example among the longest w1w2xl vectors with car as first item we find and the first two pairs are normalized by dividing by the longest vector in the harvested set the third by dividing by the longest vectorwe test this approach in the esslli 2008 distributional semantic workshop unconstrained property generation challenge the data set contains for each of 44 concrete concepts 10 properties that are those that were most frequently produced by subjects in the elicitation experiment of mcrae et al algorithms must generate lists of 10 properties per concept and performance is measured by overlap with the subjectproduced properties that is by the crossconcept average proportions of properties in the generated lists that are also in the corresponding gold standard listssmoothing would be very costly and probably counterproductive because lra requires a priori specification of the target pairs it is not well suited to this tasktable 12 reports the percentage overlap with the gold standard properties for our models as well as the only esslli 2008 participant that tried this task and for the models of baroni et al typedm is the best dm model and it also does quite well compared to the state of the artthe difference between strudel the best model from the earlier literature and typedm is not statistically significant according to a paired ttest across the target concepts the difference between typedm and dv10 the second best model from the literature is highly significant if we consider how difficult this sort of openended task is matching on average two out of ten speakergenerated properties as typedm does is an impressive featthe vectors of this space are labeled with binary tuples of type and 
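Property generation then amounts to ranking, for a target concept, all ⟨concept, w2⟩ pairs by the length of their vectors in the full w1w2×l space and keeping the top ten second members; a short sketch under the same illustrative matrix layout as above:

```python
import numpy as np

def top_properties(pair_matrix, pair_labels, concept, n=10):
    """Return the n second members of the longest w1w2xl vectors whose first
    member is `concept` -- the unconstrained property description recipe above."""
    scored = [(w2, np.linalg.norm(row))
              for (w1, w2), row in zip(pair_labels, pair_matrix)
              if w1 == concept]
    return [w for w, _ in sorted(scored, key=lambda x: -x[1])[:n]]
```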
their dimensions are labeled with words w2 we illustrate this space in the task of discriminating verbs participating in different argument alternationshowever other uses of the space can also be foreseenfor example the rows of w1lxw2 correspond to the columns of the w1xlw2 space we could use the former space for feature smoothing or selection in the latter space for example by merging the features of w1xlw2 whose corresponding vectors in w1lxw2 have a cosine similarity over a given thresholdwe leave this possibility to further workamong the linguistic objects represented by the w1lxw2 vectors we find the syntactic slots of verb framesfor instance the vector labeled with the tuple represents the subject slot of the verb read in terms of the distribution of its noun fillers which label the dimensions of the spacewe can use the w1lxw2 space to explore the semantic properties of syntactic frames and to extract generalizations about the inner structure of lexicosemantic representations of the sort formal semanticists have traditionally been interested infor instance the high similarity between the object slot of kill and the subject slot of die might provide a distributional correlate to the classic because analysis of killing by dowty and many othersmeasuring the cosine between the vectors of different syntactic slots of the same verb corresponds to estimating the amount of fillers they sharemeasures of slot overlap have been used by joanis stevenson and james as features to classify verbs on the basis of their argument alternationslevin and rappaporthovav define argument alternations as the possibility for verbs to have multiple syntactic realizations of their semantic argument structurealternations involve the expression of the same semantic argument in two different syntactic slotswe expect that if a verb undergoes a particular alternation then the set of nouns that appear in the two alternating slots should overlap to a certain degreeargument alternations represent a key aspect of the complex constraints that shape the syntaxsemantics interfaceverbs differ with respect to the possible alternations they can undergo and this variation is strongly dependent on their semantic properties levin has in fact proposed a wellknown classification of verbs based on their range of syntactic alternationsrecognizing the alternations licensed by a verb is extremely important in capturing its argument structure properties and consequently in describing its semantic behaviorwe focus here on a particular class of alternations namely transitivity alternations whose verbs allow both for a transitive np v np variant and for an intransitive np v variant we use the w1lxw2 space to carry out the automatic classification of verbs that participate in different types of transitivity alternationsin the causativeinchoative alternation the object argument can also be realized as an intransitive subject in a first experiment we use the w1lxw2 space to discriminate between transitive verbs undergoing the causativeinchoative alternation and nonalternating ones the ci data set was introduced by baroni and lenci but not tested in a classification task thereit consists of 232 causativeinchoative verbs and 170 nonalternating transitive verbs from levin in a second experiment we apply the w1lxw2 space to discriminate verbs that belong to three different classes each corresponding to a different type of transitive alternationwe use the ms data set which includes 19 unergative verbs undergoing the induced action alternation 19 
unaccusative verbs that undergo the causativeinchoative alternation and 20 objectdrop verbs participating in the unexpressed object alternation see levin for details about each of these transitive alternationsthe complexity of this task is due to the fact that the verbs in the three classes have both transitive and intransitive variants but with very different semantic rolesfor instance the transitive subject of unaccusative and unergative verbs is an agent of causation whereas the subject of the intransitive variant of unaccusative verbs has a theme role and the intransitive subject of unergative verbs has instead an agent role thus their surface identity notwithstanding the semantic properties of the syntactic slots of the verbs in each class are very differentby testing the w1lxw2 space on such a task we can therefore evaluate its ability to capture nontrivial properties of the verbs thematic structurewe address these tasks by measuring the similarities between the w1lxw2 vectors of the transitive subject intransitive subject and direct object slots of a verb and using these interslot similarities to classify the verbfor instance given the definition of the ci alternation we can predict that with alternating verbs the intransitive subject slot should be similar to the direct object slot while this should not hold for nonalternating verbs for each verb v in a data set we extract the corresponding w1lxw2 slot vectors whose links are sbj intr sbj tr and obj then for each v we build a threedimensional vector with the cosines between the three slot vectorsthese second order vectors encode the profile of similarity across the slots of a verb and can be used to spot verbs that have comparable profiles we model both experiments as classification tasks using the nearest centroid method on the threedimensional vectors with leaveoneout crossvalidationwe perform binary classification of the ci data set and threeway classification of the ms datatable 13 reports the results with the baselines computed similarly to the ones in section 622 the dm performance is also compared with the results of merlo and stevenson for their classifiers tested with the leaveoneout methodology all the dm models discriminate the verb classes much more reliably than the baselinesthe accuracy of depdm the worst dm model is significantly higher than that of the best baselines alltrue in ci and majority on ms typedm is again our best modelits performance is comparable to the lower range of the merlo and stevenson classifiers the typedm results were obtained simply by measuring the verb interslot similarities in the w1lw2 spaceconversely the classifiers in merlo and stevenson rely on a much larger range of knowledgeintensive features selected in an ad hoc fashion for this task finally we can notice that in both experiments the mildly and heavily lexicalized dm models score better than their nonlexicalized counterpart although the difference between the best dm model and depdm is not significant on either data set verb alternations do not typically appear among the standard tasks on which dsms are testedmoreover they involve nontrivial properties of argument structurethe good performance of dm in these experiments is therefore particularly significant in supporting its vocation as a general model for distributional semanticsthe vectors of this space are labeled with links l and their dimensions are labeled with word pair tuples links are represented in terms of the word pairs they connectthe lxw1w2 space supports tasks where we 
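A sketch of the interslot similarity profile used for alternation classification; `slot_vector` is a hypothetical lookup returning the w1l×w2 row for a ⟨verb, link⟩ slot, and the resulting 3-dimensional profiles can then be fed to a nearest-centroid classifier like the one sketched earlier.

```python
import numpy as np
from itertools import combinations

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def slot_profile(verb, slot_vector):
    """Returns the 3-d vector of interslot cosines for the verb, in the order
    (sbj_intr/sbj_tr, sbj_intr/obj, sbj_tr/obj); link names are illustrative."""
    links = ["sbj_intr", "sbj_tr", "obj"]
    vecs = {l: slot_vector(verb, l) for l in links}
    return np.array([cosine(vecs[a], vecs[b]) for a, b in combinations(links, 2)])
```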
are directly interested in the links as an object of studyfor example characterizing prepositions or measuring the relative similarity of different kinds of verbnoun relationswe focus here instead on a potentially more common use of lxw1w2 vectors as a feature selection and labeling space for w1w2xl tasksspecifically we go back to the qualia extraction task of section 623there we started with manually identified linkshere we start with examples of nounquale pairs that instantiate a role r we project all lxw1w2 vectors in a subspace where only dimensions corresponding to one of the example pairs are nonzerowe then pick the most characteristic links in this subspace to represent the target role r and look for new pairs in the w1w2xl subspace defined by these automatically picked links instead of the manual onesalthough we stop at this point the procedure can be seen as a dm version of popular iterative bootstrapping algorithms such as espresso start with some examples of the target relation find links that are typical of these examples use the links to find new examples and so onin dm the process does not go back to a corpus to harvest new links and example pairs but it iterates between the column and row spaces of a precompiled matrix for each of the 30 noun concepts in the cimiano and wenderoth gold standard we use the nounquale pairs pertaining to the remaining 29 concepts as training examples to select a set of 20 links that we then use in the same way as the manually selected links of section 623simply picking the longest links in the lxw1w2 subspace defined by the example dimensions does not work because we harvest links that are frequent in general rather than characteristic of the qualia roles for each role r we construct instead two lxw1w2 subspaces one positive subspace with the example pairs as unique nonzero dimensions and a negative subspace with nonzero dimensions corresponding to all pairs such that w1 is one of the training nominal concepts and w2 is not a quale qr in the example pairswe then measure the length of each link in both subspacesfor example we measure the length of the obj link in a subspace characterized by example pairs and the length of obj in a subspace characterized by pairs that are probably not telic exampleswe compute the pointwise mutual information statistic on these lengths to find the links that are most typical of the positive subspace corresponding to each qualia rolepmi with respect to other association measures finds more specific links which is good for our purposeshowever it is also notoriously prone to overestimating the importance of rare items thus before selecting the top 20 links ranked by pmi we filter out those links that do not have at least 10 nonzero dimensions in the positive subspacemany parameters here should be tuned more systematically but the current results will nevertheless illustrate our methodologytable 14 reports for each quale the typedm links that were selected in each of the 30 leaveoneconceptout foldsthe links n is q n in q and q such as n are a good sketch of the formal relation which essentially subsumes various taxonomic relationsthe other formal links are less conspicuoushowever note the presence of noun coordination consistently with the common claim that coordinated terms tend to be related taxonomically constitutive is mostly a wholepart relation and the harvested links do a good job at illustrating such a relationfor the telic q by n q through n and q via n capture cases in which the quale stands in an 
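The text does not spell out the exact normalization behind the pointwise mutual information statistic over subspace lengths, so the following is only one plausible reading of the link-selection step (length in the positive subspace relative to overall length), with the minimum-frequency filter and the top-20 cut applied as described.

```python
import numpy as np

def select_role_links(link_matrix, link_labels, pos_cols, neg_cols,
                      top_n=20, min_nonzero=10):
    """link_matrix: (n_links x n_pairs) l x w1w2 matrix; pos_cols / neg_cols:
    column indices of the positive example pairs and of the negative pairs.
    Scores each link by a PMI-style statistic over its lengths in the two
    subspaces and keeps the top_n links with at least min_nonzero nonzero
    positive dimensions (one plausible reading of the procedure above)."""
    pos_len = np.linalg.norm(link_matrix[:, pos_cols], axis=1)
    neg_len = np.linalg.norm(link_matrix[:, neg_cols], axis=1)
    nonzero = (link_matrix[:, pos_cols] != 0).sum(axis=1)
    eps = 1e-12
    p_pos = pos_len / (pos_len.sum() + eps)                 # share of mass in the positive subspace
    p_all = (pos_len + neg_len) / ((pos_len + neg_len).sum() + eps)
    pmi = np.log((p_pos + eps) / (p_all + eps))
    order = np.argsort(-pmi)
    return [link_labels[i] for i in order if nonzero[i] >= min_nonzero][:top_n]
```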
actioninstrument relation to the target noun these links thus encode the subtype of telic role that pustejovsky calls indirect the two verbnoun links instead capture direct telic roles which are typically expressed by the theme of a verb the least convincing results are those for the agentive role where only q obj n and perhaps q out n are intuitively plausible canonical linksinterestingly the manual selections we carried out in section 623 also gave very poor results for the agentive role as shown by the fact that table 10 reports just one link for such a rolethis suggests that the problems with this qualia role might be due to the number and type of lexicalized links used to build the dm tensors rather than to the selection algorithm presented herecoming now to the quantitative evaluation of the harvested patterns the results in table 15 are based on w1w2l subspaces where the nonzero dimensions correspond to the links that we picked automatically with the method we just described typedm is the best model in this setting as wellits performance is even better than the one obtained with the manually picked patterns and the automated approach has more room for improvement via parameter optimizationwe did not get as deeply into lw1w2 space as we did with the other views but our preliminary results on qualia harvesting suggest at least that looking at links as links selected in all folds of the leaveoneout procedure to extract links typical of each qualia rolen is q q is n q become n n coord q n have q n use q n with q n without q q coord n q have n n in q n provide q q such as nq after n q alongside n q as n q before n q behind n q by n q like n q obj n q besides n q during n q in n q obj n n sbj intr q q through n q via n q out n q over n q since n q unlike n lxw1w2 vectors might be useful for feature selection in w1w2xl or for tasks in which we are given a set of pairs and we have to find links that can function as verbal labels for the relation between the word pairs dimensionality reduction techniques such as the svd approximate a sparse cooccurrence matrix with a denser lowerrank matrix of the same size and they have been shown to be effective in many semantic tasks probably because they provide a beneficial form of smoothing of the dimensionssee turney and pantel for references and discussionwe can apply svd to any of the tensorderived matrices we used for the tasks hereinan interesting alternative is to smooth the source tensor directly by a tensor decomposition techniquein this section we present evidence that tensor decomposition can improve performance and it is at least as good in this respect as matrixbased svdthis is the only experiment in which we operate on the tensor directly rather than on the matrices derived from it paving the way to a more active role for the underlying tensor in the dm approach to semanticsthe tucker decomposition of a tensor can be seen as a higherorder generalization of svdgiven a tensor x of dimensionality i1 x i2 x i3 its nrank right now is the rank of the vector space spanned by its moden fibers tucker decomposition approximates the tensor x having nranks r1 right now with x a tensor with nranks qn right now for all modes n unlike the case of svd there is no analytical procedure to find the best lowerrank approximation to a tensor and tucker decomposition algorithms search for the reduced rank tensor with the best fit iterativelyspecifically we use the memoryefficient met algorithm of kolda and sun as implemented in the matlab tensor toolbox10 kolda 
and bader provide details on tucker decomposition its general properties as well as applications and alternativessvd is believed to exploit patterns of higher order cooccurrence between the rows and columns of a matrix making row elements that cooccur with two synonymic columns more similar than in the original spacetucker decomposition applied to the mode3 tuple tensor could capture patterns of higher order cooccurrence for each of the modesfor example it might capture at the same time similarities between links such as use and hold and w2 elements such as gun and knifesvd applied after construction of the w1xlw2 matrix on the other hand would miss the composite nature of columns such as and another attractive feature of tucker decomposition is that it could be applied once to smooth the source tensor whereas with svd each matricization must be smoothed separatelyhowever tucker decomposition and svd are computationally intensive procedures and at least with our current computational resources we are not able to decompose even the smallest dm tensor given the continuous growth in computational power and the fact that efficient tensor decomposition is a very active area of research full tensor decomposition is nevertheless a realistic near future taskfor the current pilot study we replicated the ap concept clustering experiment described in section 613because for efficiency reasons we must work with just a portion of the original tensor we thought that the ap data set consisting of a relatively large and balanced collection of nominal concepts would offer a sensible starting point to extract the subsetspecifically we extract from our best tensor typedm the values labeled by tuples where wap is in the ap set l is one of the 100 most common links occurring in tuples with a wap and w2 is one of the 1000 most common words occurring in tuples with a wap and a l the resulting tensor aptypedm has dimensionality 402 x 100 x 1 000 with 1318214 nonzero entries the w1xlw2 matricization of aptypedm results in a 402 x 1 000 000 matrix with 66026 nonzero columns and the same number of nonzero entries and density as the tensorthe possible combinations of target lower nranks constitute a large tridimensional parameter space and we leave its systematic exploration to further workinstead we pick 300 50 and 500 as initial target nranks for the three modes and we explore their neighborhood in parameter space by changing one target nrank at a time by a relatively small value for the parameters concerning the reduced tensor fitting process we accept the default values of the tensor toolboxfor comparison purposes we also apply svd to the w1xlw2 matrix derived from aptypedmwe systematically explore the svd target lower rank parameter from 50 to 350 in increments of 50 unitsthe results are reported in table 16the rank column reports the nranks when reduction is performed on the tensor and matrix ranks in the other casesbootstrapped confidence intervals are obtained as described in section 613in general the results confirm that smoothing by rank reduction is beneficial to semantic performance although not spectacularly so with an improvement of about 4 for the best reduced model with respect to the raw aptypedm tensor as a general trend tensorbased smoothing does better than matrixbased smoothing as we said for tucker we only report results from a small region of the tridimensional parameter space whereas the svd rank parameter range is explored coarsely but exhaustivelythus although other parameter combinations 
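The experiments rely on the iterative MET Tucker algorithm of the Matlab Tensor Toolbox; purely as a self-contained illustration of reducing the n-ranks of a third-order tensor and reconstructing a smoothed version, here is a plain truncated-HOSVD sketch in numpy (a simpler, non-iterative Tucker approximation, not the procedure actually used, and run on a hypothetical toy tensor rather than aptypedm).

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization: the mode-n fibers become the columns of a matrix."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_dot(X, M, mode):
    """Mode-n product of tensor X with matrix M."""
    return np.moveaxis(np.tensordot(M, X, axes=(1, mode)), 0, mode)

def hosvd_smooth(X, ranks):
    """Approximate X with a Tucker-style tensor of reduced n-ranks via truncated
    HOSVD: per-mode SVD of the matricizations, projection onto the leading
    singular vectors, and reconstruction of the smoothed tensor."""
    factors = []
    for mode, q in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :q])                 # leading q left singular vectors
    core = X
    for mode, U in enumerate(factors):
        core = mode_dot(core, U.T, mode)         # project onto the factor bases
    X_hat = core
    for mode, U in enumerate(factors):
        X_hat = mode_dot(X_hat, U, mode)         # reconstruct the smoothed tensor
    return X_hat

# tiny toy tensor (hypothetical sizes; the AP-TypeDM tensor is 402 x 100 x 1000)
X = np.random.rand(10, 6, 8)
print(hosvd_smooth(X, (4, 3, 5)).shape)          # (10, 6, 8)
```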
might lead to dramatic changes in tucker performance the best svd performance in the table is probably close to the svd performance upper boundthe present pilot study suggests an attitude of cautious optimism towards tensor decomposition as a smoothing techniqueat least in the ap task it helps as compared to no smoothing at allthe same conclusion is reached by turney who uses essentially the same method to tackle the toefl task and obtains more than 10 improvement in accuracy with respect to the corresponding raw tensorat least as a trend tensor decomposition appears to be better than matrix decomposition but only marginally so still even if the tensor and matrixbased decompositions turned out to have comparable effects tensorbased smoothing is more attractive in the dm framework because we could perform the decomposition once and use the smoothed tensor as our stable underlying dm beyond smoothing tensor decomposition might provide some novel avenues for distributional semantics while keeping to the dm program of a single model for many tasksvan de cruys used tensor decomposition to find commonalities in latent dimensions across the fiber labels another possible use for smoothing would be to propagate link mass across parts of speechour tensors being based on pos tagging and dependency parsing have 0 values for nounlinknoun tuples such as and in a smoothed tensor by the influence of tuples such as and these tuples will get some non0 weight that hopefully will make the object relation between city and destruction emergethis is at the moment just a conjecture but it constitutes an exciting direction for further work focusing on tensor decomposition within the dm frameworka general framework for distributional semantics should satisfy the following two requirements representing corpusderived data in such a way as to capture aspects of meaning that have so far been modeled with different prima facie incompatible data structures using this common representation to address a large battery of semantic experiments achieving a performance at least comparable to that of stateofart taskspecific dsmswe can now safely claim that dm satisfies both these desiderata and thereby represents a genuine step forward in the quest for a general purpose approach to distributional semanticsdm addresses point by modeling distributional data as a structure of weighted tuples that is formalized as a labeled thirdorder tensorthis is a generalization with respect to the common approach of many corpusbased semantic models that rely on distributional information encoded into wordlinkword tuples associated with weights that are functions of their frequency of cooccurrence in the corpusexisting structured dsms still couch this information directly in binary structures namely cooccurrence matrices thereby giving rise to different semantic spaces and losing sight of the fact that such spaces share the same kind of distributional informationthe thirdorder tensor formalization of distributional data allows dm to fully exploit the potential of corpusderived tuplesthe four semantic spaces we analyzed and tested in section 6 are generated from the same underlying thirdorder tensor by the standard operation of tensor matricizationthis way we derive a set of semantic spaces that can be used for measuring attributional similarity and relational similarity moreover the distributional information encoded in the tensor and unfolded via matricization leads to further arrangements of the data useful in addressing semantic problems that 
do not fall straightforwardly into the attributional or the relational paradigm in some cases it is obvious how to reformulate a semantic problem in the new frameworkother tasks can be reframed in terms of our four semantic spaces using geometric operations such as centroid computations and projection onto a subspacethis was the case for selectional preferences pattern and examplebased relation extraction and the task of generating typical properties of conceptswe consider a further strength of the dm approach that it naturally encourages us to think as we did in these cases of ways to tackle apparently unrelated tasks with the existing resources rather than devising unrelated approaches to deal with themregarding point that is addressing a large battery of semantic experiments with good performance in nearly all test sets our best implementation of dm is at least as good as other algorithms reported in recently published papers often towards the top of the stateoftheart rankingwhere other models outperform typedm by a large margin there are typically obvious reasons for this the rivals have been trained on much larger corpora or they rely on special knowledge resources or on sophisticated machine learning algorithmsimportantly typedm is consistently at least as good those models we reimplemented to be fully comparable to our dms moreover the best dm implementation does not depend on the semantic space typedm outperforms the other two models in all four spacesthis is not surprising but it is good to have an empirical confirmation of the a priori intuitionthe current results suggest that one could for example compare alternative dms on a few attributional tasks and expect the best dm in these tasks to also be the best in relational tasks and other semantic challengesthe final experiment of section 6 briefly explored an interesting aspect of the tensorbased formalism namely the possibility of improving performance on some tasks by working directly on the tensor rather than on the matrices derived from itbesides this pilot study we did not carry out any taskspecific optimization of typedm which achieves its very good performance using exactly the same underlying parameter configuration across the different spaces and tasksparameter tuning is an important aspect in dsm development with an often dramatic impact of parameter variation we leave the exploration of parameter space in dm for future researchits importance notwithstanding however we regard this as a rather secondary aspect if compared with the good performance of a dm model in the large and multifarious set of tasks we presentedof course many issues are still openit is one thing to claim that the models that outperform typedm do so because they rely on larger corpora it is another to show that typedm trained on more data does reach the top of the current heapthe differences between typedm and the other generally worseperforming dm models remind us that the idea of a shared distributional memory per se is not enough to obtain good results and the extraction of an ideal dm from the corpus certainly demands further attentionwe need to reach a better understanding of which pieces of distributional information to extract and whether different semantic tasks require focusing on specific subsets of distributional dataanother issue we completely ignored but which will be of fundamental importance in applications is how a dmbased system can deal with outofvocabulary itemsideally we would like a seamless way to integrate new terms in the model 
incrementally based on just a few extra data points but we leave it to further research to study how this could be accomplished together with the undoubtedly many further practical and theoretical problems that will emergewe will conclude instead by discussing some general advantages that follow from the dm approach of separating corpusbased model building the multipurpose long term distributional memory and different views of the memory data to accomplish different semantic tasks without resorting to the source corpus againfirst of all we would like to make a more general point regarding parameter tuning and taskspecific optimization by going back to the analogy with wordnet as a semantic multipurpose resourceif you want to improve performance of a wordnetbased system you will probably not wait for its next release but rather improve the algorithms that work on the existing wordnet graphsimilarly in the dm approach we propose that corpusbased resources for distributional semantics should be relatively stable multipurpose largescale databases only occasionally updated still given the same underlying dm and a certain task much work can be done to exploit the dm optimally in the task with no need to go back to corpusbased resource constructionfor example performance on attributional tasks could be raised by dimension reweighting techniques such as recently proposed by zhitomirskygeffet and dagan for the problem of data sparseness in the w1w2l space we could treat the tensor as a graph and explore random walks and other graphical approaches that have been shown to scale down gracefully to capture relations in sparser data sets as in our simple example of smoothing relational pairs with attributional neighbors more complex tasks may be tackled by combining different views of dm andor resorting to different spaces within the same view as in our approach to selectional preferencesone might even foresee an algorithmic way to mix and match the spaces as most appropriate to a certain taskwe propose a similar split for the role of supervision in dsmsconstruction of the dm tensor from the corpus is most naturally framed as an unsupervised task because the model will serve many different purposeson the other hand supervision can be of great help in tuning the dm data to specific tasks a crucial challenge for dsms is whether and how corpusderived vectors can also be used in the construction of meaning for constituents larger than wordsthese are the traditional domains of formal semantics which is most interested in how the logical representation of a sentence or a discourse is built compositionally by combining the meanings of its constituentsdsms have so far focused on representing lexical meaning and compositional and logical issues have either remained out of the picture or have received still unsatisfactory accountsa general consensus exists on the need to overcome this limitation and to build new bridges between corpusbased semantics and symbolic models of meanings most problems encountered by dsms in tackling this challenge are specific instances of more general issues concerning the possibility of representing symbolic operations with distributed vectorbased data structures many avenues are currently being explored in corpusbased semantics and interesting synergies are emerging with research areas such as neural systems quantum information holographic models of memory and so ona core problem in dealing with compositionality with dsms is to account for the role of syntactic information in 
determining the way semantic representations are built from lexical itemsfor instance the semantic representation assigned to the dog bites the man must be different from the one assigned to the man bites the dog even if they contain exactly the same lexical itemsalthough it is still unclear which is the best way to compose the representation of content words in vector spaces it is nowadays widely assumed that structured representations like those adopted by dm are in the right direction towards a solution to this issue exactly because they allow distributional representations to become sensitive to syntactic structures compositionality and similar issues in dsms lie beyond the scope of this paperhowever there is nothing in dm that prevents it from interacting with any of the research directions we have mentioned hereindeed we believe that the generalized nature of dm represents a precondition for distributional semantics to be able to satisfactorily address these more advanced challengesa multipurpose distributional semantic resource like dm can allow researchers to focus on the next steps of semantic modelingthese include compositionality but also modulating word meaning in context and finding ways to embed the distributional memory in complex nlp systems or even embodied agents and robotsdmstyle triples predicating a relation between two entities are common currency in many semantic representation models and knowledgeexchange formalisms such as rdfthis might also pave the way to the integration of corpusbased information with other knowledge sourcesit is hard to see how such integration could be pursued within generalized systems such as pairclass that require keeping a full corpus around and corpusprocessing knowhow on behalf of interested researchers from outside the nlp community similarly the dm triples might help in fostering the dialogue between computational linguists and the computational neurocognitive community where it is common to adopt triplebased representations of knowledge and to use the same set of tuples to simulate various aspects of cognitionfor a recent extended example of this approach see rogers and mcclelland it would be relatively easy to use a dm model in lieu of their neural network and use it to simulate the conceptual processes they reproducedm unlike classic dsm models that go directly from the corpus data to solving specific semantic tasks introduces a clear distinction between an acquisition phase the declarative structure at the core of semantic modeling and the procedural problemsolving components this separation is in line with what is commonly assumed in cognitive science and formal linguistics and we hope it will contribute to make corpusbased modeling a core part of the ongoing study of semantic knowledge in humans and machines
J10-4006
distributional memory a general framework for corpusbased semanticsresearch into corpusbased semantics has focused on the development of ad hoc models that treat single tasks or sets of closely related tasks as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpusas an alternative to this one task one model approach the distributional memory framework extracts distributional information once and for all from the corpus in the form of a set of weighted wordlinkword tuples arranged into a thirdorder tensordifferent matrices are then generated from the tensor and their rows and columns constitute natural spaces to deal with different semantic problemsin this way the same distributional information can be shared across tasks such as modeling word similarity judgments discovering synonyms concept categorization predicting selectional preferences of verbs solving analogy problems classifying relations between word pairs harvesting qualia structures with patterns or example pairs predicting the typical properties of concepts and classifying verbs into alternation classesextensive empirical testing in all these domains shows that a distributional memory implementation performs competitively against taskspecific algorithms recently reported in the literature for the same tasks and against our implementations of several stateoftheart methodsthe distributional memory approach is thus shown to be tenable despite the constraints imposed by its multipurpose naturewe use a representation based on third order tensors and provide a general framework for distributional semantics in which it is possible to represent several aspects of meaning using a single data structure
a planbased analysis of indirect speech act we propose an account of indirect forms of speech acts to request and inform based on the hypothesis that language users can recognize actions being performed by others infer goals being sought and cooperate in their achievement this cooperative behaviour is independently motivated and may or may not be intended by speakers if the hearer believes it is intended he or she can recognize the speech act as indirect otherwise it is interpreted directly heuristics are suggested to decide among the interpretations we propose an account of indirect forms of speech acts to request and inform based on the hypothesis that language users can recognize actions being performed by others infer goals being sought and cooperate in their achievementthis cooperative behaviour is independently motivated and may or may not be intended by speakersif the hearer believes it is intended he or she can recognize the speech act as indirect otherwise it is interpreted directlyheuristics are suggested to decide among the interpretationsaustin 1962 was one of the first to stress the distinction between the action which a speaker performs by uttering a sentence and the truth conditions of propositions contained in the sentenceactions have effects on the world and may have preconditions which must obtain for them to be felicitously performedfor actions whose execution involves the use of language the preconditions may include the speaker holding certain beliefs about the world and having certain intentions or wants as to how it should changeas well as being important to the study of natural language semantics speech acts are important to the designer of conversational natural language understanding systemssuch systems should be able to recognize what actions the user is performingconversely if such a system is to acquire information or request assistance from its user it should know how and when to ask questions and make requestscohen and perrault 1979 argue for the distinction between a competence i this research was supported in part by the national research council of canada under operating grant a9285thanks to phil cohen michael mccord corot reason and john searle for their commentswe assume the usual responsibility for remaining inaccuracies misunderstandings and downright errors theory of speech acts which characterizes what utterances an ideal speaker can make in performing what speech acts and a performance theory which also accounts for how a particular utterance is chosen in given circumstances or how it is recognizedwe are only concerned here with a competence theoryin perrault allen and cohen 1978 we suggested that it is useful to consider speech acts in the context of a planning systema planning system consists of a class of parameterized procedures called operators whose execution can modify the worldeach operator is labelled with formulas stating its preconditions and effectsa plan construction algorithm is a procedure which given a description of some initial state of the world and a goal state to be achieved constructs a plan or sequence of operators to achieve itit is assumed there and in all our subsequent work that language users maintain a model of the world and a set of goals one person s beliefs may include beliefs about another person a beliefs and wants including a beliefs about s etcwe do not concern ourselves with obligations feelings etc which clearly can also be affected by speech actscp discuss criteria for judging the correctness of the 
preconditions and effects of the operators corresponding to speech acts and specifically those of the acts inform and requesthowever the conditions on inform and request given in cp are at best necessary and certainly not sufficientin particucopyright 1980 by the association for computational linguisticspermission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the journal reference and this copyright notice are included on the first pageto copy otherwise or to republish requires a fee andor specific permission lar they say nothing about the form of utterances used to perform the speech actsseveral syntactic devices can be used to indicate the speech act being performed the most obvious are explicit performative verbs such as quoti hereby request you to quot and mood but the mood of an utterance is well known to not completely specify its illocutionary force 1ab can be requests to close the door 1ce can be requests to tell the answer and 1f can be an assertionfurthermore all these utterances can also be intended literally in some contextsfor example a parent leaving a child at the train station may ask 1g expecting a yesno answer as a confirmationthe object of this paper is to extend the work in cp to account for indirect use of mood loosely called indirect speech actsthe solution proposed here is based on the following intuitively simple and independently motivated hypotheses identifying actions being performed by others and goals being soughtan essential part of helpful or cooperative behaviour is the adoption by one agent of a goal of another followed by an attempt to achieve itfor example for a store clerk to reply quothow many do you wantquot to a customer who has asked quotwhere are the steaksquot the clerk must have inferred that the customer wants steaks then he must have decided to get them himselfthis might have occurred even if the customer had intended to get the steaks him or herselfcooperative behaviour must be accounted for independently of speech acts for it often occurs without the use of language tends that the hearer recognize not only that b was performed but also that through cooperative behaviour by the hearer intended by the speaker the effects of a should be achievedthe speaker must also believe that it is likely that the hearer can recognize this intentionthe process by which one agent can infer the plans of another is central to our account of speech actsschmidt et al 1978 and genesereth 1978 present algorithms by which one agent can infer the goals of another but assuming no interaction between the twowe describe the process in terms of a set of plausible plan inference rules directly related to the rules by which plans can be constructedlet a and s be two agents and act an actionone example of a simple plan inference rule is quotif s believes that a wants to do act then it is plausible that s believes that a wants to achieve the effects of actquot from simple rules like this can be derived more complex plan inference rules such as quotif s believes that a wants s to recognize a intention to do act then it is plausible that s believes that a wants s to recognize a intention to achieve the effects of actquot notice that the complex rule is obtained by introducing quots believes a wantsquot in the antecedent and consequent of the simple rule and by interpreting quots recognizes a intentionquot as quots comes to believe that a wantsquotthroughout the paper we identify quotwantquot 
and quotintendquotwe show that rules of the second type can account for s recognition of many indirect speech acts by a ie those in which s recognizes a intention that s perform cooperative actsto distinguish the use of say the indicative mood in an assertion from its use in say an indirect request the speech act operators request and inform of cp are reformulated and two further acts srequest and sinform are addedthese surface level acts are realized literally as indicative and imperative utterancesan srequest to inform is realized as a questionthe surface level acts can be recognized immediately as parts of the higher level acts to which the simple plan construction and inference rules can applyalternatively the complex rules can be applied to the effects of the surface acts and the intended performance of one of the illocutionary acts inferred laterfor example there are two ways an agent s could be led to tell a the secret after hearing a tell him quotcan you tell me the secretquotboth start with s recognition that a asked a yesno questionin the first case s assumes that a simply wanted to know whether s could tell the secret then infers that a in fact wants to know the secret and helpfully decides to tell itin the second case s recognizes that a intends s to infer that a wants to know the secret and that a intends s to tell a the secret and thus that a has requested s to tell the secretfollowing a review of the relevant aspects of speech act theory in section 2 section 3 outlines our assumptions about beliefs goals actions plans and the plan inference processsection 4 shows how the speech act definitions and the plan inference process can be used to relate literal to indirect meanings for requests and informswe show how utterances such as 1h1 and even 1m can be used as requests to pass the salt and what the origin of the several interpretations of 1m issimilarly we show how 1n can be used to inform while 1o cannotsection 5 relates this work to the literature while section 6 suggests further problems and draws some conclusionsthe speech act recognition process described here has been implemented as a computer program and tested by having it simulate an information clerk at a railway stationthis domain is real but sufficiently circumscribed so that interchanges between clerk and patrons are relatively short and are directed towards a limited set of goalsthe program accepts as input simple english sentences parses them using an atn parser and produces as output the speech act it recognized and their associated propositional contentsit can handle all the examples discussed heredetails of the implementation can be found in allen 1979prior to austin 1962 logicians considered the meaning of a sentence to be determined only by its truth valuehowever austin noted that some sentences cannot be classified as true or false the utterance of one of these sentences constitutes the performance of an action and hence he named them performativesto quote austin quotwhen i say before the register or altar etc i do i am not reporting on a marriage i am indulging in itquotexamples like this and his inability to rigorously distinguish performative sentences from those which purportedly have truth value led austin to the view that all utterances could be described as actions or speech actshe classified speech acts into three classes the locutionary illocutionary and perlocutionary actsa locutionary act is an act of saying something it is the act of uttering sequences of words drawn from the vocabulary of a 
given language and conforming to its grammaran illocutionary act is one performed in making an utterance quotpromisequot quotwarnquot quotinformquot and quotrequestquot are names of illocutionary actsin general any verb that can complete the sentence quoti hereby you that i to quot names an illocutionary actan utterance has illocutionary force f if the speaker intends to perform the illocutionary act f by making that utteranceverbs that name types of illocutionary acts are called performative verbsfrom now on we take speech acts to mean the illocutionary actsperlocutionary acts are performed by making the utterancefor example s may scare a by warning a or convince a of something by informing a of itthe success of a perlocutionary act is typically beyond the control of the speakerfor example s cannot convince a of something against a will s can only present a with sufficient evidence so that a will decide to believe itperlocutionary acts may or may not be intentionalfor instance s may or may not intend to scare a by warning a searle 1969 suggests that illocutionary acts can be defined by providing for each act necessary and sufficient conditions for the successful performance of the actcertain syntactic and semantic devices such as mood and explicit performative verbs are used to indicate illocutionary forceone of the conditions included in searle account is that the speaker performs an illocutionary act only if he intends that the hearer recognize his intention to perform the act and thereby recognize the illocutionary forcethis is important for it links austin work american journal of computational linguistics volume 6 number 34 julydecember 1980 169 c raymond perrault and james f allen a planbased analysis of indirect speech acts on speech acts with the work of grice on meaning and is discussed in the next sectionmany philosophers have noted the relationship between communication and the recognition of intention grice presents informally his notion of a speaker meaning something as follows quots meant something by x is equivalent to intended the utterance of x to produce some effect in an audience by means of the recognition of this intention in other words in order for s to communicate m by uttering x to a s must get a to recognize that s intended to communicate m by uttering xto use and example of grice if i throw a coin out the window expecting a greedy person in my presence to run out and pick it up i am not necessarily communicating to him that i want him to leavefor me to have successfully communicated he must at least have recognized that i intended him to leavethe same arguments hold when discussing illocutionary actsfor example the only way s can request a to do act is to get a to recognize s intention to request a to do actthe relation between speech acts and the devices used to indicate them is complicated by the fact that performative verbs are seldom present and the same device can be used to perform many illocutionary actsthe interrogative mood for example can be used to request quotcan you pass the saltquot question quotdo you know the timequot inform quotdo you know that sam got marriedquot warn quotdid you see the bear behind youquot promise quotwould i miss your partyquot as many authors have pointed out an utterance conveys its indirect illocutionary force by virtue of its literal one quotit is cold herequot can function as a request to say close the window in part because it is an assertion that the temperature is lowmost of the literature on the treatment of indirect 
speech acts within the theory of grammar stems from the work of gordon and lakoff 1975 they claim that direct and indirect instances of the same speech act have different quotmeaningsquot ie different logical forms and they propose a set of quotconversational postulatesquot by which literal forms quotentailquot indirect onesthe postulates for requests correspond to conditions that must obtain for a request to be sincerefor a to sincerely request b to do act the following sincerity conditions must hold is the salt near you john asked me to ask you to pass the saltgl postulates directly relate the literal form of one speech act to the indirect form of anotherthus they do not predict why certain acts allow certain indirect formsfor example the postulates do not account for why 23cd can be requests while 23ef cannotbut 23e is infelicitous as a question since there is no context where one can acquire information by querying one own mental stateutterance 23f is a reasonable question but even if the speaker found out the answer it would not get him any closer to acquiring the salt a theory of indirect speech acts should capture these facts gl does not similarly gl postulates fail to explain the relation between indirect forms of different speech actsfor example 23g can be an assertion that p and 23h cannot for the same reasons that 23i can be a request to do a and 23j cannotthe hearer knowing that p obtains is an intended perlocutionary effect of an informing act just as the hearer doing an act a is an intended effect of a requesta speaker can indirectly inform or request by informing the hearer that the speaker desires the perlocutionary effect of that act and intending that the hearer recognize the speaker intention that the perlocutionary effect should be achievedthis paper shows that what gl achieve with their postulates can be derived from the five hypotheses given in the introductionour proposal here is a de170 american journal of computational linguistics volume 6 number 34 julydecember 1980 c raymond perrault and james f allen a planbased analysis of indirect speech acts velopment of searle 1975it requires separating the surface form conditions completely from the definitions of the illocutionary acts and introducing an intermediary level the surface actsour theory of indirection will however share with gl some problems brought up by sadock 1970 green 1975 and brown 1980these are discussed further in section 45our analysis of indirect requests and informs relies on the inference by the hearer of some of the goals of the speaker and of some of the actions which the speaker is taking to achieve those goalssection 31 outlines the form of the models of the world which language users are assumed to have in particular their beliefs about the world and their goalsin section 32 we define actions and how they affect the belief modelthe rules for plan construction and inference are considered in sections 33 and 34because of space limitations this section is very sketchymore detail motivation and problems are available in allen 1979 and allen and perrault 1980we assume that every agent s has a set of beliefs about the world which may include beliefs about other agents beliefsagents can hold false beliefsas quine 1956 pointed out belief creates a context where substitution of coreferential expressions need not preserve truthvaluewe add to a firstorder language with equality the operator b and b is to be read quota believes that pquot for any formula p the b operator is assumed to satisfy the 
following axiom schemas where p and q are schema variables ranging over propositions and a ranges over agents the rules of inference are modus ponens and if t is a theorem then ba is a theorem for every agent a ie every agent believes every valid consequence of the logical axiomsthe partial deduction system used in the implementation of allen 1979 is based on cohen 1978the foundations for a more elaborate system can be found in moore 1979the word quotknowquot is used in at least three different senses in englishone may know that a proposition p is true know whether a proposition p is true or know what the referent of a description iswe define quota knows that pquot written know as p a bathis is weaker than some definitions of quotknowquot in the philosophical literature where among other things quota knows that pquot entails that a believes p for the quotright reasonsquot ie knowledge is true and justified belief if s believes that a knows that p s is committed to believing that p is truein other words if s believes a does not know p then s must believe that p is true in addition to believing that a does not believe p is truethis problem is analogous to the widenarrow scope distinction that russell found in his account of definite descriptions one solution to this problem is to consider know as a quotmacroquot whose expansion is sensitive to negationdetails may be found in allen 1979a knows whether a proposition p is true if a knows that p or a knows that pknowing what the referent of a description is requires quantification into beliefone of its arguments is a formula with exactly one free variablea knowref the departure time of train1 if train1 has a unique departure time y and if a believes that y is train l unique departure timewe let w mean quotagent a wants p to be truequotp can be either a state or the execution of some actionin the latter case if act is the name of an action wa means quota wants b to do actquotthe logic of want is even more difficult than that of beliefit is necessary for us to accept the following american journal of computational linguistics volume 6 number 34 julydecember 1980 171 the most interesting interactions between the belief and want operators come from the models that agents have of each other abilities to act and to recognize the actions of othersthis will be further discussed in the following sectionactions model ways of changing the worldas with the operators in strips the actions can be grouped into families represented by action schemas which can be viewed as parameterized procedure definitionsan action schema consists of a name a set of parameters with constraints and a set of labelled formulas in the following classes effects conditions that become true after the execution of the procedurebody a set of partially ordered goal states that must be achieved in the course of executing the procedurein the examples given here there will never be more than one goal state in a bodypreconditions conditions necessary to the successful execution of the procedurewe distinguish for voluntary actions a want precondition the agent must want to perform the action ie he must want the other preconditions to obtain and the effects to become true through the achievement of the bodythe constraints on the parameters consist of type specifications and necessary parameter interdependencieseach action has at least one parameter namely the agent or instigator of the actionin the blocks world for example the action of putting one block on top of another could be defined as 
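(The figure giving that definition is not preserved in this text. As a stand-in, the following Python sketch is a hypothetical reconstruction of the shape of such an action schema, with a want-precondition, other preconditions, effects and a body; the predicate names, parameters and the specific puton conditions are invented for illustration and are not the paper's own formulation.)

```python
# Hypothetical reconstruction only: illustrates the structure of an action
# schema (constraints, want-precondition, preconditions, effects, body)
# with invented predicate names, not the original figure.
from dataclasses import dataclass, field

@dataclass
class ActionSchema:
    name: str
    params: list                               # first parameter is the agent
    constraints: list = field(default_factory=list)
    want_precondition: str = ""                # the agent must want to do the act
    preconditions: list = field(default_factory=list)
    effects: list = field(default_factory=list)
    body: list = field(default_factory=list)   # at most one goal state, as in the text

PUTON = ActionSchema(
    name="puton",
    params=["agent", "x", "y"],
    constraints=["block(x)", "block(y)", "x != y"],
    want_precondition="W(agent, puton(agent, x, y))",
    preconditions=["clear(x)", "clear(y)"],
    effects=["on(x, y)", "not clear(y)"],
    body=["holding(agent, x)"],
)

print(PUTON.effects)   # ['on(x, y)', 'not clear(y)']
```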
the preconditions effects and body provide information to the plan construction and inference processes so that they can reason about the applicability and effect of performing the action in a given contextfinally the body of the action specifies what steps must be achieved in the course of the execution of the actionprimitive actions have no bodies their execution is specified by a nonexaminable procedureall agents are assumed to believe that actions achieve their effects and require their preconditionswe need the following axioms for all agents a and b and for all actions act if pre is the precondition of act and eff its effect then every predicate and modal operator in these axioms and throughout the paper should be indexed by a state or timethe resulting logic would be accordingly more complexthe issue is raised again in sect6a plan to transform a world w0 into a world wn is a sequence of actions al an such that the preconditions of ai are true in wi1 and ai transforms world wi1 into wian agent can achieve a goal by constructing and then executing a plan which transforms the current state of the world into one in which the goal obtainsthis can be done by finding an operator which if executed in some world would achieve the goalif its preconditions are satisfied in the initial world the plan is completeotherwise the planning process attempts to achieve the preconditionsthis simple view of plan construction as a quotbackward chainingquot process can be refined by assuming different levels of quotdetailquot in the representation of the world and of the operatorsthis view allows plans constructed at one level of detail to be expanded to a lower level through the bodies of their constituent actsas noted earlier the agent of an action must believe that its precondition is true to believe that his executing the action will succeedfor agent a to plan that agent s should perform action act a must achieve that s should believe that the precondition of act holds and s beliefs should not be inconsistent with a ie it must be true that ba where p is the precondition of actwe assume that an agent cannot do an action without wanting to do that actionthus a precondition of every action act by an agent a is that wawe are concerned with the model that agents have of each other plan construction and inference process and consider these two processes as consisting of chains of plausible inferences operating on goals and observed actionsthe processes are specified in two parts first as schemas of rules which conjecture that certain states or actions can be added to a plan being constructedthe plausibility of the plans containing the result of the inferences is then evaluated by rating heuristicsthus the plan construction and inference rules are not to be interpreted as valid logical rules of inferencethere are two inverses to the knowif rule if a wants to know whether p is true then a may want p to be true or a may want p to be false2 throughout the rest of the paper agent a will usually denote the constructorexecutor of plans and s the recognizer of plans piw is the special case of the preconditionaction rule where the precondition is the want precondition want rule for all agents s a and c and for all actions act whose agent is c it is plausible that the plan inference rules generate formulas which the recognizing agent believes are possiblea separate mechanism is used to evaluate their plausibilityan agent s attempting to infer the plans of another agent a starts with an observed action of a and a set of 
goals or expectations which s believes a may be trying to achieves attempts to construct a plan involving the action and preferably also including some of the expectationsplan inference is a search through a space of partial plans each consisting of two partsone part is constructed using the plan inference rules from the observed action the other is constructed using the plan construction rules from an expected goal the partial plans are manipulated by a set of tasks which decide what rules are to be applied what quotmergesquot between alternatives and expectations should be attempted and when the process terminatesthe partial plans and their associated tasks are rated by a set of heuristics and the most highly rated task is executed firstthe rating of a partial plan reflects how likely it is to be part of the quotcorrectquot plan ie the plan the speaker is executingif several incompatible inferences can be made from one point in the alternative then its rating is divided among themthe heuristics described in this section are based on domain independent relations between actions their bodies preconditions and effectsthe need for more domain dependent measures is discussed lateramerican journal of computational linguistics volume 6 number 34 julydecember 1980 173 of tions are possible in rules ec1 ec3 and ei1 ei3the heuristics are described here only in terms increasing or decreasing ratings of partial plansdecrease the rating of a partial plan in which the preconditions of executing actions are currently falsedecrease the rating of a partial plan containing a pending action act by an agent a if a is not able to do act3 decrease the rating of a partial plan in which the effects of a pending act already obtain or are not wanted by the planner4 other heuristics depending on how well the utterance fits with the expectations are not immediately relevant to understanding indirect speech acts and will not be discussed hereone further heuristic is added in section 43in general several rating heuristics are applicable to an partial plantheir effects on the rating of the partial plan are cumulativea hearer s identifies the illocutionary force of an utterance by recognizing that the speaker a has certain intentions namely that s should recognize some intention p of athis can be represented by a formula of the form bswato do the recognition the simple plan construction and inference rules of sections 33 and 34 must be extended so that they can operate on these nested formulasthis can be done by assuming that every agent is aware that other agents construct and infer plans in the same way he canin fact both the simple inference and construction rules are necessary to derive the extended inference rulesthe extended rules are specified by quotmetarulesquot which show how to construct new pcpi rules from old onesthe first extended construction rule is a can achieve that s recognizes that a wants the effect of act by achieving that s recognizes that a wants act to be done assuming that s would infer that the effects of act are also desiredthe same rule applies if we replace quotwants the effect of actquot and quotwants act to be donequot by any pair of y and x as given in figure 1we assume all these sutistitu if bswa i bw is a pi rule then wa c wa is a pc rulesimilarly we can generate the corresponding pi rule if bswa i bw is a pi rule then bswa i bswa is a pi ruleei1 allows prefixing bswa to plan inference rulesplan construction rules can also be embedded if a wants s to want to do act then a should be 
able to achieve this by achieving that s wants the effect of act and by relying on s to plan actin other words finally any agent a can plan for s to recognize a intention that s plan and for s to be able to recognize this intention in afor example a can plan for s to recognize a intention that s want to close the door by planning for s to recognize a intention that s want the door closedthese rules are obtained by using ei2 as the pi rule which is quotextendedquot by ec1 and ei1our quottoolkitquot is now sufficiently full to allow us to consider some speech acts and their recognitionthe definitions of the speech acts request and inform used in this paper are slightly different from the ones in cohen and perrault 1979 in that they rely on the existence of speech act bodies to account for indirect formsplans including speech acts are now thought of as having two levels the illocutionary level and the surface levelacts at the illocutionary level model the intentions motivating an utterance independently of the syntactic forms used to indicate those intentionsacts at the surface level are realized by utterances having specific illocutionary force indicatorsthe first illocutionary level act is one by which a speaker informs a hearer that some proposition is truefor a to sincerely inform s that p is true a must believe a knows that p is true and want to inform s that p and must intend to get s to know that p is true which is done by constructing a plan that will achieve s recognition of this intention a then must depend on s to bring about the efiect s must decide to believe what a saidthis is made explicit by introducing an admittedly simplistic decide to believe act decide to believe prec b effect know thus a can inform s of p by achieving bswa followed by decide to believein many cases agents reason about inform acts to be performed where the information for the propositional content is not known at the time of plan constructionfor example a may plan for s to inform a whether p is truea cannot plan for s to perform inform since this assumes the truth of p we get around this difficulty by defining informif another view of the inform actinformif prec knowif a w effect knowif body b similarly it must be possible for a to plan for s to tell a the referent of a description without a knowing the referentthis is the role of the informref actinformref prec knowref a w effect knowref body b request is defined as request constraint hearer is agent of action the intention of a request is to get the hearer to want to do the action and this is accomplished by getting the hearer to believe that the speaker wants the hearer to do the action and then depending on the hearer to decide to do itto explicitly represent this decision process a because to want act defined along the lines of the decide to believe act above is necessarybecause to want prec b effect w as examples of the use of speech acts quottell me whether the train is herequot and quotis the train herequot intended literally are both requests by a that s informif the train is herequotwhen does the train arrivequot intended literally is a request by a that h informref of the departure time of the trainfinally we define the two surface level acts sinform produces indicative mood utterances and srequest produces imperative utterances or interrogative utterances if the requested act is an informthese acts have no preconditions and serve solely to signal the immediate intention of the speaker the starting point for all the hearer inferencingsinform 
effect b srequest effect b the effects of sinform match the body of the inform act reflecting the fact that it is a standard way of executing an informit is important however that sinform is only one way of executing an informthe same relationship holds between the srequest and request actionsgiven the speech act definitions of section 41 we say that a performed an illocutionary act ia by uttering x to s if a intends that s should recognize that this definition allows more than one illocutionary act to be performed by a single surface actin this section we show how the hearer of an utterance can recognize the speaker intention indicated by a speech act especially when these intentions are communicated indirectly prec w effect w body b 5 see cohen and perrault 1979 for a discussion of why searle preparatory conditions quotspeaker believes hearer can do the actionquot need not be part of the preconditions on requestamerican journal of computational linguistics volume 6 number 34 julydecember 1980 175 c raymond perrault and james f allen a planbased analysis of indirect speech acts all inferencing by s of a plans starts from s recognition that a intended to perform one of the surface acts and that a in fact wanted to do the actall inference chains will be shown as starting from a formula of the form bswathe object of the inferencing is to find what illocutionary level act a intended to performthe actioneffect rule applied to the starting formula yields one of the form bswa ies believes that a wants s to recognize a intention that p the inferencing process searches for plausible formulas of the form bswa where ia is an illocutionary level actexample 1 shows a direct request to pass the salt where the surface request maps directly into the intended request interpretation6 the actions relevant to the examples given here are let us also assume that s presently has the salt iehave is true and mutually believed by s and athe rating heuristics for the complex rules ei1 to ei3 are the same as for the pi rules but each heuristic may be applicable several times at different levelsfor example consider the frequently recurring inference chain it shows the line of inference from the point where s recognizes that a requested s to do act to the point where the effects of the requested action are inferred as part of a planof interest here is the evaluation of the plausibility of step two heuristics are applicablethe proposition quotwsquot is 6 to improve readability of inference chains in the examples we drop the prefix bswa from all propositionsthe formula on line follows from the one on line by the rule at the beginning of line applications of ei1 will be labelled quotrulequotei1 where quotrulequot is a pi rule embedded by ei1similarly applications of ei2 and ei3 will be labelled quotrulequotei2 and quotrulequotei3 where quotrulequot is a pc rule name evaluated with respect to what s believes a believesif bsbaws is true the request interpretation is considered unlikely by the effectbased heuristicin addition the preconditions of act are considered with respect to what s believes a believes s believesthis step will only be reasonable if s can do the action by a preconditionbased heuristicto make more explicit the distinction between inferences in bswa and inferences in bswabswa let us consider two inference chains that demonstrate two interpretations of the utterance quotdo you know the secretquotlines 13 of example 2 show the chain which leads s to believe that a asked a yesno question lines 16 of example 
3 show the interpretation as a request to s to inform a of the secretnotice that in both interpretations s may be led to believe that a wants to know the secretin the literal case s infers a goal from the literal interpretation and may tell the secret simply by being helpful in the indirect case s recognizes a intention that s inform a of the secret telling the secret is then conforming to a intentions there is in fact a third interpretation of this sentenceif a and s both know that a already knows the secret then the utterance could be intended as quotif you do not know the secret i will tell it to youquot this requires recognizing a conditional action and is beyond our present abilitiestwo sets of pi rules are applicable to formulas of the form bswabswa the simple rules pi1 to pi6 operating quotwithinquot the prefix bswa and the rules generated by ei1 and ei3 which allow the simple rules to apply within the prefix bswabswato reflect the underlying assumption in our model that intention will always be attributed if possible the inferences at the most deeply nested level should be preferredof course if the inferences at the nested level lead to unlikely plans the inferences at the quotshallowquot levels may be appliedin particular if there are multiple mutually exclusive inferences at the nested level then the quotshallowquot inferences will be preferredthis reflects the fact that the nested inferences model what the speaker intends the hearer to inferif there are many inferences possible at the nested level the speaker would not be able to ensure that the hearer would perform the correct oneexample 4 shows the interpretation of quoti want you to pass the saltquot as a requesttaking the utterance literally s infers that a wants him to know that a wants him to pass the saltthis yields proposition which leads through the next three inferences to the intention that would be recognized from a request act ie that a wants s to pass the salt notice that an application of the bodyaction rule to step yields inform for in fact the speaker may be performing both speech actsthe level of inferencing heuristic favours the indirect formthe key step in example 5 is the application of the knowpositive rule from line to line since given the context s assumes that a knows whether s has the salt the literal interpretation would not produce a reasonable goal for athis supports the nested knowpositive inference and attributes further intention to the speaker once this is done it is easy to infer that a wants s to pass him the salt hence the request interpretationquotcan you pass the saltquot and quotdo you want to pass the saltquot are treated similarly for they inquire about the preconditions on passexample 7quoti want the saltquot example 7 includes in the step from to an application through ei3 of the effectaction rulea informs s of a goal of having the salt and then depends on s planning on that goal to infer the pass actionbecause the action is the quotobviousquot way of achieving the goal s believes that a intended him to infer itsince questions are treated as requests to inform most of them are handled in a similar manner to the requests above44ah can all be understood as questions about the departure time of some trainan interesting example of an indirect inform is 45a for it is very similar to 45bc which both seem to only be requeststhe interpretation of 45a as an indirect inform follows from the fact that inference chains which would make it a request are all inhibited by the heuristicsin example 8 
the possible bodyaction inference from to request is downgraded because the embedded inference to is possiblethe interesting case is the embedded knownegative inference which is also possible from it implies that bswa or equivalently but such a goal is highly unlikelya is attempting to achieve the goal bs by having s recognize that a wants p to be trueas a result no speech act interpretation is possible from this stepfor instance the bodies of the acts inform and inform are bswa and bswa respectivelyboth of these are contradicted by part of 45dthus the knownegative possibility can be eliminatedthis allows the knowpositive inference to be recognized as intended and hence leads to the indirect interpretation as an inform45b has only a literal interpretation since both the knowpositive and knownegative rules are applicable at the nested level without a reason to favour either the literal request is preferredthe interpretations of 45c are similar to those of examples 2 and 3all the examples of indirect speech acts so far have been explained in terms of rules pi1pi6 and complex inference rules derived from themin this section we give one more example relying on somewhat more specific rulesa full investigation of how many such specific rules are necessary to account for common forms of indirect requests and informs remains to be donethis example shows how a completely nonstandard form can be intended indirectlysuppose that a tells quotjohn asked me to ask you to leavequot this has at least three possible interpretations him to leaveinterpretations c and d can hold even if s decides that a actually does want him to leavehowever in these cases he would not say that a intended to communicate the intent that he leave ie he would not say the utterance was a requestboth interpretations rely on axioms act1 and act2 which state that if some agent a believes that agent s executed some action act then a may believe that the preconditions of act obtained before and the effects of act obtained after the execution of actthey also require a new pcpi rule if a wants s to believe some proposition p then a may get s to believe some proposition q as long as a believes that s believes that q implies pwa c wa if babs bswa i bswa if bsbabsin example 9 s recognizes that a asked him to leavethe interpretation depends on s concluding that john performed his request successfully and hence that a wants to request s to leaveit is then an easy step to infer that a wants s to leave which leads to the request interpretationinterpretation a simple report of some previous action follows from by pibain example 10 s recognizes that a intended to tell him that john wants him to leavethis depends on the fact that s concludes that john wanted to perform the request that a reportedmost of the needed inferences call for the use of ei1 to embed simple inference rules twicenote that an inform act could have been inferred at each of the four previous steps for example from the body inference would produce informbut the inferences at the quotbswabswjquot level were so direct that they were continuedthe examples of the previous section show how our plan inference rules account for the indirect interpretations of the requests which gl postulates were designed for as well as several othersour approach differs from gl in that an utterance may carry both a literal and an indirect interpretation and of course in that its inference rules are language independentquotjohn asked me to ask you to leavequot however in some ways both solutions are 
too strongconsider for example the following can you reach the salt are you able to reach the salt i hereby ask you to tell me whether you are able to reach the saltalthough 5ac are all literally questions about the hearer ability only 5a normally conveys a requestsadock 1974 suggests that forms such as 5a differ from 5b in that the former is an idiom which is directly a request while 5b is primarily a yesno questionhowever as brown 1980 points out this fails to account for responses to 5a which follow from its literal formone can answer quotyesquot to 5a and then go on to pass the saltbrown proposes what she calls quotfrozen isa formsquot which directly relate surface form and indirect illocutionary force bypassing the literal forcefrozen forms differ from normal rules mapping illocutionary forces to illocutionary forces in that they point to the relevant normal rule which provides the information necessary to the generation of responses to the surface formsthe speaker of 5b or 5c may in fact want the hearer to reach the salt as does the speaker of 5a but he does not want his intention to be recognized by the hearerthus it appears that from the hearer point of view the chain of inferences at the intended level should get turned off soon after the recognition of the literal actit seems that in this case the plausibility of the inferences after step 3 should be strongly decreasedunfortunately it is not obvious that this can be done without making the rating heuristics sensitive to syntaxthe indirect interpretation can also be downgraded in the presence of stronger expectationsif a speaker entered a room full of aspiring candidates for employment and said quoti want to know how many people here can write a sortmerge programquot and then turning to each individually asked quotcan you write a sortmergequot the question would not be intended as a request to write a program and would not be recognized as such by a pi algorithm which rated highly an illocutionary act which fits well in an expectationin several of the earlier examples of questions intended as indirect requests the literal interpretation is blocked because it leads to acts whose effects were true before the utterancethe literal interpretation of 5d gets blocked because the reminding gets done as part of the understanding of the literal actthus only an indirect interpretation is possiblesadock 1970 points out that some cooccurrence rules depend on conveyed rather than literal illocutionary forcethe morpheme please can occur initially only in sentences which convey a requestthese remain problematic for brown and for uswe have given evidence in this paper for an account of indirect speech acts based on rationality imputing rationality to others surface speech act definitions relating form to quotliteralquot intentions and illocutionary acts allowing a variety of realizing forms for the same intentionsthe reader may object that we are suggesting a complex solution to what appears to be a simple problemit is important to distinguish here the general explanation of indirect speech acts from the implementation of such an algorithm in a practical natural language understanding systemwe claim that the elements necessary for a theoretically satisfying account of indirect speech acts are independently motivatedit is almost certain that a computationally efficient solution to the indirect speech act problem would shortcut many of the inference chains suggested here although we doubt that all searching can be eliminated in the case of the less 
standard forms such as 46athe implementation in brachman et al 1980 does just thathowever the more fundamental account is necessary to evaluate the correctness of the implementationsmany problems remainother syntactic forms that have significance with respect to illocutionary force determination should be consideredfor example tag questions such as "john is coming to the party tonight is not he" have not been analysed here furthermore no "why" or "how" questions have been examinedbesides the incorporation of more syntactic information another critical area that needs work concerns the control of inferencingto allow the use of specialized inferences a capability that is obviously required by the general theory much research needs to be done outlining methods of selecting and restricting such inferencesthis paper has concentrated on recognitionallen 1979 shows how the construction algorithms would have to be modified to allow the generation of surface acts including indirect formsmcdonald 1980 discusses the planning of lowlevel syntactic formaccording to the definition of inform of section 41 any utterance that causes s to infer that a has a plan to achieve know by achieving bswa is considered by s to be an informstrawson 1964 argues that one level of recognition of intention is not sufficient for the definition of a speech actschiffer 1972 gives a series of counterexamples to show that no finite number of conditions of the form bswa is sufficient eitherthe solution he proposes is that the recognition of intention must be mutually believed between the speaker and the hearercohen and levesque 1980 and allen forthcoming show how the speech act definitions given here can be extended in this directionwe have only considered acts to request and inform because many of their interesting properties can be based on belief and wantat least primitive accounts of the logics of these propositional attitudes are availableclearly there is room for much work hereextending the analysis to other speech acts such as promises will require a study of other underlying logics such as that of obligationthere also remain many problems with the formalization of actionswe believe this work shows that the concepts of preconditions effects and action bodies are fruitful in discussing plan recognitionthe operator definitions for speech acts used here are intended to facilitate the statement of the plan construction and inference ruleshowever their expressive power is insufficient to handle complex actions involving sequencing conditionals disjunctions iterations parallelism discontinuity and a fortiori requests and promises to do such actsthey are also inadequate as moore 1979 points out to express what the agent of an action knows after the success or failure of an actmoore logic of action includes sequencing conditionals and iterations and is being applied to speech acts by appelt 1980much remains to be done to extend it to parallel and discontinuous actions typical of multiple agent situationsthese difficulties notwithstanding we hope that we have helped show that the interaction of logic philosophy of language linguistics and artificial intelligence is productive and that the whole will shed light on each of the parts
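To make the plan-inference machinery described in this paper somewhat more concrete, here is a minimal Python sketch of nested belief/want formulas, one simple plan-inference rule (action-effect), and an EI1-style embedding that lifts it to the nested level used for indirect interpretation. The encoding, the toy pass_salt action and its effect, and the function names are simplifications of my own, not the authors' implementation.

```python
# A minimal sketch, assuming a toy encoding of formulas as nested tuples.
# The action name, its effect, and the rule signatures are invented.

def B(agent, p):
    """agent believes p"""
    return ("B", agent, p)

def W(agent, p):
    """agent wants p"""
    return ("W", agent, p)

def DO(agent, act):
    """agent performs act"""
    return ("DO", agent, act)

# Toy domain knowledge: the effect of the one action considered here.
EFFECTS = {"pass_salt": ("HAVE", "A", "salt")}

def action_effect(formula):
    """Simple PI rule: from B(s, W(a, DO(_, act))) it is plausible to
    infer B(s, W(a, effect-of-act))."""
    if (isinstance(formula, tuple) and formula[0] == "B"
            and formula[2][0] == "W" and formula[2][2][0] == "DO"):
        s, (_, a, (_, _doer, act)) = formula[1], formula[2]
        if act in EFFECTS:
            return B(s, W(a, EFFECTS[act]))
    return None

def embed(rule):
    """EI1-style meta-rule: lift a rule X |- Y to the nested rule
    B(s, W(a, X)) |- B(s, W(a, Y)), reading 's recognises a's
    intention that ...' as 's comes to believe that a wants ...'."""
    def embedded(formula):
        if formula[0] == "B" and formula[2][0] == "W":
            s, (_, a, inner) = formula[1], formula[2]
            out = rule(inner)
            if out is not None:
                return B(s, W(a, out))
        return None
    return embedded

# Literal level: S believes A wants the pass done, so the simple rule
# attributes to A the goal of having the salt.
direct = B("S", W("A", DO("S", "pass_salt")))
print(action_effect(direct))

# Nested level: S believes A intends S to recognise that intention, so the
# embedded rule attributes the same goal one level deeper.
nested = B("S", W("A", direct))
print(embed(action_effect)(nested))
```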
J80-3003
a planbased analysis of indirect speech actwe propose an account of indirect forms of speech acts to request and inform based on the hypothesis that language users can recognize actions being performed by others infer goals being sought and cooperate in their achievementthis cooperative behaviour is independently motivated and may or may not be intended by speakersif the hearer believes it is intended he or she can recognize the speech act as indirect otherwise it is interpreted directlyheuristics are suggested to decide among the interpretations
extraposition grammars extraposition grammars are an extension of definite clause grammars and are similarly defined in terms of logic clauses the extended formalism makes it easy to describe left extraposition of constituents an important feature of natural language syntax edinburgh eh1 1jz scotland extraposition grammars are an extension of definite clause grammars and are similarly defined in terms of logic clausesthe extended formalism makes it easy to describe left extraposition of constituents an important feature of natural language syntaxthis paper presents a grammar formalism for natural language analysis called extraposition grammars based on the subset of predicate calculus known as definite or horn clausesit is argued that certain important linguistic phenomena collectively known in transformational grammar as left extraposition can be described better in xgs than in earlier grammar formalisms based on definite clausesthe xg formalism is an extension of the definite clause grammar 6 formalism which is itself a restriction of colmerauer formalism of metamorphosis grammars 2thus xgs and mgs may be seen as two alternative extensions of the same basic formalism dcgsthe argument for xgs will start with a comparison with dcgsi should point out however that the motivation for the development of xgs came from studying large mgs for natural language 47the relationship between mgs and dcgs is analogous to that between type0 grammars and contextfree grammarsso some of the linguistic phenomena which are seen as rewriting one sequence of constituents into another might be described better in a mg than in a dcghowever it will be shown that rewritings such as the one involved in left extraposition cannot easily be described in either of the two formalismsleft extraposition has been used by grammarians to describe the form of interrogative sentences and relative clauses at least in languages such as english french spanish and portuguesethe importance of these constructions even in simplified subsets of natural language such as those used in database interfaces suggests that a grammar formalism should be able to express them in a clear and concise mannerthis is the purpose of xgsthis section summarises the concepts of definite clause grammars and of the underlying system of logic definite clauses needed for the rest of the papera fuller discussion can be found elsewhere 6a definite clause has either the form to be read as quotp is true if q1 qn are truequot or the form p to be read as quotp is truequotp is the head of the clause are goals forming the body of the clausethe symbols p qi qn stand for literalsa literal has a predicate symbol and possibly some arguments ega literal is to be interpreted as denoting a relation between its arguments egquotfatherquot denotes the relation father between x and yarguments are terms standing for partially specified objectsterms may be a compound term has a functor and some arguments which are termscompound terms are best seen as copyright 1981 by the association for computational linguisticspermission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the journal reference and this copyright notice are included on the first pageto copy otherwise or to republish requires a fee andor specific permissiona particular type of term the list has a simplified notationthe binary functor makes up nonempty lists and the atom denotes the empty listin the special list notation may be read 
as quotx is grandfather of z if x is father of y and y is a parent of zquot the clause father may be read as quotjohn is father of maryquot a set of definite clauses forms a programa program defines the relations denoted by the predicates appearing on the head of clauseswhen using a definite clause interpreter such as prolog 9 a goal statement p specifies that the relation instances that match p are requirednow any contextfree rule such as sentence noun phrase verb_phrase may be translated into a definite clause which says quotthere is a sentence between points so and s in a string if there is a noun phrase between points so and si and a verb phrase between points si and squota contextfree rule like determiner the can be translated into determiner connects which may be read as quotthere is a determiner between points so and s in a string if so is joined to s by the word thequotthe predicate connects is used to relate terms denoting points in a string to the words which join those pointsdepending on the application different definitions of connects might be usedin particular if a point in a string is represented by the list of words after that point connects has the very simple definition connects which may be read as quota string point represented by a list of words with first element word and rest s is connected by the word word to the string point represented by list squot dcgs are the natural extension of contextfree grammars obtained through the translation into definite clauses outlined abovea dcg nonterminal may have arguments of the same form as those of a predicate and a terminal may be any termfor instance the rule is made of a noun phrase with structure np and number n followed by a verb phrase with structure vp agreeing with the number nquota dcg rule is just quotsyntactic sugarquot for a definite clausethe clause for the example above is in general a dcg nonterminal with n arguments is translated into a predicate of n2 arguments the last two of which are the string points as in the translation of contextfree rules into definite clausesthe main idea of dcgs is then that grammar symbols can be general logic terms rather than just atomic symbolsthis makes dcgs a generalpurpose grammar formalism capable of describing any type0 languagethe first grammar formalism with logic terms as grammar symbols was colmerauer metamorphosis grammars 2where a dcg is a cfg with logic terms for grammar symbols a mg is a somewhat restricted type0 grammar with logic terms for grammar symbolshowever the very simple translation of dcgs into definite clauses presented above does not carry over directly to mgsroughly speaking left extraposition occurs in a natural language sentence when a subconstituent of some constituent is missing and some other constituent to the left of the incomplete one represents the missing constituent in some wayit is useful to think that an empty constituent the trace occupies the quotholequot left by the missing constituent and that the constituent to the left which represents the missing part is a marker indicating that a constituent to its right contains a trace 1one can then say that the constituent in whose place the trace stands has been extraposed to the left and in its new position is represented by the markerfor instance relative clauses are formed by a marker which in the simpler cases is just a relative pronoun followed by a sentence where some noun phrase has been replaced by a tracethis is represented in the following annotated surface structure in this example t 
stands for the trace that is the surface form of the marker and the connection between the two is indicated by the common index ithe concept of left extraposition plays an essential role directly or indirectly in many formal descriptions of relative and interrogative clausesrelated to this concept there are several quotglobal constraintsquot the quotisland constraintsquot that have been introduced to restrict the situations in which left extraposition can be appliedfor instance the ross complexnp constraint 8 implies that any relative pronoun occurring outside a given noun phrase cannot be bound to a trace occurring inside a relative clause which is a subconstituent of the noun phrasethis means that it is not possible to have a configuration like xi np rel x2 s t2 tl 1 note that here i use the concept of left extraposition in a loose sense without relating it to transformations as in transformational grammarin xgs and also in other formalisms for describing languages the notion of transformation is not used but a conceptual operation of some kind is required for instance to relate a relative pronoun to a quotholequot in the structural representation of the constituent following the pronounto describe a fragment of language where left extraposition occurs one might start with a cfg which gives a rough approximation of the fragmentthe grammar may then be refined by adding arguments to nonterminals to carry extraposed constituents across phrasesthis method is analogous to the introduction of quotderivedquot rules by gazdar 5take for example the cfg in figure 41in this grammar it is possible to use rule to expand a noun phrase into a trace even outside a relative clauseto prevent this i will add arguments to all nonterminals from which a noun phrase might be extraposedthe modified grammar now a dcg is given in figure 42a variable hole will have the value trace if an extraposed noun phrase occurs somewhere to the right nil otherwisethe parse tree of figure 43 shows the variable values when the grammar of figure 42 is used to analyse the noun phrase quotthe man that john metquotintuitively we either can see noun phrases moving to the left leaving traces behind or traces appearing from markers and moving to the rightin a phrase quotnoun phrasequot holel will have the value trace when a trace occurs somewhere to the right of the left end of the phrasein that case hole2 will be nil if the noun phrase contains the trace trace if the trace appears to the right of the right end of this noun phrasethus rule in figure 42 specifies that a noun phrase expands into a trace if a trace appears from the left and as this trace is now placed it will not be found further to the rightthe nonterminal relative has no arguments because the complexnp constraint prevents noun phrases from moving out of a relative clausehowever that constraint does not apply to prepositional phrases so prep_phrase has argumentsthe nonterminal entence has a single argument because in a relative clause the trace must occur in the sentence immediately to the right of the relative pronounit is obvious that in a more extensive grammar many nonterminals would need extraposition arguments and the increased complication would make the grammar larger and less readablecolmerauer mg formalism allows an alternative way to express left extrapositionit involves the use of rules whose lefthand side is a nonterminal followed by a string of quotdummyquot terminal symbols which do not occur in the input vocabularyan example of such a rule is rel_marker 
t rel pronounits meaning is that rel pronoun can be analysed as a rel marker provided that the terminal t is added to the front of the input remaining after the rule applicationsubsequent rule applications will have to cope explicitly with such dummy terminalsthis method has been used in several published grammars 2 4 7 but in a large grammar it has the same problems of size and clarity as the previous methodit also suffers from a theoretical problem in general the language defined by such a grammar will contain extra sentences involving the dummy terminalsfor parsing however no problem arises because the input sentences are not supposed to contain dummy terminalsthese inadequacies of mgs were the main motivation for the development of xgsto describe left extraposition we need to relate noncontiguous parts of a sentencebut neither dcgs nor mgs have means of representing such a relationship by specific grammar rulesrather the relationship can only be described implicitly by adding extra information to many unrelated rules in the grammarthat is one cannot look at a grammar and find a set of rules specific to the constructions which involve left extrapositionwith extraposition grammars i attempt to provide a formalism in which such rules can be writtenin this informal introduction to the xg formalism i will avoid the extra complications of nonterminal argumentsso in the discussion that follows we may look at xgs as an extension of cfgssometimes it is easier to look at grammar rules in the lefttoright or synthesis directioni will say then that a rule is being used to expand or rewrite a stringin other cases it is easier to look at a rule in the righttoleft or analysis directioni will say then that the rule is being used to analyse a stringlet us first look at the following xg fragment sentence noun_phrase verb_phrase noun_phrase determiner noun relative noun_phrase trace relative 1 relative rel marker sentence rel marker trace rel pronounall rules but the last are contextfreethe last rule expresses the extraposition in simple relative clausesit states that a relative pronoun is to be analysed as a marker followed by some unknown constituents followed by a tracethis is shown in figure 51as in the dcg example of the previous section the extraposed noun phrase is expanded into a tracehowever instead of the trace being rewritten into the empty string the trace is used as part of the analysis of rel markerthe difference between xg rules and dcg rules is then that the lefthand side of an xg rule may contain several symbolswhere a dcg rule is seen as expressing the expansion of a single nonterminal into a string an xg rule is seen as expanding together several noncontiguous symbols into a stringmore precisely an xg rule has the general form here each segment s is a sequence of terminals and nonterminals the first symbol in s 1 the leading symbol is restricted to be a nonterminalthe righthand side r is as in a dcg ruleleaving aside the constraints discussed in the next section the meaning of a rule like is that any sequence of symbols of the form sixis 2x 2 etc sk_ ixk_isk with arbitrary xi can be rewritten into rx ix 2xk_ 1thinking procedurally one can say that a nonterminal may be expanded by matching it to the leading symbol on the lefthand side of a rule and the rest of the lefthand side is quotput asidequot to wait for the derivation of symbols which match each of its symbols in sequencethis sequence of symbols can be interrupted by arbitrary strings paired to the occurrences of on the lefthand 
side of the rulewhen several xg rules are involved the derivation of a surface string becomes more complicated than in the single rule example of the previous section because rule applications interact in the way now to be describedto represent the intermediate stages in an xg derivation i will use bracketed strings made up of a bracketed string is balanced if the brackets in it balance in the usual waynow an xg rule etc un v can be applied to bracketed string s if s x0u1x1u2 etc xn_ unxn and each of the gaps x1 xn_1 is balancedthe substring of s between xo and xn is the span of the rule applicationthe application rewrites s into new string t replacing u1 by v followed by n1 open brackets and replacing each of u2 un by a close bracket in short s is replaced by xovx the relation between the original string s and the derived string t is abbreviated as s t in the new string t the substring between xo and xn is the result of the applicationin particular the application of a rule with a single segment in its lefthand side is no different from what it would be in a type0 grammar taking again the rule rel marker trace rel pronoun its application to rel marker john likes trace produces rel _pronoun after this rule application it is not possible to apply any rule with a segment matching inside a bracketed portion and another segment matching outside itthe use of the above rule has divided the string into two isolated portions each of which must be independently expandedgiven an xg with initial symbol s a sentence t is in the language defined by the xg if there is a sequence of rule applications that transforms s into a string from which t can be obtained by deleting all bracketsi shall refer to the restrictions on xg rule application which i have just described as the bracketing constraintthe effect of the bracketing constraint is independent of the order of application of rules because if two rules are used in a derivation the brackets introduced by each of them must be compatible in the way described aboveas brackets are added and never deleted it is clear that the order of application is irrelevantfor similar reasons any two applications in a derivation where the rules involved have more than one segment in their lefthand sides one and only one of the two following situations arises if one follows to the letter the definitions in this section then checking in a parsing procedure whether an xg rule may be applied would require a scan of the whole intermediate stringhowever we will see in section 10 that this check may be done quoton the flyquot as brackets are introduced with a cost independent of the length of the current intermediate string in the derivationin the same way as parse trees are used to visualise contextfree derivations i use derivation graphs to represent xg derivationsin a derivation graph as in a parse tree each node corresponds to a rule application or to a terminal symbol in the derived sentence and the edges leaving a node correspond to the symbols in the righthand side of that node rulein a derivation graph however a node can have more than one incoming edge in fact one such edge for each of the symbols on the lefthand side of the rule corresponding to that nodeof these edges only the one corresponding to the leading symbol is used to define the lefttoright order of the symbols in the sentence whose derivation is represented by the graphif one deletes from a derivation graph all except the first of the incoming edges to every node the result is a tree analogous to a parse 
treefor example figure 71 shows the derivation graph for the string quotaabbccquot according to the xg this xg defines the language formed by the set of all strings anbnen for n0the example shows incidentally that xgs even without arguments are strictly more powerful than cfgs since the language described is not contextfreethe topology of derivation graphs reflects clearly the bracketing constraintassume the following two conventions for the drawing of a derivation graph which are followed in all the graphs shown here then the derivation graph obeys the bracketing constraint if and only if it can be drawn following the conventions without any edges crossing1 the example of figure 72 shows this clearlyin this figure the closed path formed by edges 1 2 3 and 4 has the same effect as a matching pair of brackets in a bracketed stringit is also worth noting that nested rule applications appear in a derivation graph as a configuration like the one depicted in figure 738xgs and left extraposition we saw in figure 42 a dcg for relative clausesthe xg of figure 81 describes essentially the same language fragment showing how easy it is to describe left extraposition in an xgin that grammar the sentence the mouse that the cat chased squeaks has the derivation graph shown in figure 82the left extraposition implicit in the structure of the sentence is represented in the derivation graph by the application of the rule for rel marker at the node marked in the figureone can say that the left extraposition has been quotreversedquot in the derivation by the use of this rule which may be looked at as repositioning trace to the right thus quotreversingquot the extraposition of the original sentencein the rest of this paper i often refer to a constituent being repositioned into a bracketed string to mean that a rule having that constituent as a nonleading symbol in the lefthand side has been applied and the symbol matches some symbol in the string for example in figure 82 the trace t is repositioned into the subgraph with root in the example of figure 82 there is only one application of a nondcg rule at the place marked however we have seen that when a derivation contains several applications of such rules the applications must obey the bracketing constraintthe use of the constraint in a grammar is better explained with an examplefrom the sentences the mouse squeaksthe cat likes fishthe cat chased the mouse the grammar of figure 81 can derive the following string which violates the complexnp constraint the mouse that the cat that chased likes fish squeaksthe derivation of this ungrammatical string can be better understood if we compare it with a sentence outside the fragment the mouse that the cat which chased it likes fish squeaks where the pronoun it takes the place of the incorrect tracethe derivation graph for that unenglish string is shown in figure 91in the graph and mark two nested applications of the rule for rel markerthe string is unenglish because the higher relative in the graph binds a trace occurring inside a sentence which is part of the subordinated noun phrase now using the bracketing constraint one can neatly express the complexnp constraintit is only necessary to change the second rule for relative in figure 81 to relative open rel marker sentence close and add the rule with this modified grammar it is no longer possible to violate the complexnp constraint because no constituent can be repositioned from outside into the gap created by the application of rule to the result of applying the rule 
for relatives the nonterminals open and close bracket a subderivation open x close preventing any constituent from being repositioned from outside that subderivation into itfigure 92 shows the use of rule in the derivation of the sentence the mouse that the cat that likes fish chased squeaksthis is based on the same three simple sentences as the ungrammatical string of figure 91 which the reader can now try to derive in the modified grammar to see how the bracketing constraint prevents the derivationin the previous sections i avoided the complication of nonterminal argumentsalthough it would be possible to describe fully the operation of xgs in terms of derivations on bracketed strings it is much simpler to complete the explanation of xgs using the translation of xg rules into definite clausesin fact a rigorous definition of xgs independently of definite clauses would require a formal apparatus very similar to the one needed to formalise definite clause programs in the first place and so it would fall outside the scope of the present paperthe interested reader will find a full discussion of those issues in two articles by colmerauer 23like a dcg a general xg is no more than a convenient notation for a set of definite clausesan xg nonterminal of arity n corresponds to an n4 place predicate of the extra four arguments two are used to represent string positions as in dcgs and the other two are used to represent positions in an extraposition list which carries symbols to be repositionedeach element of the extraposition list represents a symbol being repositioned as a 4tuple x where context is either gap if the symbol was preceded by in the rule where it originated or nogap if the symbol was preceded by type may be terminal or nonterminal with the obvious meaning symbol is the symbol proper xist is the remainder of the extraposition list an xg rule is translated into a clause for the predicate corresponding to the leading symbol of the rulein the case where the xg rule has just a single symbol on the lefthand side the translation is very similar to that of dcg rulesfor example the rule a terminal t in the righthand side of a rule translates into a call to the predicate terminal defined below whose role is analogous to that of connects in dcgsfor example the rule the translation of a rule with more than one symbol in the lefthand side is a bit more complicatedinformally each symbol after the first is made into a 4tuple as described above and fronted to the extraposition listthus for example the rule rel marker trace rel pronounfurthermore for each distinct nonleading nonterminal nt in the lefthand side of a rule of the xg the translation includes the clause where virtual defined later can be read as quotc is the constituent between xo and x in the extraposition listquot and the variables vi transfer the arguments of the symbol in the extraposition list to the predicate which translates that symbolfor example the rule marker the ofwhom trace whose which can be used in a more complex grammar of relative clauses to transform quotwhose xquot into quotthe x of whomquot corresponds to the clauses finally the two auxiliary predicates virtual and terminal are defined as followsgap gap where connects is as for dcgsthese definitions need some commentthe first clause for terminal says that provided the current extraposition list allows a gap to appear in the derivation terminal symbol t may be taken from the position so in the source string where t connects so to some new position s the second clause for 
terminal says that if the next symbol in the current extraposition list is a terminal t then this symbol can be taken as if it occurred at s in the source stringthe clause for virtual allows a nonterminal to be quotread off fromquot the extraposition list relative open x rel_marker x the nodes of the analysis fragment for the relative clause quotthat likes fishquot are represented by the corresponding goals indented in proportion to their distance from the root of the graphthe following conventions are used to simplify the figure the definite clause program corresponding to the grammar for this example is listed in appendix iithe example shows clearly how the bracketing constraint workssymbols are placed in the extraposition list by rules with more than one symbol in the lefthand side and removed by calls to virtual on a firstinlastout basis that is the extraposition list is a stackbut this property of the extraposition list is exactly what is needed to balance quoton the flyquot the auxiliary brackets in the intermediate steps of a derivationbeing no more than a logic program an xg can be used for analysis and for synthesis in the same way as a dcgfor instance to determine whether a string s with initial point initial and final point final is in the language defined by the xg of figure 81 one tries to prove the goal statement as for dcgs the string s can be represented in several waysif it is represented as a list the above goal would be written sentencethe last two arguments of the goal are 1 to mean that the overall extraposition list goes from to ie it is the empty listthus no constituent can be repositioned into or out of the top level entencein this paper i have proposed an extension of dcgsthe motivation for this extension was to provide a simple formal device to describe the structure of such important natural language constructions as relative clauses and interrogative sentencesin transformational grammar these constructions have usually been analysed in terms of left extraposition together with global constraints such as the complexnp constraint which restrict the range of the extrapositionglobal constraints are not explicit in the grammar rules but are given externally to be enforced across rule applicationsthese external global constraints because theoretical difficulties because the formal properties of the resulting systems are far from evident and practical difficulties because they lead to obscure grammars and prevent the use of any reasonable parsing algorithmdcgs although they provide the basic machinery for a clear description of languages and their structures lack a mechanism to describe simply left extraposition and the associated restrictionsmgs can express the rewrite of several symbols in a single rule but the symbols must be contiguous as in a type0 grammar rulethis is still not enough to describe left extraposition without complicating the rest of the grammarxgs are an answer to those limitationsan xg has the same fundamental property as a dcg that it is no more than a convenient notation for the clauses of an ordinary logic programxgs and their translation into definite clauses have been designed to meet three requirements to be a principled extension of dcgs which can be interpreted as a grammar formalism independently of its translation into definite clauses to provide for simple description of left extraposition and related restrictions to be comparable in efficiency with dcgs when executed by prologit turns out that these requirements are not contradictory 
and that the resulting design is extremely simplethe restrictions on extraposition are naturally expressed in terms of scope and scope is expressed in the formalism by quotbracketing outquot subderivations corresponding to balanced stringsthe notion of bracketed string derivation is introduced in order to describe extraposition and bracketing independently of the translation of xgs into logic programssome questions about xgs have not been tackled in this paperfirst from a theoretical point of view it would be necessary to complete the independent characterisation of xgs in terms of bracketed strings and show rigorously that the translation of xgs into logic programs correctly renders this independent characterisation of the semantics of xgsas pointed out before this formalisation does not offer any substantial problemsnext it is not clear whether xgs are as general as they could befor instance it might be possible to extend them to handle right extraposition of constituents which although less common than left extraposition can be used to describe quite frequent english constructions such as the gap between head noun and relative clause in what files are there that were created todayit may however be possible to describe such situations in terms of left extraposition of some other constituent finally i have been looking at what transformations should be applied to an xg developed as a clear description of a language so that the resulting grammar could be used more efficiently in parsingin particular i have been trying to generalise results on deterministic parsing of contextfree languages into appropriate principles of transformationdavid warren and michael mccord read drafts of this paper and their comments led to many improvements both in content and in formthe comments of the referees were also very usefula british council fellowship partly supported my work in this subjectthe computing facilities i used to experiment with xgs and to prepare this paper were made available by british science research council grants
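As a concrete illustration of the hole-argument method described for the DCG of figure 42, the following minimal Python sketch threads a single gap value ('trace' or 'nil') through a recursive-descent recogniser for the fragment around "the man that john met". The lexicon, the token-level grammar and all function names are assumptions made for illustration only; the paper's own formulation is the Prolog DCG, not this code.

DET = {'the'}
NOUN = {'man', 'cat', 'mouse', 'fish'}
PNOUN = {'john'}
VERB_T = {'met', 'chased', 'likes'}   # transitive verbs
VERB_I = {'squeaks', 'sleeps'}        # intransitive verbs

def parse_np(tokens, i, gap):
    # yield (next position, remaining gap) for every np starting at position i
    if gap == 'trace':
        # np --> trace: consume the pending gap instead of any input
        yield i, 'nil'
    if i < len(tokens) and tokens[i] in PNOUN:
        yield i + 1, gap
    if i + 1 < len(tokens) and tokens[i] in DET and tokens[i + 1] in NOUN:
        # np --> det noun relative; relative carries no gap argument,
        # which is how the complex-np constraint is enforced in this method
        for j in parse_relative(tokens, i + 2):
            yield j, gap

def parse_relative(tokens, i):
    yield i                                   # relative --> []
    if i < len(tokens) and tokens[i] == 'that':
        # relative --> 'that' sentence, and the embedded sentence must
        # contain the trace bound by the relative pronoun
        for j, gap in parse_sentence(tokens, i + 1, 'trace'):
            if gap == 'nil':
                yield j

def parse_vp(tokens, i, gap):
    if i < len(tokens) and tokens[i] in VERB_I:
        yield i + 1, gap
    if i < len(tokens) and tokens[i] in VERB_T:
        yield from parse_np(tokens, i + 1, gap)

def parse_sentence(tokens, i, gap):
    for j, g in parse_np(tokens, i, gap):
        for k, g2 in parse_vp(tokens, j, g):
            yield k, g2

def recognise_np(words):
    tokens = words.split()
    return any(j == len(tokens) and gap == 'nil'
               for j, gap in parse_np(tokens, 0, 'nil'))

print(recognise_np('the man that john met'))          # True
print(recognise_np('the man that john met the cat'))  # False: no gap is left for the trace

The single gap variable plays the role of the hole arguments added to the nonterminals of figure 42; as the text notes, in a larger grammar this argument would have to be threaded through many unrelated rules, which is the motivation for XGs.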
J81-4003
extraposition grammarsextraposition grammars are an extension of definite clause grammars and are similarly defined in terms of logic clausesthe extended formalism makes it easy to describe left extraposition of constituents an important feature of natural language syntaxwhereas head grammars provide for an account of verb fronting and crossserial dependencies the extraposition grammars introduced here focus on the displacement of noun phrases in english
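For contrast with the gap-threading sketch above, the XG mechanism itself can be rendered in the same style: a multi-segment rule such as rel_marker ... trace --> rel_pronoun pushes a pending trace onto an extraposition list that behaves as a stack, an NP may be recognised "virtually" by reading that trace off the list, and the open/close brackets of the complex-NP constraint are rendered here by sealing the embedded clause inside its own empty list. This is a loose Python reconstruction under those assumptions, not the paper's translation into definite clauses.

DET = {'the'}
NOUN = {'man', 'cat', 'mouse', 'fish'}
PNOUN = {'john'}
VERB_T = {'met', 'chased', 'likes'}
VERB_I = {'squeaks', 'sleeps'}
TRACE = ('nonterminal', 'trace')

def rel_marker(tokens, i, x):
    # xg rule  rel_marker ... trace --> rel_pronoun : the relative pronoun is
    # consumed and a pending trace is pushed onto the extraposition list,
    # to be picked up later by some np to its right
    if i < len(tokens) and tokens[i] in {'that', 'whom', 'which'}:
        yield i + 1, [TRACE] + x

def np(tokens, i, x):
    if x and x[0] == TRACE:
        # the 'virtual' case: the np is read off the extraposition list
        yield i, x[1:]
    if i < len(tokens) and tokens[i] in PNOUN:
        yield i + 1, x
    if i < len(tokens) and tokens[i] in NOUN:
        yield i + 1, x                        # bare-noun np, e.g. 'fish'
    if i + 1 < len(tokens) and tokens[i] in DET and tokens[i + 1] in NOUN:
        yield from relative(tokens, i + 2, x)

def relative(tokens, i, x):
    yield i, x                                # relative --> []
    # relative --> open rel_marker sentence close : the brackets seal the
    # subderivation, so the embedded clause starts with a fresh extraposition
    # list and must consume its own trace; nothing can be repositioned across
    # the brackets, which is the complex-np constraint
    for j, x1 in rel_marker(tokens, i, []):
        for k, x2 in sentence(tokens, j, x1):
            if not x2:
                yield k, x

def vp(tokens, i, x):
    if i < len(tokens) and tokens[i] in VERB_I:
        yield i + 1, x
    if i < len(tokens) and tokens[i] in VERB_T:
        yield from np(tokens, i + 1, x)

def sentence(tokens, i, x):
    for j, x1 in np(tokens, i, x):
        yield from vp(tokens, j, x1)

def recognise(words):
    tokens = words.split()
    # the top-level sentence starts and must end with an empty extraposition list
    return any(j == len(tokens) and not x for j, x in sentence(tokens, 0, []))

print(recognise('the mouse that the cat chased squeaks'))                  # True
print(recognise('the mouse that the cat that chased likes fish squeaks'))  # False: blocked by the complex-np constraint (the sealed inner list)

Pushing and popping on the list mirrors the first-in-last-out behaviour of the extraposition list noted in the paper, which is what lets the bracketing constraint be checked on the fly rather than by scanning the whole intermediate string.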
coping with syntactic ambiguity or how to put the block in the box on the table we construct a table so that the entry in the tells the parser how to parse i occurrences of 9 an example suppose for example that we were given the following grammar s np vp adjs s v np adjs adjs vp 0 v np adjs pp p np np ni np pp adjs adj adjs i into vp v np adjs v so that the parser can also find vps by just counting coccurrences of terminal symbols now we simplify so that can also be parsed by just counting occurrences of terminal symbols translate into the equation s np vp adjs v np adjs adjs and then expand vp using s np adjs adjs v np adjs adjs and factor s v np that can be simplified considerably because np n e e n e and e adj e adj n e cat 14 the entire example grammar has now been compiled into a form that is easier for parsing this formula says that sentences are all of the form s v n adj which could be recognized by the following finite state machine c journal of computational linguistics volume 8 number 34 julydecember 1982 kenneth church and ramesh patil coping with syntactic ambiguity furthermore the number of parse trees for a given input sentence can be found by multiplying three numbers the catalan of the number of p n before the verb the catalan of one more than the number of p n after the verb and the ramp of the number of adj for example the sentence the man on the hill saw the boy with a telescope yesterday in the morning cat 3 6 parses that is there is one way to parse quotthe man on the hillquot two ways to parse quotsaw the boy with a telescopequot or is attached to quotboyquot as in and three ways to parse the adjuncts or they could both attach to the vp or they could split the man on the hill saw the boy with a telescope yesterday in the morning the man on the hill saw the boy with a telescope yesterday in the morning the man on the hill saw the boy with a telescope yesterday in the morning the man on the hill saw the boy with a telescope yesterday in the morning the man on the hill saw the boy with a telescope yesterday in the morning the man on the hill saw the boy with a telescope yesterday in the morning all and only these possibilities are permitted by the grammar 10 conclusion we began our discussion with the observation that certain grammars are quotevery way ambiguousquot and suggested that this observation could lead to improved parsing performance catalan grammars were then introduced to remedy the situation so that the processor can delay attachment decisions until it discovers some more useful constraints until such time the processor can do little more than note that the input sentence is quotevery way ambiguousquot we suggested that a table lookup scheme might be an effective method to implement such a processor we then introduced rules for combining primitive grammars such as catalan grammars into composite grammars this linear systems view quotbundles upquot all the parse trees into a single concise description capable of telling us everything we might want to know about the parses this abstract view of ambiguity enables us to ask questions in the most convenient order and to delay asking until it is clear that the payoff will exceed the cost this abstraction was strongly influenced by the notion of binding we have presented combination rules in three different representation systems power series atns and contextfree grammars each of which contributed its own insights power series are convenient for defining the algebraic operations atns are most suited for discussing 
implementation issues and contextfree grammars enable the shortest derivations perhaps the following quotation best summarizes our motivation for alternating among these three representation systems thing or idea seems meaningful only when we have different ways to represent it different perspectives and different associations then you can turn it around in your mind so to speak however it seems at the moment you can see it another way you never come to a full stop in each of these representation schemes we have introduced five primitive grammars catalan unit step 1 and 0 and terminals and four composition rules addition subtraction multiplication and division we have seen that it is often possible to employ these analytic tools in order to reorganize the grammar into a form more suitable for processing efficiently we have identified certain where the ambiguity is combinatoric and have sketched a few modifications to the grammar that enable processing to proceed in a more efficient manner in particular we have observed it to be important for the grammar to avoid referencing quantities that are not easily determined such as the dividing point between a noun phrase and a prepositional phrase as in put the block in the box on the table in the kitchen we have seen that the desired reorganization can be achieved by taking advantage of the fact that the autoconvolution of a catalan series produces another caseries this reduced processing time from to almost linear time similar analyses have been discussed for a number of lexically and structurally ambiguous constructions culminating with the example in section 9 where we transformed a grammar into a form that could be parsed by a single lefttoright pass over the terminal elements currently these grammar reformulations have to be performed by hand it ought to be possible to automate this process so that the reformulations could be performed by a grammar compiler we leave this project open for future research 11 acknowledgments we would like to thank jon allen sarah ferguson lowell hawkinson kris halvorsen bill long mitch marcus rohit parikh and peter szolovits for their very useful comments on earlier drafts we would journal of computational linguistics volume 8 number 34 julydecember 1982 especially like to thank bill martin for initiating the project sentences are far more ambiguous than one might have thoughtthere may be hundreds perhaps thousands of syntactic parse trees for certain very natural sentences of englishthis fact has been a major problem confronting natural language processing especially when a large percentage of the syntactic parse trees are enumerated during semanticpragmatic processingin this paper we propose some methods for dealing with syntactic ambiguity in ways that exploit certain regularities among alternative parse treesthese regularities will be expressed as linear combinations of atn networks and also as sums and products of formal power serieswe believe that such encoding of ambiguity will enhance processing whether syntactic and semantic constraints are processed separately in sequence or interleaved togethermost parsers find the set of parse trees by starting with the empty set and adding to it each time they find a new possibilitywe make the observation that in certain situations it would be much more efficient to work in the other direction starting from the universal set and ruling trees out when the parser decides that they cannot be parsesrulingout is easier when the set of parse trees is closer to the 
universal set and rulingin is easier when the set of parse trees is closer to the empty setrulingout is particularly suited for quotevery way ambiguousquot constructions such as prepositional phrases that have just as many parse trees as there are binary trees over the terminal elementssince every tree is a parse the parser does not have to rule any of them outin some sense this is a formalization of an idea that has been in the literature for some timethat is it has been noticed for a long time that these sorts of very ambiguous constructions are very difficult for most parsing algorithms but not for peoplethis observation has led some researchers to hypothesize additional parsing mechanisms such as pseudoattachment 2 and permanent predictable ambiguity so that the parser could quotattach all waysquot in a single stephowever these mechanisms have always lacked a precise interpretation we will present a much more formal way of coping with quotevery way ambiguousquot grammars defined in terms of catalan numbers sentences are far more ambiguous than one might have thoughtour experience with the eqsp parser indicates that there may be hundreds perhaps thousands of syntactic parse trees for certain very natural sentences of englishfor example consider the following sentence with two prepositional phrases 2 the idea of pseudoattachment was first proposed by marcus though marcus does not accept the formulation in church 1980copyright 1982 by the association for computational linguisticspermission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the journal reference and this copyright notice are included on the first pageto copy otherwise or to republish requires a fee andor specific permissionthese syntactic ambiguities grow quotcombinatoriallyquot with the number of prepositional phrasesfor example when a third pp is added to the sentence above there are five interpretations when a fourth pp is added there are fourteen trees and so onthis sort of combinatoric ambiguity has been a major problem confronting natural language processingin this paper we propose some methods for dealing with syntactic ambiguity in ways that take advantage of regularities among the alternative parse treesin particular we observe that enumerating the parse trees as above fails to capture the important generalization that prepositional phrases are quotevery way ambiguousquot or more precisely the set of parse trees over i pps is the same as the set of binary trees that can be constructed over i terminal elementsnotice for example that there are two possible binary trees over three elements corresponding to and respectively and that there are five binary trees over four elements corresponding to respectivelypps adjuncts conjuncts nounnoun modification stack relative clauses and other quotevery way ambiguousquot constructions will be treated as primitive objectsthey can be combined in various ways to produce composite constructions such as lexical ambiguity which may also be very ambiguous but not necessarily quotevery way ambiguousquot lexical ambiguity for example will be analyzed as the sum of its senses or in flow graph terminology as a parallel connection of its sensesstructural ambiguity on the other hand will be analyzed as the product of its components or in flow graph terminology as a series connectionthis section will make the linear systems analogy more precise by relating contextfree grammars to formal power series formal 
power series are a wellknown device in the formal language literature for developing the algebraic properties of contextfree grammarswe introduce them here to establish a formal basis for our upcoming discussion of processing issuesthe power series for grammar is each term consists of a sentence generated by the grammar and an ambiguity coefficient3 which counts how many ways the sentence can be generatedfor example the sentence quotjohnquot has one parse tree and so onthe reader can verify for himself that quotjohn and john and john and john and johnquot has fourteen treesnote that the power series encapsulates the ambiguity response of the system to all possible input sentencesin this way the power series is analogous to the impulse response in electrical engineering which encapsulates the response of the system to all possible input frequenciesall of these transformed representation systems provide a complete description of the system with no loss of information4 transforms are often very useful because they provide a different point of viewcertain observations are more easily seen in the transform space than in the original space and vice versathis paper will discuss several ways to generate the power seriesinitially let us consider successive approximationof all the techniques to be presented here successive approximations most closely resembles the approach taken by most current chart parsers including eqsp the alternative approaches take advantage of certain regularities in the power series in order to produce the same results more efficientlysuccessive approximation works as followsfirst we translate grammar into the equation where quotquot connects two ways of generating an np and quotquot concatenates two parts of an npin some sense we want to quotsolvequot this equation for npthis can be accomplished by refining successive approximationsan initial approximation np0 is formed by taking np to be the empty language then we form the next approximation by substituting the previous approximation into equation and simplifying according to the usual rules of algebra 4 this needs a qualificationit is true that the power series provides a complete description of the ambiguity response to any input sentencehowever the power series representation may be losing some information that would be useful for parsingin particular there might be some cases where it is impossible to recover the parse trees exactly as we will see though this may not be too serious a problem for many practical applicationsthat is it is often possible to recover most of the structure which may be adequate for many applications5 the careful reader may correctly object to this assumptionwe include it here for expository convenience as it greatly simplifies the derivations though it should be noted that many of the results could be derived without the assumptionfurthermore this assumption is valid for counting ambiguitythat is ia bi ici iai i8 ci where a b and c are sets of trees and eventually we have np expressed as an infinitely long polynominal abovethis expression can be simplified by introducing a notation for exponentiationlet x be an abbreviation for multiplying x x x i timesnote that parentheses are interpreted differently in algebraic equations than in contextfree rulesin contextfree rules parentheses denote optionality where in equations they denote precedence relations among algebraic operationsambiguity coefficients take on an important practical significance when we can model them directly without resorting 
to successive approximation as abovethis can result in substantial time and space savings in certain special cases where there are much more efficient ways to compute the coefficients than successive approximation equation is such a special case the coefficients follow a wellknown combinatoric series called the catalan numbers 6 this section will describe catalan numbers and their relation to parsingthe first few catalan numbers are 1 1 2 5 14 42 132 469 1430 4862they are generated by the closed form expression7 this formula can be explained in terms of parenthesized expressions which are equivalent to treescat is the number of ways to parenthesize a formula of length n there are two conditions on parenthesization there must be the same number of open and close parentheses and they must be properly nested so that an open parenthesis precedes its matching close parenthesisthe first term counts the number of 6 this fact was first pointed out to us by v prattwe suspect that it is a generally wellknown result in the formal language community though its origin is unclear where a is equal to the product of all integers between 1 and a binomial coefficients are very common in combinatorics where they are interpreted as the number of ways to pick b objects out of a set of a objectsamerican journal of computational linguistics volume 8 number 34 julydecember 1982 141 kenneth church and ramesh path coping with syntactic ambiguity sequences of 2n parentheses such that there are the same number of opens and closesthe second term subtracts cases violating condition this explanation is elaborated in knuth 1975 p 531it is very useful to know that the ambiguity coefficients are catalan numbers because this observation enables us to replace equation with where cat i denotes the ith catalan numberthe ith catalan number is the number of binary trees that can be constructed over i phrasesthis theoretical model correctly predicts our practical experience with eqspeqsp found exactly the catalan number of parse trees for each sentence in the following sequence14 it was the number of products of products of products of productsthese predictions continue to hold with as many as nine prepositional phrases we could improve eqsp performance on pps if we could find a more efficient way to compute catalan numbers than chart parsing the method currently employed by eqsplet us propose two alternatives table lookup and evaluating expression directlyboth are very efficient over practical ranges of n say no more than 20 phrases or so8 in both cases the ambiguity of a sentence in grammar can be determined by counting the number of occurrences of quotand johnquot and then retrieving the catalan of that numberthese approaches both take linear time 9 whereas chart parsing requires cubic time to parse sentences in these grammars a significant improvementso far we have shown how to compute in linear time the number of ambiguous interpretations of a sentence in an quotevery way ambiguousquot grammarhowever we are really interested in finding parse trees not just the number of ambiguous interpretationswe could extend the table lookup algorithm to find trees rather than ambiguity coefficients by modifying the table to store trees instead of numbersfor parsing purposes cati can be thought of as a pointer to the ith entry of the tableso for a sentence in grammar for example the machine could count the number of occurrences of quotand johnquot and then retrieve the table entry for that number index trees john and john and john the 
table would be more general if it did not specify the lexical items at the leaveslet us replace the table above with index trees and assume the machine can bind the x to the appropriate lexical itemsthere is a real problem with this table lookup machinethe parse trees may not be exactly correct because the power series computation assumed that multiplication was associative which is an appropriate assumption for computing ambiguity but inappropriate for constructing treesfor example we observed that prepositional phrases and conjunction are both quotevery way ambiguousquot grammars because their ambiguity coefficients are catalan numbershowever it is not the case that they generate exactly the same parse treesnevertheless we present the table lookup pseudoparser here because it seems to be a speculative new approach with considerable promiseit is often more efficient than a real parser and the trees that it finds may be just as useful as the correct one for many practical purposesfor example many speech recognition projects employ a parser to filter out syntactically inappropriate hypotheseshowever a full parser is not really necessary for this task a recognizer such as this table lookup pseudoparser may be perfectly adequate for this taskfurthermore it is often possible to recover the correct trees from the output of the pseudoparserin particular the difference between prepositional phrases and conjunction could be accounted for by modifying the interpretation of the pp category label so that the trees would be interpreted correctly even though they are not exactly correct8 the table lookup scheme ought to have a way to handle the theoretical possibility that there are an unlimited number of prepositional phrasesthe table lookup routine will employ a more traditional parsing algorithm when the number of phrases in the input sentence is not stored in the tablethe table lookup approach works for primitive grammarsthe next two sections show how to decompose composite grammars into series and parallel combinations of primitive grammarsparallel decomposition can be very useful for dealing with lexical ambiguity as in where quottotalquot can be taken as a noun or as a verb as in the accountant brought the daily sales to total with products near profits organized according to the new law noun the daily sales were ready for the accountant to total with products near profits organized according to the new law verb the analysis of these sentences makes use of the additivity property of linear systemsthat is each case and is treated separately and then the results are added togetherassuming quottotalquot is a noun there are three prepositional phrases contributing cat3 bracketings and assuming it is a verb there are two prepositional phrases for cat2 ambiguitiescombining the two cases produces cat3 cat2 5 2 7 parsesadding another prepositional phrase yields cat4 cat3 14 5 19 parsesthis behavior is generalized by the following power series this observation can be incorporated into the table lookup pseudoparser outlined aboverecall that cat is interpreted as the ith index in a table containing all binary trees dominating i leavessimilarly cati cati i will be interpreted as an instruction to quotappendquot the ith entry and i 1 th entry of the table10 let us consider a system where syntactic processing strictly precedes semantic and pragmatic processingin such a system how could we incorporate semantic and pragmatic heuristics once we have already parsed the input sentence and found that it was the sum 
of two catalansthe parser can simply subtract the inappropriate interpretationsif the oracle says that quottotalquot is a verb then would be subtracted from the combined sum and if the oracle says that quottotalquot is a noun then would be subtractedon the other hand our analysis is also useful in a system that interleaves syntactic processing with semantic and pragmatic processingsuppose that we had a semantic routine that could disambiguate quottotalquot but only at a very high cost in execution timewe need a way to estimate the usefulness of executing the semantic routine so that we do not spend the time if it is not likely to pay offthe analysis above provides a very simple way to estimate the benefit of disambiguating quottotalquot if it turns out to be a verb then trees have been ruled out and if it turns out to be a noun then trees have been ruled outwe prefer our declarative algebraic approach over procedural heuristic search strategies because we do not have to specify the order of evaluationwe can delay the binding of decisions until the most opportune momentsuppose we have a nonterminal s that is a series combination of two other nonterminals np and vpby inspection the power series of s is this result is easily verified when there is an unmistakable dividing point between the subject and the predicatefor example the verb quotisquot separates the pps in the subject from those in the predicate in but not in in the total number of parse trees is the product of the number of ways of parsing the subject times the number of ways of parsing the predicateboth the subject and the predicate produce a catalan number of parses and hence the result is the product of two catalan numbers which was verified by eqsp this result can be formalized in terms of the power series 10 this can be implemented efficiently given an appropriate representation of sets of treeskenneth church and ramesh patil coping with syntactic ambiguity the power series says that the ambiguity of a particular sentence is the product of cati and cat where i is the number of pps before quotisquot and j is the number after quotisquot this could be incorporated in the table lookup parser as an instruction to quotmultiplyquot the ith entry in the table by the jth entrymultiplication is a crossproduct operation l x r generates the set of binary trees whose left subtree l is from l and whose right subtree r is from are l x are 11 cl rr this is a formal definitionfor practical purposes it may be more useful for the parser to output the list in the factored form which is much more concise than a list of treesit is possible for example that semantic processing can take advantage of factoring capturing a semantic generalization that holds across all subjects or all predicatesimagine for example that there is a semantic agreement constraint between predicates and argumentsfor example subjects and predicates might have to agree on the feature humansuppose that we were given sentences where this constraint was violated by all ambiguous interpretations of the sentencein this case it would be more efficient to employ a feature vector scheme which propagates the features in factored formthat is it computes a feature vector for the union of all possible subjects and a vector for the union of all possible vps and then compares these vectors to check if there are any interpretations that meet the constrainta system such as this which keeps the parses in factored form is much more efficient than one that multiplies them outeven if semantics 
cannot take advantage of the factoring there is no harm in keeping the representation in factored form because it is straightforward to expand into a list of trees this example is relatively simple because quotisquot helps the parser determine the value of i and jnow let us return to example where quotisquot does not separate the two strings of ppsagain we determine the power series by multiplying the two subcases however this form is not so useful for parsing because the parser cannot easily determine i and j the number of prepositional phrases in the subject and the number in the predicateit appears the parser will have to compute the product of two catalans for each way of picking i and j which is somewhat expensive11 fortunately the catalan function has some special properties so that it is possible algebraically to remove the references to i and jin the next section we show how this expression can be reformulated in terms of n the total number of ppssome readers may have noticed that expression is in convolution formwe will make use of this in the reformulationnotice that the catalan series is a fixed point under autoconvolution that is multiplying a catalan power series with itself produces another polynomial with catalan coefficients12 the multiplication is worked out for the first few termsthis property can be summarized as e cat xi e cat x1i xi e catn x where n equals ijintuitively this equation says that if we have two quotevery way ambiguousquot constructions and we combine them in every possible way the result is an quotevery way ambiguousquot constructionwith this observation equation reduces to hence the number of parses in the auxiliaryinverted case is the catalan of one more than in the noninverted casesas predicted eqsp found the following inverted sentences to be more ambiguous than their noninverted counterparts by one catalan number11 earley algorithm and most other contextfree parsing algorithms actually work this way12 the proof immediately follows from the ztransform of the catalan series zb b 1 of products14 it was the number of products of products of products of productshow could this result be incorporated into the table lookup pseudoparserrecall that the pseudoparser implements catalan grammars by returning an index into the catalan tablefor example if there were i pps the parser would return we now extend the indexing scheme so that the parser implements a series connection of two catalan grammars by returning one higher index than it would for a simple catalan grammarthat is if there were n pps the parser would return series connections of catalan grammars are very common in every day natural language as illustrated by the following two sentences which have received considerable attention in the literature because the parser cannot separate the direct object from the prepositional complementboth examples have a catalan number of ambiguities because the autoconvolution of a catalan series yields another catalan series13 this result can improve parsing performance because it suggests ways to reorganize the grammar so that there will be fewer references to quantities that are not readily availablethis reorganization will reap benefits that chart parsers do not currently achieve because the reorganization is taking advantage of a number of combinatoric regularities especially convolution that are not easily encoded into a chartsection 9 presents an example of the reorganization13 there is a difference between these two sentences because quotputquot subcategorizes 
for two objects unlike quotseequot suppose we analyze quotseequot as lexically ambiguous between two senses one that selects for exactly two objects like quotputquot and one that selects for exactly one object as in quoti saw itquot the first sense contributes the same number of parses as quotputquot and the second sense contributes an additional catalan factorperhaps it is worthwhile to reformulate chart parsing in our terms in order to show which of the above results can be captured by such an approach and which cannottraditionally chart parsers maintain a chart m whose entries m1 contain the set of category labels that span from position i to position j in the input sentencethis is accomplished by finding a position k between i and j such that there is a phrase from i to k that can combine with another phrase from k to jan implementation of the inner loop looks something like essentially then a chart parser is maintaining the invariant where addition and multiplication of matrix elements is related to parallel and series combinationthus chart parsers are able to process very ambiguous sentences in polynomial time as opposed to exponential timehowever the examples above illustrate cases where chart parsers are not as efficient as they might bein particular chart parsers implement convolution the quotlong wayquot by picking each possible dividing point k and parsing from i to k and from k to j they do not reduce the convolution of two catalans as we did abovesimilarly chart parsers do not make use of the quotevery way ambiguousquot generalization given a catalan grammar chart parsers will eventually enumerate all possible values of i j and k thus far most of our derivations have been justified in terms of successive approximationit is also possible to derive some interesting results directly from the grammar itselfsuppose for the sake of discussion that we choose to analyze adjuncts with a right branching grammar14 first we translate the grammar into an equation in the usual waythat is adjs is modeled as a parallel combination of two subgrammars adj adjs and awe can simplify so the right hand side is expressed in terminal symbols alone with no references to nonterminalsthis is very useful for processing because it is much easier for the parser to determine the presence or absence of terminals than of nonterminalsthat is it is easier for the parser to determine for example whether a word is an adj than it is to decide whether a substring is an adjs phrasethe simplification moves all references to adjs to the left hand side by subtracting from both sides grammars like adjs will sometimes be referred to as a step by analogy to a unit step function in electrical engineering8computing the power series from the atn this section will rederive the power series for the unit step grammar directly from the atn representation by treating the networks as flow graphs the graph transformations presented here are directly analogous to the algebraic simplifications employed in the previous sectionfirst we translate the grammar into an atn in the usual way this graph can be simplified by performing a compiler optimization call tail recursion this transformation replaces the final push arc with a jump jump tail recursion corresponds directly to the algebraic operations of moving the adjs term to the left hand side factoring out the adjs and dividing from both sidesthen we remove the top jump arc by series reductionthis step corresponds to multiplying by 1 since a jump arc is the atn representation for the 
identity element under series combination where the zeroth term corresponds to zero iterations around the loop the first term corresponds to a single iteration the second term to two iterations and so onrecall that is equivalent to 1 1adj with this observation it is possible to open the loop adjs01 jump pop after one final series reduction the atn is equivalent to expression aboveintuitively an atn loop is a division operatorwe now have composition operators for parallel composition series composition and loops an atn loop can be implemented in terms of the table lookup scheme discussed abovefirst we reformulate the loop as an infinite sum then we construct a table so that the ith entry in the table tells the parser how to parse i occurrences of adjsuppose for example that we were given the following grammar by inspection we notice that np and pp are catalan grammars and that adjs is a step grammarwith these observations the parser can process pps nps and adjss by counting the number of occurrences of terminal symbols and looking up those numbers in the appropriate tableswe now substitute into vp v np adjs v and simplify the convolution of the two catalan functions vp v i so that the parser can also find vps by just counting coccurrences of terminal symbolsnow we simplify so that s phrases can also be parsed by just counting occurrences of terminal symbolsfirst translate into the equation furthermore the number of parse trees for a given input sentence can be found by multiplying three numbers the catalan of the number of p n before the verb the catalan of one more than the number of p n after the verb and the ramp of the number of adjfor example the sentence the man on the hill saw the boy with a telescope yesterday in the morning has cat cat2 3 6 parsesthat is there is one way to parse quotthe man on the hillquot two ways to parse quotsaw the boy with a telescopequot or is attached to quotboyquot as in and three ways to parse the adjuncts or they could both attach to the vp or they could split all and only these possibilities are permitted by the grammarwe began our discussion with the observation that certain grammars are quotevery way ambiguousquot and suggested that this observation could lead to improved parsing performancecatalan grammars were then introduced to remedy the situation so that the processor can delay attachment decisions until it discovers some more useful constraintsuntil such time the processor can do little more than note that the input sentence is quotevery way ambiguousquot we suggested that a table lookup scheme might be an effective method to implement such a processorwe then introduced rules for combining primitive grammars such as catalan grammars into composite grammarsthis linear systems view quotbundles upquot all the parse trees into a single concise description capable of telling us everything we might want to know about the parses this abstract view of ambiguity enables us to ask questions in the most convenient order and to delay asking until it is clear that the payoff will exceed the costthis abstraction was very strongly influenced by the notion of delayed bindingwe have presented combination rules in three different representation systems power series atns and contextfree grammars each of which contributed its own insightspower series are convenient for defining the algebraic operations atns are most suited for discussing implementation issues and contextfree grammars enable the shortest derivationsperhaps the following quotation best summarizes our 
motivation for alternating among these three representation systems a thing or idea seems meaningful only when we have several different ways to represent it different perspectives and different associationsthen you can turn it around in your mind so to speak however it seems at the moment you can see it another way you never come to a full stop in each of these representation schemes we have introduced five primitive grammars catalan unit step 1 and 0 and terminals and four composition rules addition subtraction multiplication and divisionwe have seen that it is often possible to employ these analytic tools in order to reorganize the grammar into a form more suitable for processing efficientlywe have identified certain situations where the ambiguity is combinatoric and have sketched a few modifications to the grammar that enable processing to proceed in a more efficient mannerin particular we have observed it to be important for the grammar to avoid referencing quantities that are not easily determined such as the dividing point between a noun phrase and a prepositional phrase as in put the block in the box on the table in the kitchen we have seen that the desired reorganization can be achieved by taking advantage of the fact that the autoconvolution of a catalan series produces another catalan seriesthis reduced processing time from 0 to almost linear timesimilar analyses have been discussed for a number of lexically and structurally ambiguous constructions culminating with the example in section 9 where we transformed a grammar into a form that could be parsed by a single lefttoright pass over the terminal elementscurrently these grammar reformulations have to be performed by handit ought to be possible to automate this process so that the reformulations could be performed by a grammar compilerwe leave this project open for future researchwe would like to thank jon allen sarah ferguson lowell hawkinson kris halvorsen bill long mitch marcus rohit parikh and peter szolovits for their very useful comments on earlier draftswe would especially like to thank bill martin for initiating the project
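A small Python sketch of the counting machinery discussed above: the closed-form Catalan numbers, the autoconvolution fixed-point identity used to collapse the subject/predicate convolution, and the three-factor parse count for the "the man on the hill saw the boy with a telescope yesterday in the morning" example. The reading ramp(n) = n + 1 for the adjunct factor, and the helper names, are assumptions inferred from the worked example rather than formulas quoted from the paper.

from math import comb

def catalan(n):
    # closed form: cat(n) = c(2n, n) - c(2n, n + 1) = c(2n, n) // (n + 1)
    return comb(2 * n, n) // (n + 1)

# the first ten catalan numbers: 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862
print([catalan(n) for n in range(10)])

def convolve(a, b):
    # term-by-term product of two truncated power series
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

# the catalan series is a fixed point under autoconvolution:
# sum_i cat(i) * cat(n - i) == cat(n + 1)
cats = [catalan(n) for n in range(12)]
assert convolve(cats, cats)[:11] == [catalan(n + 1) for n in range(11)]

def parse_count(pps_before_verb, pps_after_verb, adjuncts):
    # the three-factor count sketched in section 9: cat(#pp before the verb)
    # times cat(one more than #pp after the verb) times ramp(#adjuncts),
    # taking ramp(n) = n + 1, a reading inferred from the worked example
    return catalan(pps_before_verb) * catalan(pps_after_verb + 1) * (adjuncts + 1)

# "the man on the hill saw the boy with a telescope yesterday in the morning":
# one pp before the verb, one after it, two adjuncts -> cat(1) * cat(2) * 3 = 6
print(parse_count(1, 1, 2))   # 6

The assert is the reason the parser never has to guess the dividing point between the direct object and the prepositional complement: the convolution over all dividing points is itself a Catalan number indexed by the total phrase count.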
J82-3004
Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table. Sentences are far more ambiguous than one might have thought. There may be hundreds, perhaps thousands, of syntactic parse trees for certain very natural sentences of English. This fact has been a major problem confronting natural language processing, especially when a large percentage of the syntactic parse trees are enumerated during semantic/pragmatic processing. In this paper we propose some methods for dealing with syntactic ambiguity in ways that exploit certain regularities among alternative parse trees. These regularities will be expressed as linear combinations of ATN networks and also as sums and products of formal power series. We believe that such encoding of ambiguity will enhance processing, whether syntactic and semantic constraints are processed separately in sequence or interleaved together. The number of possible binary-branching parses of a sentence is defined by the Catalan number, an exponential combinatoric function.
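To give a rough sense of the growth claimed in the last sentence, the short, hypothetical sketch below (not part of the original abstract) prints the Catalan numbers for one through ten ambiguously attachable modifiers, on the assumption that each additional postmodifier, like the prepositional phrases in the title sentence, multiplies the attachment possibilities in the Catalan pattern.

def catalan(n):
    # Cat(n) via the convolution recurrence Cat(i) = sum_j Cat(j) * Cat(i - 1 - j).
    c = [1] * (n + 1)
    for i in range(1, n + 1):
        c[i] = sum(c[j] * c[i - 1 - j] for j in range(i))
    return c[n]

for k in range(1, 11):
    # k ambiguously attachable modifiers -> Cat(k) binary-branching analyses.
    print(k, catalan(k))
# 1 1, 2 2, 3 5, 4 14, 5 42, ..., 10 16796

Already at ten such modifiers there are 16,796 analyses, which is why the paper argues for a compact encoding of the whole family of parses rather than enumerating them one by one.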
attention intentions and the structure of discourse in this paper we explore a new theory of discourse structure that stresses the role of purpose and processing in discourse in this theory discourse structure is composed of three separate but interrelated components the structure of the sequence of utterances a structure of purposes and the state of focus of attention the linguistic structure consists of segments of the discourse into which the utterances naturally aggregate the intentional structure captures the discourserelevant purposes expressed in each of the linguistic segments as well as relationships among them the attentional state is an abstraction of the focus of attention of the participants as the discourse unfolds the attentional state being dynamic records the objects properties and relations that are salient at each point of the discourse the distinction among these components is essential to provide an adequate explanation of such discourse phenomena as cue phrases referring expressions and interruptions the theory of attention intention and aggregation of utterances is illustrated in the paper with a number of example discourses various properties of discourse are described and explanations for the behavior of cue phrases referring expressions and interruptions are explored this theory provides a framework for describing the processing of utterances in a discourse discourse processing requires recognizing how the utterances of the discourse aggregate into segments recognizing the intentions expressed in the discourse and the relationships among intentions and tracking the discourse through the operation of the mechanisms associated with attentional state this processing description specifies in these recognition tasks the role of information from the discourse and from the participants knowledge of the domain in this paper we explore a new theory of discourse structure that stresses the role of purpose and processing in discoursein this theory discourse structure is composed of three separate but interrelated components the structure of the sequence of utterances a structure of purposes and the state of focus of attention the linguistic structure consists of segments of the discourse into which the utterances naturally aggregatethe intentional structure captures the discourserelevant purposes expressed in each of the linguistic segments as well as relationships among themthe attentional state is an abstraction of the focus of attention of the participants as the discourse unfoldsthe attentional state being dynamic records the objects properties and relations that are salient at each point of the discoursethe distinction among these components is essential to provide an adequate explanation of such discourse phenomena as cue phrases referring expressions and interruptionsthe theory of attention intention and aggregation of utterances is illustrated in the paper with a number of example discoursesvarious properties of discourse are described and explanations for the behavior of cue phrases referring expressions and interruptions are exploredthis theory provides a framework for describing the processing of utterances in a discoursediscourse processing requires recognizing how the utterances of the discourse aggregate into segments recognizing the intentions expressed in the discourse and the relationships among intentions and tracking the discourse through the operation of the mechanisms associated with attentional statethis processing description specifies in these 
recognition tasks the role of information from the discourse and from the participants knowledge of the domainthis paper presents the basic elements of a computational theory of discourse structure that simplifies and expands upon previous workby specifying the basic units a discourse comprises and the ways in which they can relate a proper account of discourse structure provides the basis for an account of discourse meaningan account of discourse structure also plays a central role in language processing because it stipulates constraints on those portions of a discourse to which any given utterance in the discourse must be relatedan account of discourse structure is closely related to two questions what individuates a discoursewhat makes it coherentthat is faced with a sequence of utterances how does one know whether they constitute a single discourse several discourses or noneas we develop it the theory of discourse structure will be seen to be intimately connected with two nonlinguistic notions intention and attentionattention is an essential factor in explicating the processing of utterances in discourseintentions play a primary role in explaining discourse structure defining discourse coherence and providing a coherent conceptualization of the term quotdiscoursequot itselfcopyright 1986 by the association for computational linguisticspermission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the cl reference and this copyright notice are included on the first pageto copy otherwise or to republish requires a fee andor specific permissionthe theory is a further development and integration of two lines of research work on focusing in discourse and more recent work on intention recognition in discourse our goal has been to generalize these constructs properly to a wide range of discourse typesgrosz demonstrated that the notions of focusing and task structure are necessary for understanding and producing taskoriented dialogueone of the main generalizations of previous work will be to show that discourses are generally in some sense quottaskorientedquot but the kinds of quottasksquot that can be engaged in are quite varied some are physical some mental others linguisticconsequently the term quottaskquot is misleading we therefore will use the more general terminology of intentions for most of what we sayour main thesis is that the structure of any discourse is a composite of three distinct but interacting components the distinction among these components is essential to an explanation of interruptions as well as to explanations of the use of certain types of referring expressions and various other expressions that affect discourse segmentation and structure most related work on discourse structure fails to distinguish among some of these componentsas a result significant generalizations are lost and the computational mechanisms proposed are more complex than necessaryby carefully distinguishing these components we are able to account for significant observations in this related work while simplifying both the explanations given and computational mechanisms usedin addition to explicating these linguistic phenomena the theory provides an overall framework within which to answer questions about the relevance of various segments of discourse to one another and to the overall purposes of the discourse participantsvarious properties of the intentional component have implications for research in naturallanguage 
processing in generalin particular the intentions that underlie discourse are so diverse that approaches to discourse coherence based on selecting discourse relationships from a fixed set of alternative rhetorical patterns are unlikely to sufficethe intentional structure introduced in this paper depends instead on a small number of structural relations that can hold between intentionsthis study also reveals several problems that must be confronted in expanding speechactrelated theories from coverage of individual utterances to coverage of extended sequences of utterances in discoursealthough a definition of discourse must await further development of the theory presented in this paper some properties of the phenomena we want to explain must be specified nowin particular we take a discourse to be a piece of language behavior that typically involves multiple utterances and multiple participantsa discourse may be produced by one or more of these participants as speakers or writers the audience may comprise one or more of the participants as hearers or readersbecause in multiparty conversations more than one participant may speak different utterances within a segment the terms speaker and hearer do not differentiate the unique roles that the participants maintain in a segment of a conversationwe will therefore use the terms initiating conversational participant and other conversational participant to distinguish the initiator of a discourse segment from its other participantsthe icp speaks the first utterance of a segment but an ocp may be the speaker of some subsequent utterancesby speaking of icps and ocps we can highlight the purposive aspect of discoursewe will use the terms speaker and hearer only when the particular speakinghearing activity is important for the point being madein most of this paper we will be concerned with developing an abstract model of discourse structure in particular the definitions of the components will abstract away from the details of the discourse participantswhether one constructs a computer system that can participate in a discourse or defines a psychological theory of language use the task will require the appropriate projection of this abstract model onto properties of a language user and specification of additional details we do however address ourselves directly to certain processing issues that are essential to the computational validity of the abstract model and to its utilization for a languageprocessing system or psychological theoryfinally it is important to note that although discourse meaning is a significant unsolved problem we will not address it in this paperan adequate theory of discourse meaning needs to rest at least partially on an adequate theory of discourse structureour concern is with providing the latterthe next section examines the basic theory of discourse structure and presents an overview of each of the components of discourse structuresection 3 analyzes two sample discourses a written text and a fragment of taskoriented dialogue from the perspective of the theory being developed these two examples are also used to illustrate various points in the remainder of the papersection 4 investigates various processing issues that the theory raisesthe following two sections describe the role of the discourse structure components in explaining various properties of discourse thereby corroborating the necessity of distinguishing among its three componentssection 7 describes the generalization from utterancelevel to discourselevel intentions 
establishes certain properties of the latter and contrasts them with the rhetorical relations of alternative theoriesfinally section 8 poses a number of outstanding research questions suggested by the theorydiscourse structure is a composite of three interacting constituents a linguistic structure an intentional structure and an attentional statethese three constituents of discourse structure deal with different aspects of the utterances in a discourseutterances the actual saying or writing of particular sequences of phrases and clauses are the linguistic structure basic elementsintentions of a particular sort and a small number of relationships between them provide the basic elements of the intentional structureattentional state contains information about the objects properties relations and discourse intentions that are most salient at any given pointit is an abstraction of the focus of attention of the discourse participants it serves to summarize information from previous utterances crucial for processing subsequent ones thus obviating the need for keeping a complete history of the discoursetogether the three constituents of discourse structure supply the information needed by the cps to determine how an individual utterance fits with the rest of the discourse in essence enabling them to figure out why it was said and what it meansthe context provided by these constituents also forms the basis for certain expectations about what is to come these expectations play a role in accommodating new utterancesthe attentional state serves an additional purpose namely it furnishes the means for actually using the information in the other two structures in generating and interpreting individual utterancesthe first component of discourse structure is the structure of the sequence of utterances that comprise a discourse1 just as the words in a single sentence form constituent phrases the utterances in a discourse are naturally aggregated into discourse segmentsthe utterances in a segment like the words in a phrase serve particular roles with respect to that segmentin addition the discourse segments like the phrases fulfill certain functions with respect to the overall discoursealthough two consecutive utterances may be in the same discourse segment it is also common for two consecutive utterances to be in different segmentsit is also possible for two utterances that are nonconsecutive to be in the same segmentthe factoring of discourses into segments has been observed across a wide range of discourse typesgrosz showed this for taskoriented dialogueslinde found it valid for descriptions of apartments linde and goguen describe such structuring in the watergate transcriptsreichmanadar observed it in informal debates explanations and therapeutic discoursecohen found similar structures in essays in rhetorical textspolanyi and scha discuss this feature of narrativesalthough different researchers with different theories have examined a variety of discourse types and found discourselevel segmentation there has been very little investigation of the extent of agreement about where the segment boundaries liethere have been no psychological studies of the consistency of recognition of section boundarieshowever mann asked several people to segment a set of dialogueshe has reported personal communication that his subjects segmented the discourses approximately the same their disagreements were about utterances at the boundaries of segments2 several studies of spontaneously produced discourses provide additional 
evidence of the existence of segment boundaries as well as suggesting some of the linguistic cues available for detecting boundarieschafe found differences in pause lengths at segment boundariesbutterworth found speech rate differences that correlated with segments speech rate is slower at start of a segment than toward the endthe linguistic structure consists of the discourse segments and an embedding relationship that can hold between themas we discuss in sections 22 and 5 the embedding relationships are a surface reflection of relationships among elements of the intentional structureit is important to recognize that the linguistic structure is not strictly decompositionalan individual segment may include a combination of subsegments and utterances only in that segment both of the examples in section 3 exhibit such nonstrict decompositionalitybecause the linguistic structure is not strictly decompositional various properties of the discourse are functions of properties of individual utterances and properties of segmentsthere is a twoway interaction between the discourse segment structure and the utterances constituting the discourse linguistic expressions can be used to convey information about the discourse structure conversely the discourse structure constrains the interpretation of expressions not surprisingly linguistic expressions are among the primary indicators of discourse segment boundariesthe explicit use of certain words and phrases and more subtle cues such as intonation or changes in tense and aspect are included in the repertoire of linguistic devices that function wholly or in part to indicate these boundaries reichman discusses some words that function in this way and coined the term clue wordswe will use the term cue phrases to generalize on her observation as well as many others because each one of these devices cue the hearer to some change in the discourse structureas discussed in section 6 these linguistic boundary markers can be divided according to whether they explicitly indicate changes in the intentional structure or in the attentional state of the discoursethe differential use of these linguistic markers provides one piece of evidence for considering these two components to be distinctbecause these linguistic devices function explicitly as indicators of discourse structure it becomes clear that they are best seen as providing information at the discourse level and not at the sentence level hence certain kinds of questions do not make sensefor example in the utterance incidentally jane swims every day the incidentally indicates an interruption of the main flow of discourse rather than affecting in any way the meaning of jane swims every dayjane swimming every day could hardly be fortuitousjust as linguistic devices affect structure so the discourse segmentation affects the interpretation of linguistic expressions in a discoursereferring expressions provide the primary example of this effect3 the segmentation of discourse constrains the use of referring expressions by delineating certain points at which there is a significant change in what entities are being discussedfor example there are different constraints on the use of pronouns and reduced definitenoun phrases within a segment than across segment boundarieswhile discourse segmentation is obviously not the only factor governing the use of referring expressions it is an important onea rather straightforward property of discourses namely that they have an overall purpose turns out to play a fundamental role in 
the theory of discourse structurein particular some of the purposes that underlie discourses and their component segments provide the means of individuating discourses and of distinguishing discourses that are coherent from those that are notthese purposes also make it possible to determine when a sequence of utterances comprises more than one discoursealthough typically the participants in a discourse may have more than one aim in participating in the discourse we distinguish one of these purposes as foundational to the discoursewe will refer to it as the discourse purpose from an intuitive perspective the discourse purpose is the intention that underlies engaging in the particular discoursethis intention provides both the reason a discourse rather than some other action is being performed and the reason the particular content of this discourse is being conveyed rather than some other informationfor each of the discourse segments we can also single out one intention the discourse segment purpose from an intuitive standpoint the dsp specifies how this segment contributes to achieving the overall discourse purposethe assumption that there are single such intentions will in the end prove too stronghowever this assumption allows us to describe the basic theory mare clearlywe must leave to future research the exploration and discussion of the complications that result from relaxing this assumptiontypically an icp will have a number of different kinds of intentions that lead to initiating a discourseone kind might include intentions to speak in a certain language or to utter certain wordsanother might include intentions to amuse or to impressthe kinds of intentions that can serve as discourse purposes or discourse segment purposes are distinguished from other intentions by the fact that they are intended to be recognized whereas other intentions are private that is the recognition of the dp or dsp is essential to its achieving its intended effectdiscourse purposes and discourse segment purposes share this property with certain utterancelevel intentions that grice uses in defining utterance meaning it is important to distinguish intentions that are intended to be recognized from other kinds of intentions that are associated with discourseintentions that are intended to be recognized achieve their intended effect only if the intention is recognizedfor example a compliment achieves its intended effect only if the intention to compliment is recognized in contrast a scream of boo typically achieves its intended effect without the hearer having to recognize the speaker intentionsome intention that is private and not intended to be recognized may be the primary motivation for an icp to begin a discoursefor example the icp may intend to impress someone or may plan to teach someonein neither case is the icp intention necessarily intended to be recognizedquite the opposite may be true in the case of impressing as the icp may not want the ocp to be aware of his intentionwhen teaching the icp may not care whether the ocp knows the icp is teaching him or herthus the intention that motivates the icp to engage in a discourse may be privateby contrast the discourse segment purpose is always intended to be recognizeddps and dsps are basically the same sorts of intentionsif an intention is a dp then its satisfaction is a main purpose of the discourse whereas if it is a dsp then its satisfaction contributes to the satisfaction of the dpthe following are some of the types of intentions that could serve as dpdsps 
followed by one example of each typewe have identified two structural relations that play an important role in discourse structure dominance and satisfactionprecedencean action that satisfies one intention say dsp1 may be intended to provide part of the satisfaction of another say dsp2when this is the case we will say that dsp1 contributes to dsp2 conversely we will say that dsp2 dominates dsp1 the dominance relation invokes a partial ordering on dsps that we will refer to as the dominance hierarchyfor some discourses including taskoriented ones the order in which the dsps are satisfied may be significant as well as being intended to be recognizedwe will say that dsp1 satisfactionprecedes dsp2 whenever dsp1 must be satisfied before dsp24 any of the intentions on the preceding list could be either a dp or a dspfurthermore a given instance of any one of them could contribute to another or to a different instance of the same typefor example the intention that someone intend to identify some object might dominate several intentions that she or he know some property of that object likewise the intention to get someone to believe some fact might dominate a number of contributing intentions that that person believe other factsas the above list makes clear the range of intentions that can serve as discourse or discourse segment purposes is openended much like the range of intentions that underlie more general purposeful actionthere is no finite list of discourse purposes as there is say of syntactic categoriesit remains an unresolved research question whether there is a finite description of the openended set of such intentionshowever even if there were finite descriptions there would still be no finite list of intentions from which to choosethus a theory of discourse structure cannot depend on choosing the dpdsps from a fixed list nor on the particulars of individual intentionsalthough the particulars of individual intentions like a wide range of common sense knowledge are crucial to understanding any discourse such particulars cannot serve as the basis for determining discourse structurewhat is essential for discourse structure is that such intentions bear certain kinds of structural relationships to one anothersince the cps can never know the whole set of intentions that might serve as dpdsps what they must recognize is the relevant structural relationships among intentionsalthough there is an infinite number of intentions there are only a small number of relations relevant to discourse structure that can hold between themin this paper we distinguish between the determination of the dsp and the recognition of itwe use the term determination to refer to a semanticlike notion namely the complete specification of what is intended by whom we use the term recognition to refer to a processing notion namely the processing that leads a discourse participant to identify what the intention isthese are obviously related concepts the same information that determines a dsp may be used by an ocp to recognize ithowever some questions are relevant to only one of themfor example the question of when the information becomes available is not relevant to determination but is crucial to recognitionan analogous distinction has been drawn with respect to sentence structure the parse tree is differentiated from the parsing process that produces the treethe third component of discourse structure the attentional state is an abstraction of the participants focus of attention as their discourse unfoldsthe attentional 
state is a property of the discourse itself not of the discourse participantsit is inherently dynamic recording the objects properties and relations that are salient at each point in the discoursethe attentional state is modeled by a set of focus spaces changes in attentional state are modeled by a set of transition rules that specify the conditions for adding and deleting spaceswe call the collection of focus spaces available at any one time the focusing structure and the process of manipulating spaces focusingthe focusing process associates a focus space with each discourse segment this space contains those entities that are salient either because they have been mentioned explicitly in the segment or because they became salient in the process of producing or comprehending the utterances in the segment the focus space also includes the dsp the inclusion of the purpose reflects the fact that the cps are focused not only on what they are talking about but also on why they are talking about itto understand the attentional state component of discourse structure it is important not to confuse it with two other conceptsfirst the attentional state component is not equivalent to cognitive state but is only one of its componentscognitive state is a richer structure one that includes at least the knowledge beliefs desires and intentions of an agent as well as the cognitive correlates of the attentional state as modeled in this papersecond although each focus space contains a dsp the focus structure does not include the intentional structure as a wholefigure 1 illustrates how the focusing structure in addition to modeling attentional state serves during processing to coordinate the linguistic and intentional structuresthe discourse segments are tied to focus spaces the focusing structure is a stackinformation in lower spaces is usually accessible from higher ones we use a line with intersecting hash marks to denote when this is not the casesubscripted terms are used to indicate the relevant contents of the focus spaces because the spaces contain representations of entities and not linguistic expressionspart one of figure 1 shows the state of focusing when discourse segment ds2 is being processedsegment ds1 gave rise to fs1 and had as its discourse purpose dspithe properties objects relations and purpose represented in fs1 are accessible but less salient than those in fs2ds2 yields a focus space that is stacked relative to fs1 because dsp of ds1 dominates ds2 dsp dsp2as a result of the relationship between fs1 and fs2 reduced noun phrases will be interpreted differently in ds2 than in ds1for example if some red balls exist in the world one of which is represented in ds2 and another in fs1 then the red ball used in ds2 will be understood to mean the particular red ball that is represented in ds2if however there is also a green truck and it is represented only in fs1 the green truck uttered in ds2 will be understood as referring to that green truckpart two of figure 1 shows the state of focusing when segment ds3 is being processedfs2 has been popped from the stack and fs3 has been pushed onto it because the dsp of ds3 dsp3 is dominated solely by dspi not by dsp2in this example the intentional structure includes only dominance relationships although it may in general also include satisfactionprecedence relationshipsthe stacking of focus spaces reflects the relative salience of the entities in each space during the corresponding segment portion of the discoursethe stack relationships arise from the ways 
in which the various dsps relate information about such relationships is represented in the dominance hierarchy the spaces in figure 1 are snapshots illustrating the results of a sequence of operations such as pushes onto and pops from a stacka push occurs when the dsp for a new segment contributes to the dsp for the immediately preceding segmentwhen the dsp contributes to some intention higher in the dominance hierarchy several focus spaces are popped from the stack before the new one is insertedtwo essential properties of the focusing structure are now clearfirst the focusing structure is parasitic upon the intentional structure in the sense that the relationships among dsps determine pushes and popsnote however that the relevant operation may sometimes be indicated in the language itselffor example the cue word first often indicates the start of a segment whose dsp contributes to the dsp of the preceding segmentsecond the focusing structure like the intentional and linguistic structures evolves as the discourse proceedsnone of them exists a priorieven in those rare cases in which an icp has a complete plan for the discourse prior to uttering a single word the intentional structure is constructed by the cps as the discourse progressesthis discoursetime construction of the intentional structure may be more obviously true for speakers and hearers of spoken discourse than for readers and writers of texts but even for the writer the intentional structure is developed as the text is being writtenfigure 1 illustrates some fundamental distinctions between the intentional and attentional components of discourse structurefirst the dominance hierarchy provides among other things a complete record of the discourselevel intentions and their dominance relationships whereas the focusing structure at any one time can essentially contain only information that is relevant to purposes in a portion of the dominance hierarchysecond at the conclusion of a discourse if it completes normally the focus stack will be empty while the intentional structure will have been fully constructedthird when the discourse is being processed only the attentional state can constrain the interpretation of referring expressions directlywe can now also clarify some misinterpretations of focusspace diagrams and task structure in our earlier work the focusspace hierarchies in that work are best seen as representing attentional statethe task structure was used in two ways although the same representational scheme was used for encoding the focusspace hierarchies and the task structure the two structures were distinctbarbara j grosz and candace l sidner attention intentions and the structure of discourse several researchers misinterpreted the original research in an unfortunate and unintended way they took the focusspace hierarchy to include the task structurethe conflation of these two structures forces a single structure to contain information about attentional state intentional relationships and general task knowledgeit prevents a theory from accounting adequately for certain aspects of discourse including interruptions a second instance of confusion was to infer that the task structure was necessarily a prebuilt treeif the task structure is taken to be a special case of intentional structure it becomes clear that the tree structure is simply a more constrained structure than one might require for other discourses the nature of the task related to the taskoriented discourse is such that the dominance hierarchy of the intentional 
structure of the dialogue has both dominance and satisfactionprecedence relationships5 while other discourses may not exhibit significant precedence constraints among the dspsfurthermore there has never been any reason to assume that the task structures in taskoriented dialogues are prebuilt any more than the intentional structure of any other kind of discoursesit is rather that one objective of discourse theory is to explain how the ocp builds up a model of the task structure by using information supplied in the discoursehowever it is important to note that conflating the aforementioned two roles of information about the task itself was regrettable as it fails to make an important distinctionfurthermore as is clear when intentional structures are considered more generally such a conflation of roles does not allow for differences between what one knows about a task and one intentions for performing a taskin summary the focusing structure is the central repository for the contextual information needed to process utterances at each point in the discourseit distinguishes those objects properties and relations that are most salient at that point and moreover has links to relevant parts of both the linguistic and intentional structuresduring a discourse an increasing amount of information only some of which continues to be needed for the interpretation of subsequent utterances is discussedhence it becomes more and more necessary to be able to identify relevant discourse segments the entities they make salient and their dspsthe role of attentional state in delineating the information necessary for understanding is thus central to discourse processingto illustrate the basic theory we have just sketched we will give a brief analysis of two kinds of discourse an argument from a rhetoric text and a taskoriented dialoguefor each example we discuss the segmentation of the discourse the intentions that underlie this segmentation and the relationships among the various dspsin each case we point out some of the linguistic devices used to indicate segment boundaries as well as some of the expressions whose interpretations depend on those boundariesthe analysis is concerned with specifying certain aspects of the behavior to be explicated by a theory of discourse the remainder of the paper provides a partial account of this behaviorour first example is an argument taken from a rhetoric text it is an example used by cohen in her work on the structure of argumentsfigure 2 shows the dialogue and the eight discourse segments of which it is composedthe division of the argument into separate clauses is cohen but our analysis of the discourse structure is different since in cohen analysis every utterance is directly subordinated to another utterance and there is only one structure to encode linguistic segmentation and the purposes of utterancesalthough both analyses segment utterance separately from utterances some readers place this utterance in ds1 with utterances through this is an example of the kind of disagreement about boundary utterances found in mann data the two placements lead to slightly different dsps but not to radically different intentional structuresbecause the differences do not affect the major thrust of the argument we will discuss only one segmentationcomputational linguistics volume 12 number 3 julyseptember 1986 figure 3 lists the primary component of the dsp for each of these segments and figure 4 shows the dominance relationships that hold among these intentionsin section 7 we discuss 
additional components of the discourse segment purpose because these additional components are more important for completeness of the theory than for determining the essential dominance and satisfactionprecedence relationships between dsps we omit such details hererather than commit ourselves to a formal language in which to express the intentions of the discourse we will use a shorthand notation and english sentences that are intended to be a gloss for a formal statement of the actual intentions where po the proposition that parents and teachers should guard the young from overindulgence in the moviesii where p1 the proposition that it is time to consider the effect of movies on mind and morals where p2 the proposition that young people cannot drink in through their eyes a continuous spectacle of intense and strained activity without harmful effects where p3 the proposition that it is undeniable that great educational and ethical gains may be made through the movies where p4 the proposition that although there are gains the total result of continuous and indiscriminate attendance at movies is harmful15 where p5 the proposition that the content of movies is not the best16 where p6 the proposition that the stories in movies are exciting and overemotional17 where p7 the proposition that movies portray strong emotion and buffoonery while neglecting the quiet and reasonable aspects of lifeall the primary intentions for this essay are intentions that the reader come to believe some propositionsome of these propositions such as p5 and p6 can be read off the surface utterances directlyother propositions and the intentions of which they are part such as p2 and 12 are mote indirectlike the gricean utterancetlevel intentions dsps may or may not be directly expressed in the discoursein particular they may be expressed in any of the following ways not only may information about the dsp be conveyed by a number of features of the utterances in a discourse but it also may come in any utterance in a segmentfor example although io is the dp it is stated directly only in the last utterance of the essaythis leads to a number of questions about the ways in which ocps can recognize discourse purposes and about those junctures at which they need to do sowe turn to these matters directly in subsection 41this discourse also provides several examples of the different kinds of interactions that can hold between the linguistic expressions in a discourse and the discourse structureit includes examples of the devices that may be used to mark overtly the boundaries between discourse segments examples of the use of aspect mood and particular cue phrases as well as of the use of referring expressions that are affected by discourse segment boundariesthe use of cue phrases to indicate discourse boundaries is illustrated in utterances and in the phrase in the first place marks the beginning of ds5 while in moreover ends ds5 and marks the start of ds6these phrases also carry information about the intentional structure namely that dsp5 and dsp6 are dominated by dsp4in some cases cue phrases have multiple functions they convey propositional content as well as marking discourse segment boundariesthe but in utterance is an example of such a multiple function usethe boundaries between ds1 and ds2 ds4 and ds5 and ds4 and ds2 reflect changes of aspect and moodthe switch from declarative present tense to interrogative modal aspect does not in itself seem to signal the boundary in this discourse unambiguously but it does indicate a 
possible line of demarcation which in fact is validthe effect of segmentation on referring expressions is shown by the use of the generic noun phrase a moving picture show in although a reference to the movies was made with a pronoun in a full noun phrase is used in this use reflects and perhaps in part marks the boundary between the segments ds1 and ds2finally this discourse has an example of the tradeoff between explicitly marking a discourse boundary as well as the relationship between the associated dsps and reasoning about the intentions themselvesthere is no overt linguistic marker of the beginning of ds7 its separation must be inferred from dsp7 and its relationship to dsp6the second example is a fragment of a taskoriented dialogue taken from grosz figure 5 contains the dialogue fragment and indicates the boundaries for its main segments7 figure 6 gives the primary component of the dsps for this fragment and shows the dominance rela1ionships between themin contrast with the movies essay the primary components of the dsps in this dialogue are mostly intentions of the segment icp that the ocp intend to perform some actionalso unlike the essay the dialogue has two agents initiating the different discourse segmentsin this particular segment the expert is the icp of ds1 and ds5 while the apprentice is the icp of ds24to furnish a complete account of the intentional structure of this discourse one must be able to say how the satisfaction of one agent intentions can contribute to satisfying the intentions of another agentsuch an account is beyond the scope of this paper but in section 7 we discuss some of the complexities involved in providing one for the purposes of discussing this example though we need to postulate two properties of the relationships among the participants intentionsthese properties seem to be rooted in features of cooperative behavior and depend on the two participants sharing some particular knowledge of the taskfirst it is a shared belief that unless he states otherwise the ocp will adopt the intention to perform an action that the icp intended him tosecond in adopting the intention to carry out that action the ocp also intends to perform whatever subactions are necessarythus once the apprentice intends to remove the flywheel he also commits himself to the collateral intentions of loosening the setscrews and pulling the wheel offnote however that not all the subactions need to be introduced explicitly into the discoursethe apprentice may do several actions that are never mentioned and the expert may assume that these are being undertaken on the basis of other information that the apprentice obtainsthe partiality of the intentional structure stems to some extent from these characteristics of intentions and actions place the jaws around the hub of the wheel then tighten the screw onto the center of the shaftthe wheel should slide offas in the movies essay some of the dsps for this dialogue are expressed directly in utterancesfor instance utterances and directly express the primary components of dsp dsp2 and dsp3 respectivelythe primary component of dsp4 is a derived intentionthe surface intention of but i am having trouble getting the wheel off is that the apprentice intends the expert to believe that the apprentice is having trouble taking off the flywheel14 is derived from the utterance and its surface intention as well as from features of discourse conventions about what intentions are associated with the i am having trouble doing x type of utterance and what the icp 
and ocp know about the task they have undertakenthe dominance relationship that holds between il and 12 as well as the one that holds between il and 13 may seem problematic at first glanceit is not clear how locating any single setscrew contributes to removing the flywheelit is even less clear how in and of itself identifying another tool doestwo facts provide the link first that the apprentice has taken on the task of removing the flywheel second that the apprentice and expert share certain knowledge about the tasksome of this shared task knowledge comes from the discourse per se eg utterance but some of it comes from general knowledge perceptual information and the likethus a combination of information is relevant to determining 12 and 13 and their relationships to 1 including all of the following the fact that ii is part of the intentional structure the fact that the apprentice is currently working on satisfying ii the utterancelevel intentions of utterances and and general knowledge about the taskthe satisfactionprecedence relations among 12 13 and 14 are not communicated directly in the dialogue but like dominance relations depend on domain knowledgeone piece of relevant knowledge is that a satisfaction precedence relation exists between loosening the setscrews and pulling off the flywheelthat relation is shared knowledge that is stated directly the relation along with the fact that both 12 and 13 contribute to loosening the setscrews and that 14 contributes to pulling off the flywheel makes it possible to conclude 13 sp 14 and 12 sp 14to conclude that 12 sp 13 the apprentice must employ knowledge of how to go about loosening screwlike objectsthe dominance and satisfactionprecedence relations for this taskoriented fragment form a tree of intentions rather than just a partial orderingin general however for any fragment taskoriented or otherwise this is not necessaryit is essential to notice that the intentional structure is neither identical to nor isomorphic to a general plan for removing the flywheelit is not identical because a plan encompasses more than a collection of intentions and relationships between them critique of al planning formalisms as the basis for inferring intentions in discourseit is not isomorphic because the intentional structure has a different substructure from the general plan for removing the flywheelin addition to the intentions arising from steps in the plan the intentional structure typically contains dsps corresponding to intentions generated by the particular execution of the task and the dialoguefor example the general plan for the disassembly of a flywheel includes subplans for loosening the setscrews and pulling off the wheel it might also include subplans for finding the setscrews finding a tool with which to loosen the screws and loosening each screw individuallyhowever this plan would not contain contingency subplans for what to do when one cannot find the screws or realizes that the available tool is unsatisfactoryintentions 12 and 13 stem from difficulties encountered in locating and loosening the setscrewsthus the intentional structure for this fragment is not isomorphic to the general plan for removing the flywheelutterance offers another example of the difference between the intentional structure and a general plan for the taskthis utterance is part of ds4 not just part of ds1 even though it contains references to more than one single part of the overall task it functions to establish a new dsp 14 as most salientrather than being regarded as a 
report on the overall status of the task the first clause is best seen as modifying the dsp8 with it the apprentice tells the expert that the trouble in removing the wheel is not with the screwsthus although general task knowledge is used in determining the intentional structure it is not identical to itin this dialogue there are fewer instances in which cue phrases are employed to indicate segment boundaries than occur in the movies essaythe primary example is the use of first in to mark the start of the segment and to indicate that its dsp is the first of several intentions whose satisfaction will contribute to satisfying the larger discourse of which they are a partthe dialogue includes a clear example of the influence of discourse structure on referring expressionsthe phrase the screw in the center is used in to refer to the center screw of the wheelpuller not one of the two setscrews mentioned in this use of the phrase is possible because of the attentional state of the discourse structure at the time the phrase is utteredin previous sections of the paper we abstracted from the cognitive states of the discourse participantsthe various components of discourse structure discussed so far are properties of the discourse itself not of the discourse participantsto use the theory in constructing computational models requires determining how each of the individual components projects onto the model of an individual discourse participantin this regard the principal issues include specifying in essence the ocp must judge for each utterance whether it starts a new segment ends the current one or contributes to the current onethe information available to the ocp for recognizing that an utterance starts a new segment includes any explicit linguistic cues contained in the utterance as well as the relationship between its utterancelevel intentions and the active dsps likewise the fact that an utterance ends a segment may be indicated explicitly by linguistic cues or implicitly from its utterancelevel intentions and their relationship to elements of the intentional structureif neither of these is the case the utterance is part of the current segmentthus intention recognition and focus space management play key roles in processingmoreover they are also related the intentional structure is a primary factor in determining focus space changes and the focus space structure helps constrain the intention recognition processthe recognition of dpdsps is the central issue in the computational modeling of intentional structureif as we have claimed for the discourse to be coherent and comprehensible the ocp must be able to recognize both the dpdspsl and relationships between them then the question of how the ocp does so is a crucial issuefor the discourse as a whole as well as for each of its segments the ocp must identify both the intention that serves as the discourse segment purpose and its relationship to other discourselevel intentionsin particular the ocp must be able to recognize which other dsps that specific intention dominates and is dominated by and where relevant with which other dsps it has satisfactionprecedence relationshipstwo issues that are central to the recognition problem are what information the ocp can utilize in effecting the recognition and at what point in the discourse that information becomes availablean adequate computational model of the recognition process depends critically on an adequate theory of intention and action this of course is a large research problem in itself and one 
not restricted to matters of discoursethe need to use such a model for discourse however adds certain constraints on the adequacy of any theory or modelpollack describes several properties such theories and models must possess if they are to be adequate for supporting recognition of intention in singleutterance queries she shows how current al planning models are inadequate and proposes an alternative planning formalismthe need to enable recognition of discourselevel intentions leads to yet another set of requirementsas will become clear in what follows the information available to the ocp comes from a variety of sourceseach of these can typically provide partial information about the dsps and their relationshipsthese sources are each partially constraining but only in their ensemble do they constrain in fullto the extent that more information is furnished by any one source commensurately less is needed from the othersthe overall processing model must be one of constraint satisfaction that can operate on partial informationit must allow for incrementally constraining the range of possibilities on the basis of new information that becomes available as the segment progressesat least three different kinds of information play a role in the determination of the dsp specific linguistic markers utterancelevel intentions and general knowledge about actions and objects in the domain of discourseeach plays a part in the ocp recognition of the dsp and can be utilized by the icp to facilitate this recognitioncue phrases are the most distinguished linguistic means that speakers have for indicating discourse segment boundaries and conveying information about the dsprecent evidence by hirschberg and pierrehumbert suggests that certain intonational properties of utterances also provide partial information about the dsp relationshipsbecause some cue phrases may be used as clausal connectors there is a need to distinguish their discourse use from their use in conveying propositional content at the utterance levelfor example the word but functions as a boundary marker in utterance of the discourse in section 31 but it can also be used solely to convey propositional content and serve to connect two clauses within a segmentas discussed in section 6 cue phrases can provide information about dominance and satisfactionprecedence relationships between segments dspshowever they may not completely specify which dsp dominates or satisfactionprecedes the dsp of the segment they startfurthermore cue phrases that explicitly convey information only about the attentional structure may be ambiguous about the state to which attention is to shiftfor example if there have been several interruptions the phrase but anyway indicates a return to some previously interrupted discourse but does not specify which onealthough cue phrases do not completely specify a dsp the information they provide is useful in limiting the options to be consideredthe second kind of information the ocp has available is the utterancelevel intention of each utterance in the discourseas the discussion of the movies example pointed out the dsp may be identical to the utterancelevel intention of some utterance in the segmentalternatively the dsp may combine the intentions of several utterances as is illustrated in the following discourse segment i want you to arrange a trip for me to palo altoit will be for two weeksi only fly on twathe dsp for this segment is roughly that the icp intends for the ocp to make trip arrangements for the icp to go to palo alto 
for two weeks under the constraint that any flights be on twathe gricean intentions for these three utterances are as follows utterance i icp intends that ocp believe that icp intends that ocp intend to make trip plans for icp to go to palo alto utterance2 icp intends that ocp believe that icp intends ocp to believe that the trip will last two weeks utterance3 icp intends that ocp believe that icp intends ocp to believe that icp flies only on twa these intentions must be combined in some way to produce the dspthe process is quite complex since the ocp must recognize that the reason for utterances 2 and 3 is not simply to have some new beliefs about the icp but to use those beliefs in arranging the tripwhile this example fits the schema of a request followed by two informings schemata will not suffice to represent the behavior as a general rulea different sequence of utterances with different utterancelevel intentions can have the same dsp this is the case in the following segment it is possible for a sequence that consists of a request followed by two informings not to result in a modification of the trip plansfor example in the following sequence the third utterance results in changing the way the arrangements are made rather than constraining the nature of the arrangements themselvesi want you to arrange a twoweek trip for me to palo altoi fly only on twathe rates go up tomorrow so you will want to call todaynot only is the contribution of utterancelevel intentions to dsps complicated but in some instances the dsp for a segment may both constrain and be partially determined by the gricean intention for some utterance in the segmentfor example the griceanintention for utterance in the movies example is derived from a combination of facts about the utterance itself and from its place in the discourseon the surface appears to be a question addressed to the ocp its intention would be roughly that the icp intends the ocp to believe that the icp wants to know how young people etcbut is actually a rhetorical question and has a very different intention associated with it namely that the icp intends the ocp to believe proposition p2 in this example this particular intention is also the primary component of the dspthe third kind of information that plays a role in determining the dpdsps is shared knowledge about actions and objects in the domain of discoursethis shared knowledge is especially important when the linguistic markers and utterancelevel intentions are insufficient for determining the dsp preciselyin section 7 we introduce two relations a supports relation between propositions and a generates relation between actions and present two rules stating equivalences one links a dominance relation between two dsps with a supports relation between propositions and the other links a dominance relation between dsps to a generates relation between actionsuse of these rules in one direction allows for determining what supports or generates relationship holds from the dominance relationshipbut the rules can be used in the opposite direction also if from the content of utterances and reasoning about the domain of discourse a supports or generates relationship can be determined then the dominates relationship between dsps can be determinedin such cases it is important to derive the dominance relationship so that the appropriate intentional and attentional structures are available for processing or determining the interpretation of the subsequent discoursefrom the perspective of recognition a tradeoff 
implicit in the two equivalences is importantif the icp makes the dominance relationship between two dsps explicit then the ocp can use this information to help recognize the supports relationshipconversely if the icp utterances make clear the supports or generates relationship then the ocp can use this information to help recognize the dominance relationshipalthough it is most helpful to use the dominance relationships to constrain the search for appropriate supports and generates relationships sometimes these latter relationships can be inferred reasonably directly from the utterances in a segment using general knowledge about the objects and actions in the domain of discourseit remains an open question what inferences are needed and how complex it will be to compute supports and generates relationships if the dominance relationship is not directly indicated in a discourseutterances from the movies essay illustrate this tradeoffin utterance the phrase in the first place expresses the dominance relationship between dsps of the new segment ds5 and the parent segment ds4 directlybecause of the dominance relationship the ocp can determine that the icp believes that the proposition that the content of the plays is not the best provides support for the proposition that the result of indiscriminate movie going is harmfulhence determining dominance yields the support relationthe support relation can also yield dominanceutterances which comprise ds7 are not explicitly marked for a dominance relationit can be inferred from the fact that the propositions in provide support for the proposition embedded in dsp6 that dsp6 dominates dsp7finally the more information an icp supplies explicitly in the actual utterances of a discourse the less reasoning about domain information an ocp has to do to achieve recognitioncohen has made a similar claim regarding the problem of recognizing the relationship between one proposition and anotheras discussed in section 22 the intentional structure evolves as the discourse doesby the same token the discourse participants mentalstate correlates of the intentional structure are not prebuilt neither participant may have a complete model of the intentional structure quotin mindquot until the discourse is completedthe dominance relationships that actually shape the intentional structure cannot be known a priori because the specific intentions that will come into play are not known until the utterances in the discourse have been madealthough it is assumed that the participants common knowledge includesquot enough information about the domain to determine various relationships such as supports and generates it is not assumed that prior to a discourse they actually had inferred and are aware of all the relationships they will need for that discoursebecause any of the utterances in a segment may contribute information relevant to a complete determination of the dsp the recognition process is not complete until the end of the segmenthowever the ocp must be able to recognize at least a generalization of the dsp so that he can make the proper moves with respect to the attentional structurethat is some combination of explicit indicators and intentional and propositional content must allow the ocp to ascertain where the dsp will fit in the intentional structure at the beginning of a segment even if the specific intention that is the dsp cannot be determined until the end of the segmentutterance in the movies example illustrates this pointthe author writes quothow can our young 
people drink in through their eyes a continuous spectacle of intense and strained activity and feeling without harmful effectsquot the primary intention 12 is derived from this utterance but this cannot be done until very late in the discourse segment since occurs at the end of ds2furthermore the segment for which 12 is primary has complex embedding of other segmentsutterance intention io and dso constitute another example of the expression of a primary intention late in a discourse segmentin that case 10 cannot be computed until has been read and is not only the last utterance in dso but is one that covers the entire essayif an ocp must recognize a dsp to understand a segment then we ask how does the ocp recognize a dsp when the utterance from which its primary intention is derived comes so late in the segmentwe conjecture with regard to such segments as d2 of the movies essay that the primary intention may be determined partially before the point at which it is actually expressed in the discoursewhile the dpdsp may not be expressed early there is still partial information about itthis partial information often suffices to establish dominance relationships for additional segmentsas these latter are placed in the hierarchy their dsps can provide further partial information for the underspecified dspfor example even though the intention 10 is expressed directly only in the last utterance of the movies essay utterance expresses an intention to know whether p or p is true 10 is an intention to believe whose proposition is a generalization of the p expressed in consider also the primary intention 14it occurs in a segment embedded within ds2 is more general than 12 but is an approximation to itit would not be surprising to discover that ocps can in fact predict something close to 12 on the basis of 14 utterances and the partial dominance hierarchy available at each point in the discoursethe focus space structure enables certain processing decisions to be made locallyin particular it limits the information that must be considered in recognizing the dsp as well as that considered in identifying the referents of certain classes of definite noun phrasesa primary role of the focus space stack is to constrain the range of dsps considered as candidates for domination or satisfactionprecedence of the dsp of the current segmentonly those dsps in some space on the focusing stack are viable prospectsas a result of this use of the focusing structure the theory predicts that this decision will be a local one with respect to attentional statebecause two focus spaces may be close to each other in the attentional structure without the discourse segments they arise from necessarily being close to one another and vice versa this prediction corresponds to a claim that locality in the focusing structure is what matters to determination of the intentional structurea second role of the focusing structure is to constrain the ocp search for possible referents of definite noun phrases and pronounsto illustrate this role we will consider the phrase the screw in the center in utterance of the taskoriented dialogue of section 3the focus stack configuration when utterance is spoken is shown in figure 7the stack contains focus spaces fs1 fs4 and fs5 for segments ds1 ds4 and ds5 respectivelyfor ds5 the wheelpuller is a focused entity while for d54 the two setscrews are the entities in fs5 are considered before those in fs4 as potential referentsthe wheelpuller has three screws two small screws fasten the side arms and a 
large screw in the center is the main functioning partas a result this large screw is implicitly in focus in fs5 and thus identified as the referent without the two setscrews ever being consideredattentional state also constrains the search for referents of pronounsbecause pronouns contain less explicit information about their referents than definite descriptions additional mechanisms are needed to account for what may and may not be pronominalized in the discourseone such mechanism is centering centering like focusing is a dynamic behavior but is a more local phenomenonin brief a backwardlooking center is associated with each utterance in a discourse segment of all the focused elements the backwardlooking center is the one that is central in that utterance a combination of syntactic semantic and discourse information is used to identify the backwardlooking centerthe fact that some entity is the backwardlooking center is used to constrain the search for the referent of a pronoun in a subsequent utterancenote that unlike the dsp which is constant for a segment the backwardlooking center may shift different entities may become more salient at different points in the segmentthe presence of both centers and dsps in this theory leads us to an intriguing conjecture that quottopicquot is a concept that is used ambiguously for both the dsp of a segment and the centerin the literature the concept of quottopicquot has appeared in many guisesin syntactic form it is used to describe the preposing of syntactic constituents in english and the quotwaquot marking in japaneseresearchers have used it to describe the sentence topic and as a pragmatic notion others want to use the term for discourse topic either to mean what the discourse is about or to be defined as those proposition the icp provides or requests new information about for a review of many of the notions of aboutness and topicit appears that many of the descriptions of sentence topic correspond to centers while discourse topic corresponds to the dsp of a segment or of the discourseinterruptions in discourses pose an important test of any theory of discourse structurebecause processing an utterance requires ascertaining how it fits with previous discourse it is crucial to decide which parts of the previous discourse are relevant to it and which cannot beinterruptions by definition do not fit consequently their treatment has implications for the treatment of the normal flow of discourseinterruptions may take many forms some are not at all relevant to the content and flow of the interrupted discourse others are quite relevant and many fall somewhere in between these extremesa theory must differentiate these cases and explain what connections exist between the main discourse and the interruption and how the relationship between them affects the processing of the utterances in boththe importance of distinguishing between intentional structure and attentional state is evident in the three examples considered in subsections 52 53 and 54the distinction also permits us to explain a type of behavior deemed by others to be similar socalled semantic returns an issue we examine in subsection 55these examples do not exhaust the types of interruptions that can occur in discoursethere are other ways to vary the explicit linguistic indicators used to indicate boundaries the relationships between dsps and the combinations of focus space relationships presenthowever the examples provide illustrations of interruptions at different points along the spectrum of 
relevancy to the main discoursebecause they can be explained more adequately by the theory of discourse structure presented here than by previous theories they support the importance of the distinctions we have drawnfrom an intuitive view we observe that interruptions are pieces of discourse that break the flow of the preceding discoursean interruption is in some way distinct from the rest of the preceding discourse after the break for the interruption the discourse returns to the interrupted piece of discoursein the example below from polanyi and scha there are two discourses d1 indicated in normal type and d2 in italicsd2 is an interruption that breaks the flow of d1 and is distinct from dldl john came by and left the groceries d2 stop that you kids dl and i put them away after he left using the theory described in previous sections we can capture the above intuitions about the nature of interruptions with two slightly different definitionsthe strong definition holds for those interruptions we classify as quottrue interruptionsquot and digressions while the weaker form holds for those that are flashbacksthe two definitions are as follows strong definition an interruption is a discourse segment whose dsp is not dominated nor satisfactionpreceded by the dsp of any preceding segmentweak definition an interruption is a discourse segment whose dsp is not dominated nor satisfactionpreceded by the dsp of the immediately preceding segmentneither of the above definitions includes an explicit mention of our intuition that there is a quotreturnquot to the interrupted discourse after an interruptionthe return is an effect of the normal progress of a conversationif we assume a focus space is normally popped from the focus stack if and only if a speaker has satisfied the dsp of its corresponding segment then it naturally follows both that the focus space for the interruption will be popped after the interruption and that the focus space for the interrupted segment will be at the top of the stack because its dsp is yet to be satisfiedthere are other kinds of discourse segments that one may want to consider in light of the interruption continuum and these definitionsclarification dialogues and debugging explanations are two such possibilitiesboth of them unlike the interruptions discussed here share a dsp with their preceding segment and thus do not conform to our definition of interruptionthese kinds of discourses may constitute another general class of discourse segments that like interruptions can be abstractly definedthe first kind of interruption is the true interruption which follows the strong definition of interruptionsit is exemplified by the interruption given in the previous subsectiondiscourses d1 and d2 have distinct unrelated purposes and convey different information about properties objects and relationssince d2 occurs within d1 one expects the discourse structures for the two segments to be somehow embedded as wellthe theory described in this paper differs from polanyi and scha because the quotembeddingquot occurs only in the attentional structureas shown in figure 8 the focus space for d2 is pushed onto the stack above the focus space for d1 so that the focus space for d2 is more salient than the one for d1 until d2 is completedthe intentional structures for the two segments are distinctthere are two dpdsp structures for the utterances in this sequence one for those in d1 and the other for those in d2it is not necessary to relate these two indeed from an intuitive point of view they are 
not relatedthe focusing structure for true interruptions is different from that for the normal embedding of segments because the focusing boundary between the interrupted discourse and the interruption is impenetrable12 the impenetrable boundary between the focus spaces prevents entities in the spaces below the boundary from being available to the spaces above itbecause the second discourse shifts attention totally to a new purpose the speaker cannot use anyreferential expressions during it that depend on the accessibility of entities from the first discoursesince the boundary between the focus space for d1 and the one for d2 is impenetrable if d2 were to include an utterance such as put them away the pronoun would have to refer deictically and not anaphorically to the groceriesin this sample discourse however d1 is resumed almost immediatelythe pronoun them in and i put them away cannot refer to the children but only to the groceriesfor this to be clear to the ocp the icp must indicate a return to d1 explicitlyone linguistic indicator in this example is the change of mood from imperativeindicators that the stop that utterance is an interruption include the change to imperative mood and the use of the vocative two other indicators may be assumed to have been present at the time of the discourse a change of intonation and a shift of gaze it is also possible that the type of pause present in such cases is evidence of the interruption but further research is needed to establish whether this is indeed the casein contrast to previous accounts we are not forced to integrate these two discourses into a single grammatical structure or to answer questions about the specific relationship between segments d2 and d1 as in reichman model instead the intuition that readers have of an embedding in the discourse structure is captured in the attentional state by the stacking of focus spacesin addition a reader intuitive impression of the distinctness of the two segments is captured in their different intentional structuressometimes an icp interrupts the flow of discussion because some purposes propositions or objects need to be brought into the discourse but have not been the icp forgot to include those entities first and so must now go back and fill in the missing informationa flashback segment occurs at that point in the discoursethe flashback is defined as a segment whose dsp satisfactionprecedes the interrupted segment and is dominated by some other segment dsphence it is a specialization of the weak definition of interruptionsthis type of interruption differs from true interruptions both intentionally and linguistically the dsp for the flashback bears some relationship to the dp for the whole discoursethe linguistic indicator of the flashback typically includes a comment about something going wrongin addition the audience always remains the same whereas it may change for a true interruption in the example below taken from sidner the 1cp is instructing a mockup system about how to define and display certain information in a particular knowledgerepresentation languageagain the interruption is indicated by italicsok now how do i say that bill is whoops i forgot about abci need an individual concept for the company abc lremainder of discourse segment on abc now back to billhow do i say that bill is an employee of abcthe dp for the larger discourse from which this sequence was taken is to provide information about various companies and their employeesthe outer segment in this example dbii1 has a dsp 
dspbill to tell about bill while the inner segment dab c has a dsp dspabc to convey certain information about abcbecause of the nature of the information being told there is order in the final structure of the dpdsps information about abc must be conveyed before all of the information about bill can bethe icp in this instance does not realize this constraint until after he beginsthe quotflashbackquot interruption allows him to satisfy dspabc while suspending satisfaction of dspbb1 hence there is an intentional structure rooted at dp and with dspabc and dspbill as ordered sister nodesthe following three relationships hold between the different dsps14this kind of interruption is distinct from a true interruption because there is a connection although indirect between the dsps for the two segmentsfurthermore the linguistic features of the start of the interruption signify that there is a precedence relation between these dsps flashbacks are also distinct from normally embedded discourses because of the precedence relationship between the dsps for the two segments and the order in which the segments occurthe available linguistic data permit three possible attentional states as appropriate models for flashbacktype interruptions one is identical to the state that would ensue if the flashback segment were a normally embedded segment the second resembles the model of a true interruption and the third differs from the others by requiring an auxiliary stackan example of the stack for a normally embedded sequence is given in section 42 figure 9 illustrates the last possibilitythe focus space for the flashback fsabc is pushed onto the stack after an appropriate number of spaces including the focus space for the outer segment fsbill have been popped from the main stack and pushed onto an auxiliary stackall of the entities in the focus spaces remaining on the main stack are normally accessible for reference but none of those on the auxiliary stack arein the example in the figure entities in the spaces from fsa to fsb are accessible as well those in space fsabcevidence for this kind of stack behavior could come from discourses in which phrases in the segment about abc could refer to entities represented in fsb but not to those in fsbill or fscafter an explicit indication that there is a return to dspbili any focus spaces left on the stack from the flashback are popped off and all spaces on the auxiliary stack are returned to the main stacknote however that this model does not preclude the possibility of a return to some space between fsa and fsb before popping the auxiliary stackwhether there are discourses that include such a return and are deemed coherent is an open questionthe auxiliary stack model differs from the other two models by the references permitted and by the spaces that can be popped togiven the initial configuration in figure 9 if the segment with dspabc were normally embedded fsabc would just be added to the top of the stackif it were a true interruption the space would also be added to the stack but with an impenetrable boundary between it and fsbiliin the normal stack model entities in the spaces lower in the stack would be accessible in the true interruption they would notin either of these two models however fsbill would be the space returned to firstthe auxiliary stack model is obviously more complicated than the other two alternativeswhether it is necessary depends on facts of discourse behavior that have not yet been determinedthe third type of interruption which we call a 
digression is defined as a strong interruption that contains a reference to some entity that is salient in both the interruption and the interrupted segmentfor example if while discussing bill role in company abc one conversational participant interrupts with speaking of bill that reminds me he came to dinner last week bill remains salient but the dp changesdigressions commonly begin with phrases such as speaking of john or that reminds me although no cue phrase need be present and that reminds me may also signal other stack and intention shiftsin the processing of digressions the discourselevel intention of the digression forms the base of a separate intentional structure just as in the case of true interruptionsa new focus space is formed and pushed onto the stack but it contains at least one and possibly other entities from the interrupted segment focus spacelike the flashbacktype interruption the digression must usually be closed with an explicit utterance such as getting back to abc or anywayone case of discourse behavior that we must distinguish comprises the socalled quotsemantic returnsquot observed by reichman and discussed by polanyi and scha in all the interruptions we have considered so far the stack must be popped when the interruption is over and the interrupted discourse is resumedthe focus space for the interrupted segment is quotreturned toquot in the case of semantic returns entities and dsps that were salient during a discourse in the past are taken up once again but are explicitly reintroducedfor example suppose that yesterday two people discussed how badly jack was behaving at the party then today one of them says remember our discussion about jack at the partywell a lot of other people thought he acted just as badly as we thought he didthe utterances today recall or return to yesterday conversation to help satisfy the intention that more be said about jack poor behavioranything that can be talked about once can be talked about againhowever if there is no focus space on the stack corresponding to the segment and dsp being discussed further then as polanyi and scha point out there is no popping of the stackthere need not be any discourse underway when a semantic return occurs in such cases the focus stack will be emptythus unlike the returns that follow normal interruptions semantic returns involve a push onto the stack of a new space containing among other things representations of the reintroduced entitiesthe separation of attentional state from intentional structure makes clear not only what is occurring in such cases but also the intuitions underlying the term semantic returnin reintroducing some entities from a previous discourse conversational participants are establishing some connection between the dsp of the new segment and the intentional structure of the original discourseit is not a return to a previous focus space because the focus space for the original discourse is gone from the stack and the items to be referred to must be reestablished explicitlyfor example the initial reference to jack in the preceding example cannot be accomplished with a pronoun with no prior mention of jack in the current discussion one cannot say remember our discussion about him at the partythe intuitive impression of a return in the strict sense is only a return to a previous intentional structureboth attentional state and intentional structure change during a discourseicps rarely change attention by directly and explicitly referring to attentional state likewise discourses only 
occasionally include an explicit reference to a change in purpose more typically icps employ indirect means of indicating that a change is coming and what kind of change it iscue phrases provide abbreviated indirect means of indicating these changesin all discourse changes the icp must provide information that allows the ocp to determine all of the following cue phrases can pack in all of this information except for in this section we explore the predictions of our discourse structure theory about different uses of these phrases and the explanations the theory offers for their various roleswe use the configuration of attentional state and intentional structure illustrated in figure 10 as the starting point of our analysisin the initial configuration the focus space stack has a space with dsp x at the bottom and another space with dsp a at the topthe intentional structure includes the information that x dominates afrom this initial configuration a wide variety of moves may be madewe examine several changes and the cue phrases that can indicate each of thembecause these phrases and words in isolation may ambiguously play either discourse or other functional roles we also discuss the other uses whenever appropriatefurthermore cue phrases do not function unambiguously with respect to a particular discourse rolethus for example first can be used for two different moves that we discuss belowfirst consider what happens when the icp shifts to a new dsp b that is dominated by a the dominance relationship between a and b becomes part of the intentional structurein addition the change in dsp results in a change in the focus stackthe focus stack models this change which we call new dominance by a having new space pushed onto the stack with b as the dsp of that space the space containing a is salient but less so than the space with bcue phrase to signal this case and only this one must communicate two pieces of information that there is a change to some new purpose and that the new purpose is dominated by dsp atypical cue phrases for this kind of change are for example and to wit and sometimes first and secondcue phrases can also exhibit the existence of a satisfactionprecedence relationshipif b is to be the first in a list of dsps dominated by a then words such as first and in the first place can be used to communicate this factlater in the discourse cue phrases such as second third and finally can be used to indicate dsps that are dominated by a and satisfactionpreceded by bin these cases the focus space containing b would be popped from the stack and the new focus space inserted above the one containing athere are three other kinds of discourse segments that change the intentional structure with a resulting push of new focus spaces onto the stack the trueinterruption where b is not dominated by a the flashback where b satisfactionprecedes a and the digression where b is not dominated by a but some entity from the focus space containing a is carried over to the new focus spaceone would expect that there might be cue phrases that would distinguish among all four of these kinds of changesjust that is sothere are cue phrases that announce one and only one kind of changethe cue phrases mentioned above for new dominance are never used for the three kinds of discourse interruption pushesthe cue phrases for trueinterruptions express the intention to interrupt while the distinct cue phrase for flashbacks indicates that something is out of orderthe typical opening cue phrases of the digression mention the 
entity that is being carried forward cue phrases can also exhibit the satisfaction of a dsp and hence the completion of a discourse segmentthe completion of a segment causes the current space to be popped from the stackthere are many means of linguistically marking completionsin texts paragraph and chapter boundaries and explicit comments are common in conversations completion can be indicated either with cue phrases such as fine or ok15 or with more explicit references to the satisfaction of the intention most cue phrases that communicate changes to attentional state announce pops of the focus stackhowever at least one cue phrase can be construed to indicate a push namely that reminds meby itself this phrase does not specify any particular change in intentional structure but merely shows that there will be a new dspsince this is equivalent to indicating that a new focus space is to be pushed onto the stack this cue phrase is best seen as conveying attentional informationcue phrases that indicate pops to some other space back in the stack include but anyway anyway in any case and now back towhen the current focus space is popped from the stack a space already on the stack becomes most salientfrom the configuration in figure 10 the space with a is popped from the stack perhaps with others and another space on the stack becomes the top of the stackpopping back changes the stack without creating a new dsp or a dominance or satisfactionprecedence relationshipthe pop entails a return to an old dsp no change is effected in the intentional structurethere are cue phrases such as now and next that signal a change of attentional state but do not distinguish between the creation of a new focus space and the return to an old onethese words can be used for either movefor example in a taskoriented discourse during which some task has been mentioned but put aside to ask a question the use of now indicates a change of focusthe utterance following now however will either return the discussion to the deferred task or will introduce some new task for considerationnote finally that a pop of the focus stack may be achieved without the use of cue phrases as in the following fragment of a taskoriented dialogue a one bolt is stucki am trying to use both the pliers and the wrench to get it unstuck but i have not had much lucke do not use pliersshow me what you are doinga i am pointing at the boltse show me the 12quot combination wrench pleasea ok e good now show me the 12quot box wrencha i already got it loosenedthe last utterance in this fragment returns the discourse to the discussion of the unstuck boltthe pop can be inferred only from the content of the main portion of the utterancethe pronoun is a cue that a pop is needed but only the reference to the loosening action allows the ocp to recognize to which discourse segment this utterance belongs as discussed by sidner and robinson a summary of the uses of cue phrases is given in figure 12attentional change now next that reminds me and but anyway but anyway in any case now back to the end ok fine true interruption i must interrupt excuse me flashbacks oops i forgotdigressions by the way incidentally speaking of did you hear about that reminds me satisfactionprecedes in the first place first second finally moreover furthermore new dominance for example to wit first second and moreover furthermore therefore finally the cases listed here do not exhaust the changes in focus spaces and in the dominance hierarchy that can be represented nor have we furnished a set of 
rules that specify when cue phrases are necessaryadditional cases especially special subcases of these may be possiblewhen discourse is viewed in terms of intentional structure and attentional state it is clearer just what kinds of information linguistic expressions and intonation convey to the hearer about the discourse structurefurthermore it is clear that linguistic expressions can function as cue phrases as well as sentential connections they can tell the hearer about changes in the discourse structure and be carriers of discourse rather than sentencelevel semantic meaningthe intentions that serve as dpdsps are natural extensions of the intentions grice considers essential to developing a theory of utterer meaningthere is a crucial difference however between our use of discourselevel intentions in this paper and grice use of utterancelevel intentionswe are not yet addressing the issue of discourse meaning but are concerned with the role of dpdsps in determining discourse structure and in specifying how these intentions can be recognized by an ocpalthough the intentional structure of a discourse plays a role in determining discourse meaning the dpdsps do not in and of themselves constitute discourse segment meaningthe connection between intentional structure and discourse meaning is similar to that between attentional and cognitive states the attentional state plays a role in a hearer understanding of what the speaker means by a given sequence of utterances in a discourse segment but it is not the only aspect of cognitive state that contributes to this understandingwe will draw upon some particulars of grice definition of utterer meaning to explain dsps more fullyhis initial definition is as follows you meant something by uttering x is true if f for some audience a grice refines this definition to address a number of counterexamplesthe following portion of his final definition16 is relevant to this paper by uttering x you meant that op is true if f grice takes typ to be the meaning of the utterance where 1p is a mood indicator associated with the propositional attitude 1p he considers attitudes like believing that icp is a german soldier and intending to give the icp a beer as examples of the kinds of ming that p that utterance intentions can embedfor expository purposes we use the following notation to represent these utterancelevel intentions intend intend to extend grice definition to discourses we replace the utterance x with a discourse segment ds the utterer you with the initiator of a discourse segment icp and the audience a with the ocpto complete this extension the following problems must be resolved although each of these issues is an unresolved problem in discourse theory this paper has provided partial answersthe examples presented illustrate the range of discourselevel intentions these intentions appear to be similar to utterancelevel intentions in kind but differ in that they occur in a context in which several utterances may be required to ensure their comprehension and satisfactionthe features so far identified as conveying information about dsps are specific linguistic markers utterancelevel intentions and propositional content of the utteranceswe have not explored the problem of identifying modes of correlation in any detail but it is clear that those modes that operate at the utterance level also function at the discourse levelas discussed previously the proper treatment of the recognition of discourselevel intentions is especially necessary for a computationally 
useful account of discourseat the discourse level just as at the utterance level the intended recognition of intentions plays a central rolethe dsps are intended to be recognized they achieve their effects in part because the ocp recognizes the icp intention for the ocp to 4 that p the ocp recognition of this intention is crucial to its achieving the desired effectin section 4 we described certain constraints on the recognition processin extending grice analysis to the discourse level we have to consider not only individual beliefs and intentions but also the relationships among them that arise because of the relationships among various discourse segments and the purposes the segments serve with respect to the entire discourseto clarify these relationships consider an analogous situation with nonlinguistic actions18 an action may divide into several subactions for example the planting of a rose bush divides into preparing the soil digging a hole placing the rose bush in the hole filling the rest of the hole with soil and watering the ground around the bushthe intention to perform the planting action includes several subsidiary intentions in discourse in a manner that is analogous to nonlinguistic actions the dp includes several subsidiary intentions related to the dsps it dominatesfor purposes of exposition we will use the term primary intention to distinguish the overall intention of the dp from the subsidiary intentions of the dpfor example in the movies argument of section 31 the primary intention is for the reader to come to believe that parents and teachers should keep children from seeing too many movies in the task dialogue of section 32 the intention is that the apprentice remove the flywheelsubsidiary intentions include respectively the intention that the reader believe that it is important to evaluate movies and the intention that the expert help the apprentice locate the second setscrewbecause the beliefs and intentions of at least two different participants are involved in discourse two properties of the generalaction situation do not carry overfirst in a discourse the icp intends the ocp to recognize the icp beliefs about the connections among various propositions and actionsfor example in the movies argument the reader is intended to recognize that the author believes some propositions provide support for others in the task dialogue the expert intends the apprentice to recognize that the expert believes the performance of certain actions contributes to the performance of other actionsin contrast in the generalaction situation in which there is no communication there is no need for recognition of another agent beliefs about the interrelationship of various actions and intentionsthe second difference concerns the extent to which the subsidiary actions or intentions specify the overall action or intentionto perform some action the agent must perform each of the subactions involved by performing all of these subactions the agent performs the actionin contrast in a discourse the participants share the assumption of discourse sufficiency it is a convention of the communicative situation that the icp believes the discourse is sufficient to achieve the primary intention of the dpdiscourse sufficiency does not entail logical sufficiency or action completenessit is not necessarily the case that satisfaction of all of the dsps is sufficient in and of itself for satisfaction of the dprather there is an assumption that the information conveyed in the discourse will suffice in conjunction 
with other information the icp believes the ocp has to allow for satisfaction of the primary intention of the dpsatisfaction of all of the dsps in conjunction with this additional information is enough for satisfaction of the dphence in discourse the intentional structure need not be completefor example the propositions expressed in the movies essay do not provide a logically sufficient proof of the claimthe author furnishes information he believes to be adequate for the reader to reach the desired conclusion and assumes the reader will supplement what is actually said with appropriate additional information and reasoninglikewise the task dialogue does not mention all the subtasks explicitlyinstead the expert and apprentice discuss explicitly only those subtasks for which some instruction is needed or in connection with which some problem arisesto be more concrete we shall look at the extension of the gricean analysis for two particular cases one involving a belief the other an intention to perform some actionwe shall consider only the simplest situations in which the primary intentions of the dpdsps are about either beliefs or actions but not a mixturealthough the task dialogue obviously involves a mixture this is an extremely complicated issue that demands additional researchin the belief case the primary intention of the dp is to get the ocp to believe some proposition say p each of the discourse segments is also intended to get the ocp to believe a proposition say qi for some i1 n in addition to the primary intention ie that the ocp should come to believe p the dp includes an intention that the ocp come to believe each of the qi and in addition an intention that the ocp come to believe the qi provide support for p we can represent this schematically as19believethere are several things to note hereto begin with the first intention is the primary component of the dspsecond each of the intended beliefs in the second conjunct corresponds to the primary component of the dsp of some embedded discourse segmentthird the supports relation is not implicationthe ocp is not intended to believe that the q imply p but rather to believe that the q in conjunction with other facts and rules that the icp assumes the ocp has available or can obtain and thus come to believe are sufficient for the ocp to conclude p fourth the dpdsp may only be completely determined at the end of the discourse as we discussed in section 4finally to determine how the discourse segments corresponding to the q are related to the one corresponding to p the ocp only has to believe that the icp believes a supports relationship holdshence for the purpose of recognizing the discourse structure it would be sufficient for the third clause to be believe however the dp of a beliefcase discourse is not merely to get the ocp to believe p but to get the ocp to believe p by virtue of believing the qithat this is so can be seen clearly by considering situations in which the ocp already believes p and is known by the icp to do so but does not have a good reason for believing p this last property of the belief case is not shared by the action casethere is an important relationship between the supports relation and the dominance relation that can hold between dpdsps it is captured in the following rule the implication in the forward direction states that if a conversational participant believes that the proposition p is supported by the proposition qi and he intends another participant to adopt these beliefs then his intention that cp2 believe 
p dominates his intention that cp2 believe qiviewed intuitively cpi belief that q provides support for p underlies his intention to get cp2 to believe p by getting him to believe qithe satisfaction of cp intention that cp2 should believe q will help satisfy cpi intention that cp2 believe p this relationship plays a role in the recognition of dspsan analogous situation holds for a discourse segment comprising utterances intended to get the ocp to perform some set of actions directed at achieving some overall task the full specification of the dpdsp contains a generates relation that is derived from a relation defined by goldman for this case the dpdsps are of the following form generates each intention to act represented in the second conjunct corresponds to the primary intention of some discourse segmentlike supports the generates relation is partial thus the ocp is not intended to believe that the icp believes that performance of a alone is sufficient for performance of a but rather that doing all of the ai and other actions that the ocp can be expected to know or figure out constitutes a performance of ain the task dialogue of section 32 many actions that are essential to the task are never even mentioned in the dialoguenote that it is unnecessary for the icp or ocp to have a complete plan relating all of the ai to a at the start of the discourse all that is required is that for any given segment the ocp be able to determine what intention to act the segment corresponds to and which other intentions dominate that intentionfinally unlike the belief case the third conjunct here requires only that the ocp recognize that the icp believes a generates relationship holdsthe ocp can do a by virtue of doing the ai without coming himself to believe anything about the relationships between a and the aias in the belief case there is an equivalence that links the generates relation among actions to the dominance relation between intentionsschematically it is as follows this equivalence states that if an agent believes that the performance of some action and if cp intends for cp2 to do both of these actions then his intention that cp2 perform a is dominated by his intention that cp2 perform aviewed intuitively cpi belief that doing a will contribute to doing a underlies his intention to get cp2 to do a by getting cp2 to do aithe satisfaction of cpi intention for cp2 to do a will help satisfy cpi intention for cp2 to do aso for example in the taskoriented dialogue of section 32 the expert knows that using the wheelpuller is a necessary part of removing the flywheelhis intention that the apprentice intend to use the wheelpuller is thus dominated by his intention that the apprentice intend to take off the flywheelsatisfaction of the intention to use the wheelpuller will contribute to satisfying the intention to remove the flywheelin general the action ai does not have to be a necessary action though it is in this example a definitive statement characterizing primary and subsidiary intentions for taskoriented dialogues awaits further research not only in discourse theory but also in the theory of intentions and actionsin particular a clearer statement of the interactions among the intentions of the various discourse participants awaits the formulation of a better theory of cooperation and multiagent activitywe are now in a position to contrast the role of dpdsps supports generates dom and sp in our theory with the rhetorical relations that according to a number of alternative theories are claimed to 
underlie discourse structureamong the various rhetorical relations that have been investigated are elaboration summarization enablement justification and challengealthough the theories each identify different specific relations they all use such relations as the basis for determining discourse structurethese rhetorical relations apply specifically to linguistic behavior and most of them implicitly incorporate intentions the intentions that typically serve as dpdsps in our theory are more basic than those that underlie such rhetorical relations in that they are not specialized for linguistic behavior in many cases their satisfaction can be realized by nonlinguistic actions as well as linguistic onesthe supports and generates relations that must sometimes be inferred to determine domination are also more basic than rhetorical relations they are general relations that hold between propositions and actionshence the inferring of relationships such as supports and generates is simpler than that of rhetorical relationshipsthe determination of whether a supports or generates relationship exists depends only on facts of how the world is not on facts of the discoursein contrast the recognition of rhetorical relations requires the combined use of discourse and domain informationfor several reasons rhetorical relationships do not have a privileged status in the account given herealthough they appear to provide a metalevel description of the discourse their role in discourse interpretation remains unclearas regards discourse processing it seems obvious that the icp and ocp have essentially different access to themin particular the icp may well have such rhetorical relationships quotin mindquot as he produces utterances system whereas it is much less clear when the ocp infers thema claim of the theory being developed in this paper is that a discourse can be understood at a basic level even if the ocp never does or can construct let alone name such rhetorical relationshipsfurthermore it appears that these relationships could be recast as a combination of domainspecific information general relations between propositions and actions and general relations between intentions 20 even so rhetorical relationships are in all likelihood useful to the theoretician as an analytical tool for certain aspects of discourse analysisthe theory of discourse structure presented in this paper is a generalization of theories of taskoriented dialoguesit differs from previous generalizations in that it carefully distinguishes three components of discourse structure one linguistic one intentional and one attentionalthis distinction provides an essential basis for explaining interruptions cue phrases and referring expressionsthe particular intentional structure used also differs from the analogous aspect of previous generalizationsalthough like those generalizations it supplies the principal framework for discourse segmentation and determines structural relationships for the focusing structure unlike its predecessors it does not depend on the special details of any single domain or type of discoursealthough admittedly still incomplete the theory does provide a solid basis for investigating both the structure and meaning of discourse as well as for constructing discourseprocessing systemsseveral difficult research problems remain to be exploredof these we take the following to be of primary importance finally the theory suggests several important conjecturesfirst that a discourse is coherent only when its discourse purpose is 
shared by all the participants and when each utterance of the discourse contributes to achieving this purpose either directly or indirectly by contributing to the satisfaction of a discourse segment purposesecond general intuitions about "topic" correspond most closely to dpdsps rather than to syntactic or attentional conceptsfinally the theory suggests that the same intentional structure can give rise to different attentional structures through different discoursesthe different attentional structures will be manifest in part because different referring expressions will be valid and in part because different cue phrases and other indicators will be necessary optional or redundantthis paper was made possible by a gift from the system development foundationsupport was also provided for the second author by the advanced research projects agency of the department of defense and was monitored by onr under contract non0001485c0079the views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies either expressed or implied of the defense advanced research projects agency or the us government
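The stack operations described in the preceding sections, pushing a focus space for each new segment, popping it when its DSP is satisfied, marking an impenetrable boundary for a true interruption, and setting spaces aside on an auxiliary stack for a flashback, can be illustrated with a small sketch. This is an informal illustration rather than the authors' formal model: the class and method names, the boolean impenetrable flag, and the auxiliary-stack bookkeeping are assumptions introduced here, and DSPs and entities are reduced to plain strings.

```python
# Illustrative sketch only (not the paper's formal model) of the attentional-state
# operations discussed above: pushing and popping focus spaces, the impenetrable
# boundary introduced by a true interruption, and the auxiliary-stack treatment
# of flashbacks. All class and method names are assumptions; DSPs and entities
# are reduced to plain strings.

class FocusSpace:
    def __init__(self, dsp, entities=(), impenetrable=False):
        self.dsp = dsp                    # label for the discourse segment purpose
        self.entities = set(entities)     # salient objects, properties, relations
        self.impenetrable = impenetrable  # marks a true-interruption boundary


class AttentionalState:
    def __init__(self):
        self.stack = []        # main focus-space stack
        self.aux = []          # auxiliary stack used for flashbacks
        self._flashback_base = 0

    def push(self, dsp, entities=(), impenetrable=False):
        self.stack.append(FocusSpace(dsp, entities, impenetrable))

    def pop(self):
        # a space is popped when the DSP of its segment is taken to be satisfied
        return self.stack.pop()

    def accessible_entities(self):
        # referent search proceeds from the top of the stack downward; an
        # impenetrable boundary blocks access to every space below it
        found = []
        for space in reversed(self.stack):
            found.extend(sorted(space.entities))
            if space.impenetrable:
                break
        return found

    def begin_true_interruption(self, dsp, entities=()):
        # distinct intentional structure; entities below are not accessible
        self.push(dsp, entities, impenetrable=True)

    def begin_flashback(self, dsp, entities=(), keep_below=0):
        # auxiliary-stack model: spaces above `keep_below` are set aside,
        # then the flashback space is pushed onto the main stack
        while len(self.stack) > keep_below:
            self.aux.append(self.stack.pop())
        self._flashback_base = len(self.stack)
        self.push(dsp, entities)

    def end_flashback(self):
        # on an explicit return, spaces left from the flashback are popped
        # and the set-aside spaces are restored to the main stack
        while len(self.stack) > self._flashback_base:
            self.stack.pop()
        while self.aux:
            self.stack.append(self.aux.pop())


# toy run loosely following the Bill/ABC flashback example above
state = AttentionalState()
state.push("DP", {"companies"})
state.push("DSP_Bill", {"Bill"})
state.begin_flashback("DSP_ABC", {"ABC"}, keep_below=1)  # FS_Bill set aside
print(state.accessible_entities())  # ['ABC', 'companies'] -- Bill not accessible
state.end_flashback()               # "now back to Bill"
print(state.accessible_entities())  # ['Bill', 'companies']
```

On the same sketch, the groceries example would use begin_true_interruption, whose boundary hides the entities of the interrupted segment until the interruption's space is popped; this is only meant to show that the accessibility facts discussed above fall out of simple stack discipline.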
J86-3001
attention intentions and the structure of discoursein this paper we explore a new theory of discourse structure that stresses the role of purpose and processing in discoursein this theory discourse structure is composed of three separate but interrelated components the structure of the sequence of utterances a structure of purposes and the state of focus of attention the linguistic structure consists of segments of the discourse into which the utterances naturally aggregatethe intentional structure captures the discourserelevant purposes expressed in each of the linguistic segments as well as relationships among themthe attentional state is an abstraction of the focus of attention of the participants as the discourse unfoldsthe attentional state being dynamic records the objects properties and relations that are salient at each point of the discoursethe distinction among these components is essential to provide an adequate explanation of such discourse phenomena as cue phrases referring expressions and interruptionsthe theory of attention intention and aggregation of utterances is illustrated in the paper with a number of example discoursesvarious properties of discourse are described and explanations for the behavior of cue phrases referring expressions and interruptions are exploredthis theory provides a framework for describing the processing of utterances in a discoursediscourse processing requires recognizing how the utterances of the discourse aggregate into segments recognizing the intentions expressed in the discourse and the relationships among intentions and tracking the discourse through the operation of the mechanisms associated with attentional statethis processing description specifies in these recognition tasks the role of information from the discourse and from the participants knowledge of the domainwe proposed a theory of discourse structure to account for why an utterance was said and what was meant by it
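As a concrete companion to this summary's point about cue phrases, the following toy lookup partially transcribes the figure 12 inventory given earlier, split at what appear to be phrase boundaries in the extracted text (a few tokens whose grouping is unclear are omitted). The function and its name are assumptions added here; this is not the paper's recognition procedure, and such cues only narrow the candidate structural moves, which must then be resolved with intentional and domain information.

```python
# Toy lookup over (part of) the cue-phrase inventory summarized in the figure 12
# discussion above. The table is an approximate transcription of that list; the
# function name and the handling of ambiguity are assumptions added here.

CUE_PHRASES = {
    "attentional change": ["now", "next", "that reminds me", "but anyway",
                           "anyway", "in any case", "now back to", "ok", "fine"],
    "true interruption": ["i must interrupt", "excuse me"],
    "flashback": ["oops", "i forgot"],
    "digression": ["by the way", "incidentally", "speaking of",
                   "did you hear about", "that reminds me"],
    "satisfaction-precedes": ["in the first place", "first", "second",
                              "finally", "moreover", "furthermore"],
    "new dominance": ["for example", "to wit", "first", "second"],
}


def possible_moves(utterance_prefix):
    """Return the structural changes a leading cue phrase may signal.

    Cue phrases are ambiguous, so the result is a set of candidates to be
    narrowed by other information, not a decision.
    """
    prefix = utterance_prefix.lower()
    return sorted({move
                   for move, phrases in CUE_PHRASES.items()
                   for phrase in phrases
                   if prefix.startswith(phrase)})


print(possible_moves("That reminds me, he came to dinner last week"))
# ['attentional change', 'digression']
print(possible_moves("First, the content of the plays is not the best"))
# ['new dominance', 'satisfaction-precedes']
```

The ambiguity of "first" and "that reminds me" in the output mirrors the observation above that cue phrases do not function unambiguously with respect to a particular discourse role.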
an efficient augmentedcontextfree parsing algorithm an efficient parsing algorithm for augmented contextfree grammars is introduced and its application to online natural language interfaces discussed the algorithm is a generalized lr parsing algorithm which precomputes an lr shiftreduce parsing table from a given augmented contextfree grammar unlike the standard lr parsing algorithm it can handle arbitrary contextfree grammars including ambiguous grammars while most of the lr efficiency is preserved by introducing the concept of a "graphstructured stack" the graphstructured stack allows an lr shiftreduce parser to maintain multiple parses without parsing any part of the input twice in the same way we can also view our parsing algorithm as an extended chart parsing algorithm efficiently guided by lr parsing tables the algorithm is fast due to the lr table precomputation in several experiments with different english grammars and sentences timings indicate a five to tenfold speed advantage over earley contextfree parsing algorithm the algorithm parses a sentence strictly from left to right online that is it starts parsing as soon as the user types in the first word of a sentence without waiting for completion of the sentence a practical online parser based on the algorithm has been implemented in common lisp and running on symbolics and hp ai workstations the parser is used in the multilingual machine translation project at cmu also a commercial online parser for japanese language is being built by intelligent technology incorporation based on the technique developed at cmu an efficient parsing algorithm for augmented contextfree grammars is introduced and its application to online natural language interfaces discussedthe algorithm is a generalized lr parsing algorithm which precomputes an lr shiftreduce parsing table from a given augmented contextfree grammarunlike the standard lr parsing algorithm it can handle arbitrary contextfree grammars including ambiguous grammars while most of the lr efficiency is preserved by introducing the concept of a "graphstructured stack"the graphstructured stack allows an lr shiftreduce parser to maintain multiple parses without parsing any part of the input twice in the same waywe can also view our parsing algorithm as an extended chart parsing algorithm efficiently guided by lr parsing tablesthe algorithm is fast due to the lr table precomputationin several experiments with different english grammars and sentences timings indicate a five to tenfold speed advantage over earley contextfree parsing algorithmthe algorithm parses a sentence strictly from left to right online that is it starts parsing as soon as the user types in the first word of a sentence without waiting for completion of the sentencea practical online parser based on the algorithm has been implemented in common lisp and running on symbolics and hp ai workstations the parser is used in the multilingual machine translation project at cmualso a commercial online parser for japanese language is being built by intelligent technology incorporation based on the technique developed at cmuparsing efficiency is crucial when building practical natural language systems on smaller computers such as personal workstationsthis is especially the case for interactive systems such as natural language database access interfaces to expert systems and interactive machine translationthis paper introduces an efficient online parsing algorithm and focuses on its practical application to natural language interfacesthe algorithm can
be viewed as a generalized lr parsing algorithm that can handle arbitrary contextfree grammars including ambiguous grammarssection 2 describes the algorithm by extending the standard lr parsing algorithm with the idea of a "graphstructured stack"section 3 describes how to represent parse trees efficiently so that all possible parse trees take at most polynomial space as the ambiguity of a sentence grows exponentiallyin section 4 several examples are givensection 5 presents several empirical results of the algorithm practical performance including comparison with earley algorithmin section 6 we discuss how to enhance the algorithm to handle augmented contextfree grammars rather than pure contextfree grammarssection 7 describes the concept of online parsing taking advantage of lefttoright operation of our parsing algorithmthe online parser parses a sentence strictly from left to right and starts parsing as soon as the user types in the first word without waiting for the end of linebenefits of online parsing are then discussedfinally several versions of online parser have been implemented and they are mentioned in section 8the lr parsing algorithms were developed originally for programming languagesan lr parsing algorithm is a shiftreduce parsing algorithm deterministically guided by a parsing table indicating what action should be taken nextthe parsing table can be obtained automatically from a contextfree phrase structure grammar using an algorithm first developed by deremer we do not describe the algorithms here referring the reader to chapter 6 in aho and ullman we assume that the reader is familiar with the standard lr parsing algorithm the lr parsing algorithm is one of the most efficient parsing algorithmsit is totally deterministic and no backtracking or search is involvedunfortunately we cannot directly adopt the lr parsing technique for natural languages because it is applicable only to a small subset of contextfree grammars called lr grammars and it is almost certain that any practical natural language grammars are not lrif a grammar is nonlr its parsing table will have multiple entries1 one or more of the action table entries will be multiply defined figures 21 and 22 show an example of a nonlr grammar and its parsing tablegrammar symbols starting with "s" represent preterminalsentries "sh n" in the action table indicate the action "shift one word from input buffer onto the stack and go to state n"entries "re n" indicate the action "reduce constituents on the stack using rule n"the entry "acc" stands for the action "accept" and blank spaces represent "error"the goto table decides to what state the parser should go after a reduce actionthese operations shall become clear when we trace the algorithm with example sentences in section 4the exact definition and operation of the lr parser can be found in aho and ullman we can see that there are two multiple entries in the action table on the rows of state 11 and 12 at the column labeled "prep"roughly speaking this is the situation where the parser encounters a preposition of a pp right after a npif this pp
does not modify the np then the parser can go ahead to reduce the np into a higher nonterminal such as pp or vp using rule 6 or 7 respectively if on the other hand the pp does modify the np then the parser must wait until the pp is completed so it can build a higher np using rule 5it has been thought that for lr parsing multiple entries are fatal because once a parsing table has multiple entries deterministic parsing is no longer possible and some kind of nondeterminism is necessarywe handle multiple entries with a special technique named a graphstructured stackin order to introduce the concept we first give a simpler form of nondeterminism and make refinements on itsubsection 21 describes a simple and straightforward nondeterministic technique that is pseudoparallelism in which the system maintains a number of stacks simultaneously called the stack lista disadvantage of the stack list is then describedthe next subsection describes the idea of stack combination which was introduced in the author earlier research to make the algorithm much more efficientwith this idea stacks are represented as trees finally a further refinement the graphstructured stack is described to make the algorithm even more efficient efficient enough to run in polynomial timethe simplest idea would be to handle multiple entries nondeterministicallywe adopt pseudoparallelism maintaining a list of stacks the pseudoparallelism works as followsa number of processes are operated in paralleleach process has a stack and behaves basically the same as in standard lr parsingwhen a process encounters a multiple entry the process is split into several processes by replicating its stackwhen a process encounters an error entry the process is killed by removing its stack from the stack listall processes are synchronized they shift a word at the same time so that they always look at the same wordthus if a process encounters a shift action it waits until all other processes also encounter a shift actionfigure 23 shows a snapshot of the stack list right after shifting the word with in the sentence i saw a man on the bed in the apartment with a telescope using the grammar in figure 21 and the parsing table in figure 22for the sake of convenience we denote a stack with vertices and edgesthe leftmost vertex is the bottom of the stack and the rightmost vertex is the top of the stackvertices represented by a circle are called state vertices and they represent a state numbervertices represented by a square are called symbol vertices and they represent a grammar symboleach stack is exactly the same as a stack in the standard lr parsing algorithmthe distance between vertices does not have any significance except it may help the reader understand the status of the stacksin the figures quotpquot stands for prep and quotdquot stands for det throughout this papersince the sentence is 14way ambiguous the stack has been split into 14 stacksfor example the sixth stack is in the status where i saw a man on the bed has been reduced into s and the apartment has been reduced into npfrom the lr parsing table we know that the top of the stack state 6 is expecting det or n and eventually a npthus after a telescope comes in a pp with a telescope will be formed and the pp will modify the np the apartment and in the apartment will modify the s i saw a manwe notice that some stacks in the stack list appear to be identicalthis is because they have reached the current state in different waysfor example the sixth and seventh stacks are identical because i saw a 
man on the bed has been reduced into s in two different waysa disadvantage of the stack list method is that there are no interconnections between stacks and there is no way in which a process can utilize what other processes have done alreadythe number of stacks in the stack list grows exponentially as ambiguities are encountered3 for example these 14 processes in figure 23 will parse the rest of the sentence the telescope 14 i saw a man on the bed in the apartment with a telescope times in exactly the same waythis can be avoided by using a treestructured stack which is described in the following subsectionif two processes are in a common state that is if two stacks have a common state number at the rightmost vertex they will behave in exactly the same manner until the vertex is popped from the stacks by a reduce actionto avoid this redundant operation these processes are unified into one process by combining their stackswhenever two or more processes have a common state number on the top of their stacks the top vertices are unified and these stacks are represented as a tree where the top vertex corresponds to the root of the treewe call this a treestructured stackwhen the top vertex is popped the treestructured stack is split into the original number of stacksin general the system maintains a number of treestructured stacks in parallel so stacks are represented as a forestfigure 24 shows a snapshot of the treestructured stack immediately after shifting the word within contrast to the previous example the telescope will be parsed only oncealthough the amount of computation is significantly reduced by the stack combination technique the number of branches of the treestructured stack that must be maintained still grows exponentially as ambiguities are encounteredin the next subsection we describe a further modification in which stacks are represented as a directed acyclic graph in order to avoid such inefficiencyso far when a stack is split a copy of the whole stack is madehowever we do not necessarily have to copy the whole stack even after different parallel operations on the treestructured stack the bottom portion of the stack may remain the sameonly the necessary portion of the stack should therefore be splitwhen a stack is split the stack is thus represented as a tree where the bottom of the stack corresponds to the root of the treewith the stack combination technique described in the previous subsection stacks are represented as a directed acyclic graphfigure 25 shows a snapshot of the graph stackit is easy to show that the algorithm with the graphstructured stack does not parse any part of an input sentence more than once in the same waythis is because if two processes had parsed a part of a sentence in the same way they would have been in the same state and they would have been combined as one processthe graphstructured stack looks very similar to a chart in chart parsingin fact one can also view our algorithm as an extended chart parsing algorithm that is guided by lr parsing tablesthe major extension is that nodes in the chart contain more information than in conventional chart parsingin this paper however we describe the algorithm as a generalized lr parsing algorithm onlyso far we have focussed on how to accept or reject a sentencein practice however the parser must not only accept or reject sentences but also build the syntactic structure of the sentence the next section describes how to represent the parse forest and how to build it with our parsing algorithmour parsing 
algorithm is an allpath parsing algorithm that is it produces all possible parses in case an input sentence is ambiguoussuch allpath parsing is of ten needed in natural language processing to manage temporarily or absolutely ambiguous input sentencesthe ambiguity of a sentence may grow exponentially as the length of a sentence grows thus one might notice that even with an efficient parsing algorithm such as the one we described the parser would take exponential time because exponential time would be required merely to print out all parse trees we must therefore provide an efficient representation so that the size of the parse forest does not grow exponentiallythis section describes two techniques for providing an efficient representation subtree sharing and local ambiguity packingit should be mentioned that these two techniques are not completely new ideas and some existing systems algorithm have already adopted these techniques either implicitly or explicitlyif two or more trees have a common subtree the subtree should be represented only oncefor example the parse forest for the sentence i saw a man in the park with a telescope should be represented as in figure 31to implement this we no longer push grammatical symbols on the stack instead we push pointers to a node of the shared forest4 when the parser quotshiftsquot a word it creates a leaf node labeled with the word and the preterminal and instead of the preterminal symbol a pointer to the newly created leaf node is pushed onto the stack lithe exact same leaf node already exists a pointer to this existing node is pushed onto the stack without creating another nodewhen the parser quotreducesquot the stack it pops pointers from the stack creates a new node whose successive nodes are pointed to by those popped pointers and pushes a pointer to the newly created node onto the stackusing this relatively simple procedure our parsing algorithm can produce the shared forest as its output without any other special bookkeeping mechanism because it never does the same reduce action twice in the same mannerwe say that two or more subtrees represent local ambiguity if they have common leaf nodes and their top nodes are labeled with the same nonterminal symbolthat is to say a fragment of a sentence is locally ambiguous if the fragment can be reduced to a certain nonterminal symbol in two or more waysif a sentence has many local ambiguities the total ambiguity would grow exponentiallyto avoid this we use a technique called local ambiguity packing which works in the following waythe top nodes of subtrees that represent local ambiguity are merged and treated by higherlevel structures as if there were only one nodesuch a node is called a packed node and nodes before packing are called subnodes of the packed nodean example of a sharedpacked forest is shown in figure 32packed nodes are represented by boxeswe have three packed nodes in figure 32 one with three subnodes and two with two subnodeslocal ambiguity packing can be easily implemented with our parsing algorithm as followsin the graphstructured stack if two or more symbol vertices have a common state vertex immediately on their left and a common state vertex immediately on their right they represent local ambiguitynodes pointed to by these symbol vertices are to be packed as one nodein figure 25 for example we see one 5way local ambiguity and two 2way local ambiguitiesthe algorithm is made clear by the example in the following sectionrecently the author suggested a technique to disambiguate a 
sentence out of the sharedpacked forest representation by asking the user a minimal number of questions in natural language this section presents three examplesthe first example using the sentence i saw a man in the apartment with a telescope is intended to help the reader understand the algorithm more clearlythe second example with the sentence that information is important is doubtful is presented to demonstrate that our algorithm is able to handle multipartofspeech words without any special mechanismin the sentence that is a multipartofspeech word because it could also be a determiner or a pronounthe third example is provided to show that the algorithm is also able to handle unknown words by considering an unknown word as a special multipartofspeech word whose part of speech can be anythingwe use an example sentence a where s represent unknown wordsthis subsection gives a trace of the algorithm with the grammar in figure 21 the parsing table in figure 22 and the sentence i saw a man in the park with a telescopeat the very beginning the stack contains only one vertex labeled 0 and the parse forest contains nothingby looking at the action table the next action quotshift 4quot is determined as in standard lr parsingcomputational linguistics volume 13 numbers 12 januaryjune 1987 35 masaru tomita an efficient augmentedcontextfree parsing algorithm when shifting the word the algorithm creates a leaf node in the parse forest labeled with the word and its preterminal n and pushes a pointer to the leaf node onto the stackthe next action quotreduce 3 is determined from the action tablenext word aw 0 0 4 911f40 the action quotacceptquot is finally executedit returns quot25quot as the top node of the parse forest and halts the processthis subsection gives a trace of the algorithm with the sentence that information is important is doubtful to demonstrate that our algorithm can handle multipartofspeech words just like multiple entries without any special mechanismwe use the grammar at the right and the parsing table belowat the very beginning the parse forest contains nothing and the stack contains only one vertex labeled 0the first word of the sentence is that which can be categorized as that det or n the action table tells us that all of these categories are legalthus the algorithm behaves as if a multiple entry is encounteredthree actions quotshift 3quot quotshift 4quot and quotshift 5quot are to be executednote that three different leaf nodes have been created in the parse forestone of the three possibilities that as a noun is discarded immediately after the parser sees the next word informationafter executing the two shift actions we have after executing quotshift 10quot we have this time only one leaf node has been created in the parse forest because both shift actions regarded the word as belonging to the same category ie nounnow we have two active vertices and quotreduce 3quot is arbitrarily chosen as the next action to executeafter executing the parser accepts the sentence and returns quot15quot as the top node of the parse forestthe forest consists of only one tree which is the desired structure for that information is important is doubtfulin the previous subsection we saw the parsing algorithm handling a multipartofspeech word just like multiple entries without any special mechanismthat capability can also be applied to handle unknown words an unknown word can be thought of as a special type of a multipartofspeech word whose categories can be anythingin the following we present another 
trace of the parser with the sentence a where s represent an unknown wordwe use the same grammar and parsing table as in the first example at the very beginning we have the possibility of the first unknown word being a preposition has now disappearedthe parser accepts the sentence in only one way and returns quot10quot as the root node of the parse forestwe have shown that our parsing algorithm can handle unknown words without any special mechanismin this section we present some empirical results of the algorithm practical performancesince space is limited we only show the highlights of the results referring the reader to chapter 6 of tomita for more detailfigure 51 shows the relationship between parsing time of the tomita algorithm and the length of input sentence and figure 52 shows the comparison with earley algorithm using a sample english grammar that consists of 220 contextfree rules and 40 sample sentences taken from actual publicationsall programs are run on dec20 and written in maclisp but not compiledalthough the experiment is informal the result show that the tomita algorithm is about 5 to 10 times faster than earley algorithm due to the precompilation of the grammar into the lr tablethe earleytomita ratio seems to increase as the size of grammar grows as shown in figure 53figure 54 shows the relationship between the size of a produced sharedpacked forest representation and the ambiguity of its input sentence the sample sentences are created from the following schema noun verb det noun n1 an example sentence with this structure is i saw a man in the park on the hill with a telescopethe result shows that all possible parses can be represented in almost 0 space where n is the number of possible parses in a sentence5 figure 55 shows the relationship between the parsing time and the ambiguity of a sentencerecall that within the given time the algorithm produces all possible parses in the sharedpacked forest representationit is concluded that our algorithm can parse a very ambiguous sentence with a million possible parses in a reasonable timeso far we have described the algorithm as a pure contextfree parsing algorithmin practice it is often desired for each grammar nonterminal to have attributes and for each grammar rule to have an augmentation to define pass and test the attribute valuesit is also desired to produce a functional structure rather than the contextfree forestsubsection 61 describes the augmentation and subsection 62 discusses the sharedpacked representation for functional structureswe attach a lisp function to each grammar rule for this augmentationwhenever the parser reduces constituents into a higherlevel nonterminal using a phrase structure rule the lisp program associated with the rule is evaluatedthe lisp program handles such aspects as construction of a syntaxsemantic representation of the input sentence passing attribute values among constituents at different levels and checking syntacticsemantic constraints such as subjectverb agreementif the lisp function returns nil the parser does not do the reduce action with the ruleif the lisp function returns a nonnil value then this value is given to the newly created nonterminalthe value includes attributes of the nonterminal and a partial syntacticsemantic representation constructed thus farnotice that those lisp functions can be precompiled into machine code by the standard lisp compilera functional structure used in the functional grammar formalisms is in general a directed acyclic graph rather than a treethis is 
because some value may be shared by two different attributes in the same sentence pereira introduced a method to share dag structureshowever the dag structure sharing method is much more complex and computationally expensive than tree structure sharingtherefore we handle only treestructured functional structures for the sake of efficiency and simplicity6 in the example the quotagreementquot attributes of subject and main verb may thus have two different valuesthe identity of these two values is tested explicitly by a test in the augmentationsharing treestructured functional structures requires only a minor modification on the subtree sharing method for the sharedpacked forest representation described in subsection 31local ambiguity packing for augmented contextfree grammars is not as easysuppose two certain nodes have been packed into one packed nodealthough these two nodes have the same category name they may have different attribute valueswhen a certain test in the lisp function refers to an attribute of the packed node its value may not be uniquely determinedin this case the parser can no longer treat the packed node as one node and the parser will unpack the packed node into two individual nodes againthe question then is how often this unpacking needs to take place in practicethe more frequently it takes place the less significant it is to do local ambiguity packinghowever most of sentence ambiguity comes from such phenomena as ppattachment and conjunction scoping and it is unlikely to require unpacking in these casesfor instance consider the noun phrase a man in the park with a telescope which is locally ambiguous two np nodes will be packed into one node but it is unlikely that the two np nodes have different attribute values which are referred to later by some tests in the augmentationthe same argument holds with the noun phrases pregnant women and children large file equipment although more comprehensive experiments are desired it is expected that only a few packed nodes need to be unpacked in practical applicationsit is in general very painful to create extend and modify augmentations written in lispthe lisp functions should be generated automatically from more abstract specificationswe have implemented the lfg compiler that compiles augmentations in a higher level notation into lisp functionsthe notation is similar to the lexical functional grammar formalism and patr1i an example of the lfglike notation and its compiled lisp function are shown in figures 61 and 62we generate only nondestructive functions with no sideeffects to make sure that a process never alters other processes or the parser control flowa generated function takes a list of arguments each of which is a value associated with each righthand side symbol and returns a value to be associated with the lefthand side symboleach value is a list of fstructures in case of disjunction and local ambiguitythat a semantic grammar in the lfglike notation can also be generated automatically from a domain semantics specification and a purely syntactic grammar is discussed further in tomita and carbonell the discussion is however beyond the scope of this paperour parsing algorithm parses a sentence strictly from left to rightthis characteristics makes online parsing possible ie to parse a sentence as the user types it in without waiting for completion of the sentencean example session of online parsing is presented in figure 71 for the sample sentence i saw a man with a telescopeas in this example the user often wants to hit the 
"backspace" key to correct previously input words. In the case in which these words have already been processed by the parser, the parser must be able to "unparse" the words without parsing the sentence from the beginning all over again. To implement unparsing, the parser needs to store system status each time a word is parsed. Fortunately, this can be done nicely with our parsing algorithm: only pointers to the graph-structured stack and the parse forest need to be stored. It should be noted that our parsing algorithm is not the only algorithm that parses a sentence strictly from left to right; other left-to-right algorithms include Earley's algorithm, the active chart parsing algorithm, and a breadth-first version of ATN. Despite the availability of left-to-right algorithms, surprisingly few online parsers exist. NLMenu adopted online parsing for a menu-based system, but not for typed inputs. In the rest of this section we discuss two benefits of online parsing: quicker response time and early error detection. One obvious benefit of online parsing is that it reduces the parser's response time significantly. When the user finishes typing a whole sentence, most of the input sentence has already been processed by the parser. Although this does not affect CPU time, it could reduce response time significantly from the user's point of view. Online parsing is therefore useful in interactive systems in which input sentences are typed in by the user online; it is not particularly useful in batch systems in which input sentences are provided in a file. Another benefit of online parsing is that it can detect an error almost as soon as the error occurs, and it can warn the user immediately. In this way, online parsing could provide better man-machine communication. Further studies on human factors are necessary. This paper has introduced an efficient context-free parsing algorithm, and its application to online natural language interfaces has been discussed. A pilot online parser was first implemented in MacLisp at the Computer Science Department, Carnegie-Mellon University, as a part of the author's thesis work; the empirical results in Section 5 are based on this parser. The CMU machine translation project adopts online parsing for multiple languages. It can parse unsegmented sentences; to handle unsegmented sentences, its grammar is written in a character-based manner: all terminal symbols in the grammar are characters rather than words. Thus morphological rules as well as syntactic rules are written in the augmented context-free grammar. The parser takes about 13 seconds CPU time per sentence on a Symbolics 3600 with about 800 grammar rules; its response time, however, is less than a second due to online parsing. This speed does not seem to be affected very much by the length of the sentence or the size of the grammar, as discussed in Section 5. We expect further improvements for fully segmented sentences, where words rather than characters are the atomic units. A commercial online parser for the Japanese language is being developed in Common Lisp jointly by Intelligent Technology Incorporation and Carnegie Group Incorporation, based on the technique developed at CMU. Finally, in the continuous speech recognition project at CMU, the online parsing algorithm is being extended to handle speech input, to make the speech parsing process efficient and capable of being pipelined with lower-level processes such as acoustic-phonetic level recognition. I would like to thank
Jaime Carbonell, Phil Hayes, James Allen, Herb Simon, Hozumi Tanaka, and Ralph Grishman for their helpful comments on the early version of this paper. Kazuhiro Toyoshima and Hideto Kagamida have implemented the runtime parser and the LR table compiler, respectively, in Common Lisp. Lori Levin, Teruko Watanabe, Peggy Anderson, and Donna Gates have developed Japanese and English grammars in the LFG-like notation. Hiroaki Saito has implemented the algorithm for speech input. Ron Kaplan, Martin Kay, Lauri Karttunen, and Stuart Shieber provided useful comments on the implementation of LFG and DAG structure sharing.
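As a concrete illustration of the "stack list" technique of Section 2.1, the following Python sketch recognizes sentences of a deliberately tiny ambiguous grammar, S -> S S | a, using a hand-built SLR table that contains one multiple entry (a shift/reduce conflict in state 3 on lookahead "a"). The grammar, the table, the state numbering, and the function recognize are all invented for this illustration; they do not correspond to the grammar of Figure 2.1, the table of Figure 2.2, or Tomita's implementation, and the sketch only splits and kills whole stacks, so it still has the exponential behaviour that the tree-structured and graph-structured stacks are designed to remove.

```python
# A minimal sketch of the "stack list" technique of Section 2.1 on an invented
# toy grammar:  rule 1: S -> S S    rule 2: S -> a
# The SLR table below has one multiple entry (state 3 on lookahead "a");
# the recognizer splits stacks on such conflicts and drops stacks on errors.

GRAMMAR = {1: ("S", 2), 2: ("S", 1)}      # rule number -> (lhs, rhs length)

# ACTION[state][lookahead] -> list of actions; more than one action = conflict.
ACTION = {
    0: {"a": [("shift", 2)]},
    1: {"a": [("shift", 2)], "$": [("accept",)]},
    2: {"a": [("reduce", 2)], "$": [("reduce", 2)]},
    3: {"a": [("shift", 2), ("reduce", 1)], "$": [("reduce", 1)]},
}
GOTO = {(0, "S"): 1, (1, "S"): 3, (3, "S"): 3}


def recognize(words):
    """Return how many stack-list branches accept the input."""
    lookaheads = list(words) + ["$"]
    stacks = [[0]]                  # each stack is just a list of state numbers
    accepted = 0
    for look in lookaheads:
        frontier, stacks = stacks, []
        while frontier:
            stack = frontier.pop()
            for act in ACTION.get(stack[-1], {}).get(look, []):   # split here
                if act[0] == "shift":
                    stacks.append(stack + [act[1]])   # wait for the next word
                elif act[0] == "reduce":
                    lhs, n = GRAMMAR[act[1]]
                    popped = stack[:-n]
                    frontier.append(popped + [GOTO[(popped[-1], lhs)]])
                else:                                 # ("accept",)
                    accepted += 1
            # a stack with no applicable action is silently discarded (error)
    return accepted


if __name__ == "__main__":
    print(recognize(["a", "a", "a"]))   # 2: "a a a" has exactly two parses
```

Replicating whole stacks in this way is exactly what Sections 2.2 and 2.3 improve on: first merging stacks that share a top state, and then sharing common stack prefixes in a directed acyclic graph so that no part of the input is parsed twice in the same way.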
J87-1004
An Efficient Augmented-Context-Free Parsing Algorithm. An efficient parsing algorithm for augmented context-free grammars is introduced, and its application to online natural language interfaces discussed. The algorithm is a generalized LR parsing algorithm, which precomputes an LR shift-reduce parsing table from a given augmented context-free grammar. Unlike the standard LR parsing algorithm, it can handle arbitrary context-free grammars, including ambiguous grammars, while most of the LR efficiency is preserved by introducing the concept of a graph-structured stack. The graph-structured stack allows an LR shift-reduce parser to maintain multiple parses without parsing any part of the input twice in the same way. We can also view our parsing algorithm as an extended chart parsing algorithm efficiently guided by LR parsing tables. The algorithm is fast, due to the LR table precomputation: in several experiments with different English grammars and sentences, timings indicate a five- to tenfold speed advantage over Earley's context-free parsing algorithm. The algorithm parses a sentence strictly from left to right online, that is, it starts parsing as soon as the user types in the first word of a sentence, without waiting for completion of the sentence. A practical online parser based on the algorithm has been implemented in Common Lisp and is running on Symbolics and HP AI workstations. The parser is used in the multilingual machine translation project at CMU. Also, a commercial online parser for the Japanese language is being built by Intelligent Technology Incorporation, based on the technique developed at CMU.
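As a small sketch of the online-parsing bookkeeping described in Section 7 (parse as the user types, and "unparse" on backspace by restoring a stored snapshot rather than reparsing from the beginning), the Python fragment below wraps an arbitrary left-to-right parser step in a checkpointing class. The class name OnlineParser, its methods feed and unparse, and the stand-in step function are invented for illustration; a real implementation would, as the paper suggests, checkpoint only pointers to the graph-structured stack and the parse forest, which works because the parser's operations are non-destructive.

```python
from typing import Any, Callable, List


class OnlineParser:
    """Checkpointing wrapper around a purely functional left-to-right parser
    step (a hypothetical interface, not Tomita's actual runtime parser)."""

    def __init__(self, step: Callable[[Any, str], Any], initial_state: Any):
        self.step = step
        self.checkpoints: List[Any] = [initial_state]   # state after each word

    def feed(self, word: str) -> Any:
        """Parse one more word as soon as the user types it."""
        new_state = self.step(self.checkpoints[-1], word)
        self.checkpoints.append(new_state)
        return new_state

    def unparse(self) -> Any:
        """Undo the most recent word (the user hit backspace): restore the
        previous snapshot instead of reparsing from the beginning."""
        if len(self.checkpoints) > 1:
            self.checkpoints.pop()
        return self.checkpoints[-1]


if __name__ == "__main__":
    # Stand-in step: the "state" is simply the tuple of words parsed so far.
    # A real online parser would instead return pointers to its
    # graph-structured stack and shared-packed parse forest.
    def step(state, word):
        return state + (word,)

    p = OnlineParser(step, initial_state=())
    for w in ["i", "saw", "a", "men"]:
        p.feed(w)
    p.unparse()      # the user corrects the typo "men" ...
    p.feed("man")    # ... and retypes the word
    print(p.checkpoints[-1])   # ('i', 'saw', 'a', 'man')
```

In the setting of the paper, each snapshot amounts to a pair of pointers (to the graph-structured stack and the parse forest), so unparsing a word is cheap and the interactive session of Figure 7.1 never has to reparse from the start.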
an algorithm for generating quantifier scopings the syntactic structure of a sentence often manifests quite clearly the predicateargument structure and relations of grammatical subordination but scope dependencies are not so transparent as a result many systems for representing the semantics of sentences have ignored scoping or generated scopings with mechanisms that have often been inexplicit as to the range of scopings they choose among or profligate in the scopings they allow this paper presents along with proofs of some of its important properties an algorithm that generates scoped semantic forms from unscoped expressions encoding predicateargument structure the algorithm is not profligate as are those based on permutation of quantifiers and it can provide a solid foundation for computational solutions where completeness is sacrificed for efficiency and heuristic efficacy and center for the study of language and information stanford university stanford ca 94305 the syntactic structure of a sentence often manifests quite clearly the predicateargument structure and relations of grammatical subordinationbut scope dependencies are not so transparentas a result many systems for representing the semantics of sentences have ignored scoping or generated scopings with mechanisms that have often been inexplicit as to the range of scopings they choose among or profligate in the scopings they allowthis paper presents along with proofs of some of its important properties an algorithm that generates scoped semantic forms from unscoped expressions encoding predicateargument structurethe algorithm is not profligate as are those based on permutation of quantifiers and it can provide a solid foundation for computational solutions where completeness is sacrificed for efficiency and heuristic efficacy1 introduction present an algorithm that generates quantifier scopings for english sentencesa principal focus of computational linguistics as a branch of computer science ought to be the design of algorithmsa large number of algorithms have undoubtedly been devised for dealing with problems every researcher has to face in constructing a natural language system but they simply have not received wide circulationthese algorithms are part of the quotfolk culturequot buried in the most technical unreadable portions of theses passed among colleagues informally at best and often reinventedit should be a practice to publish these algorithms in isolation independent of a particular implementation or systemthis paper constitutes an effort to initiate such a practicea problem that many naturallanguage efforts have faced is the recovery of implicit semantic scope dependency possibilities such as those manifest in quantifiers and modals from predicateargument relations and relations of grammatical subordination which are more or less transparently conveyed by the syntactic structure of sentencesprevious computational efforts typically have not been based on an explicit notion of the range of possible scopingsin response to this problem we the naive algorithm for generating quantifier scopings is to generate all permutations of the quantifiersfor a sentence with n quantified noun phrases this will generate n different readingsbut for the sentence there are not six different readings but only fivethe reading that is missing is the one in which most samples is outscoped by every representative but outscopes a companya model for the disallowed reading could include a different company not only for each representative but also 
for each samplethe reduction in number of readings for a given sentence is not significant for sentence but in the sentence there are only 42 valid readings as opposed to the 120 readings the naive algorithm would generate and this copyright 1987 by the association for computational linguisticspermission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the cl reference and this copyright notice are included on the first pageto copy otherwise or to republish requires a fee andor specific permission constitutes a significant difference indeedthe recent trend in computational linguistics has been to view more and more noun phrases as well as other constituents as introducing quantifiers so that sentences with this much quantificational complexity are not at all unusualthis observation of quotillegitimate readingsquot is not intended as a new or controversial claim about an idiosyncrasy of englishit accords well with semantic judgments about the possibility of such readingsfor instance we find it impossible to view sentence as expressing that for each representative there was a group of most samples which he saw and furthermore for each sample he saw there was a company he was a representative ofwe can find the same problem of illegitimate readings in the standard account of the quotcooper storagequot mechanism for generating quantifier scopings cooper method generates an expression in intensional logic for the illegitimate readings but the expression contains an unbound variable and a vacuous quantifierfinally the observation follows merely syntactically from the illformedness of certain logical form expressionslet us examine why this is sothe propositional content of a sentence can be seen as combining specifications that restrict the range of quantified entities together with assertions about the entities so specifiedthis intuition is often made formal in the use of logical languages that syntactically separate the notion of the range of a quantified expression from its scope by placing the information about the range in a part of the expression we call the restriction and the assertions in a part called the bodythe separation of these two semantic roles of range and scope into restriction and body as an important fact of the logical structure of english can be seen for example in woods fourpart quantifier structures in the recommendations of moore and in the generalized quantifier research of barwise and cooper and othersthe latter have demonstrated the necessity of such a separation for quantifiers other than the standard firstorder ones but under this understanding of english logical structure it follows that no sixth reading exists for sentence aboveconsider the reading in which the universal outscopes the most which outscopes the existential in the logical form for this sentencethen using the notation of moore for fourpart quantifier structures the logical form must have the following structure all since the universal is outermostnow the existential is within the scope of the universal by hypothesis and since it provides a restriction on the range of the variable r it must occur in the restriction of the quantifierthus we have all some of saw but where can the quantifier most be put to bind the variable s corresponding to the samples seenit must outscope its occurrence in the body of the universal but it must also by hypothesis outscope the existential in the restriction of the universalto outscope 
both it must outscope the universal itself but this violates the assumed scope relationsthus no such reading is possibleby a similar argument it follows from the logical structure of english that in general a quantifier from elsewhere in a sentence cannot come after the quantifier associated with a head noun and before the quantifier associated with a noun phrase in the head noun complementmost research in linguistic semantics eg montague and cooper has concentrated on explicitly defining the range of possible scope relationships that can be manifested in sentencesbut to our knowledge all fall prey to the profligacy of generation just outlinedwe are concerned here only with suppressing readings that are spurious for purely structural reasons that is for reasons that follow from the general relationship between the structure of sentences and the structure of their logical forms and independent of the meanings of the particular sentencesfor instance we are not concerned with logical redundancies such as those due to the commutativity of successive universal quantifierswhen we move beyond the two firstorder logical quantifiers to deal with the socalled generalized quantifiers such as most these logical redundancies become quite raresimilarly we are not concerned with the infelicity of certain readings due to lexical semantic or world knowledge such as the fact that a child cannot outscope every man in the sentence i have met a child of every man in this roomcomputational research on quantifier scoping has emphasized generating a single scoping which can be thought of as heuristically primary as discussed by for example woods pereira and grosz et al we are concerned not with generating the best reading but with generating all readingsthe reader may object that it is inappropriate in a practical natural language system to generate scopings one by one for testing against semantic and pragmatic criteriainstead one should appeal to various heuristics to generate only the most likely reading or at least to generate readings in order of their plausibilitythese include the following relational head noun usually outscopes the head noun and we are sympathetic with this viewnevertheless there are several reasons that codifying a complete algorithm remains usefulfirst a complete and sound algorithm provides a benchmark against which other approaches can be testedsecond one may actually wish to use a generateandtest mechanism in simpler implementations and it should be correct and as efficient as possibleit should not generate scopings that can be ruled out on purely structural groundsfinally the algorithm we present might be modified to incorporate heuristics to generate scopings in a certain order or only certain of the scopingsthe soundness and correctness of the underlying algorithm provide a guarantee of soundness for a heuristically guided versionwe include a few comments below about incorporating ordering heuristics into our scoping generation algorithm although we should point out that the possibilities are somewhat limited due to the local nature of where the heuristics can be applieda full discussion of heuristicallyguided scoping generation is of course beyond the scope of this paperin addition to handling the scoping of quantifiers relative to each other the algorithm we present also allows quantifiers to be scoped within or outside of opaque arguments of higherorder predicatesfor instance the algorithm generates two readings for the sentence everyone is not here corresponding to the two 
relative scopings of the universal quantifier and the negationin the discussion below we assume that parsing has made explicit the predicateargument relations and the relations of grammatical subordination in the form of a logical encoding in an input languagea wellformed formula in the input language is a predicate or other operator applied to one or more argumentsan argument can be a constant or variable another wff or what we will call a complex terma complex term is an ordered triple consisting of a quantifier a variable and a wff which represents the predication that is grammatically subordinated to the variablethe input representation for sentence is then the following a complex term can be read quotquantifier variable such that restrictionquot eg quotmost c such that c is a companyquotthe output language is identical to the input language except that it does not contain complex termsquantifiers are expressed in the output language as operators that take three arguments the variable bound by the quantifier a wff restricting the range of the quantified variable and the body scoped by the quantification schematically quantifier this encoding of quantification is the same as that found in woods and moore we will refer to such expressions as quantified wffsthus one reading for sentence is represented by the following quantified wff intermediate structures built during the course of scoping include both complex terms and quantified wffswe use the term full scoping for an expression in the output language ie one that has no complex termswe also will use the terms bound and free as follows an expression binds a variable v if the expression is of the form or q where g is a quantifierthe variable v is said to be bound in the expressions r or r and s respectivelya variable v is unbound or free in an expression a if there is an occurrence of v in a that is not also an occurrence in a subexpression of a binding v note that here quantified wffs and complex terms are both thought of as expressions binding a variablewe present both nondeterministic and deterministic versions of the algorithm3 in an algollike languageboth algorithms however have the same underlying structure based on the primitive operation of quotapplyingquot a complex term to a wff in which it occurs a complex term in a wff is replaced by the variable it restricts and that variable is then bound by wrapping the entire form in the appropriate quantifierthus applying the term to a wff containing that complex term say p yields the quantified wff computational linguistics volume 13 numbers 12 januaryjune 1987 49 jerry r hobbs and stuart m shieber an algorithm for generating quantifier scopings q pthis is the primitive operation by which complex terms are removed from a wff and quantified wffs are introducedit is implemented by the function applythe generation of a scoping from a wff proceeds in two stagesfirst the opaque argument positions within the wff are scopedthe function pullopaqueargs performs this task by replacing wffs in opaque argument positions by a scoping of the original wfffor instance if p were a predicate opaque in its only argument then for the wff p pullopaqueargs would generate the wff p s or the unchanged wff pin the former the opaque predicate p outscopes the quantifier qin the latter the quantifier q has not been applied yet and the wff will subsequently yield readings in which q has wider scope than p second some or all of the remaining terms are applied to the entire wffthe function applyterms iteratively 
chooses a complex term in the wff and applies itthus applyterms acting upon the wff depending on how many quantifiers are applied and in what orderthe choice of a complex term is restricted to a subset of the terms in the wff the socalled applicable termsthe principal restriction on applicable terms is that they not be embedded in any other complex term in the wffsection 41 discusses a further restrictionthe function applicableterm returns an applicable term in a given wffthese two stages are manifested in the function pull which generates all partial or full scopings of a wff by invoking pullopaqueargs and applytermssince ultimately only full scopings are desired an additional argument to pull and applyterms controls whether partial scopings are to be returnedwhen this flag complete is true applyterms and hence pull will return only expressions in which no more complex terms remain to be applied for example only the last two of the five readings abovefinally the restrictions of the complex terms may themselves contain complex terms and must be scoped themselvesthe apply function therefore recursively generates the scopings for the restriction by calling pull on that restriction and a quantified wff is generated for each possible partial or complete scoping of the restrictionschematically in the simplest case for the a subsequent application of the remaining complex term will yield the quotwide scopequot reading q2 qi p the disallowed readings produced by the quotall permutationsquot algorithm are never produced by this algorithm because it is everywhere sensitive to the fourpart quantifier structure of the target logical formthe difference between the nondeterministic and deterministic versions lies only in their implementation of the choice of terms and returning of valuesthis is done either nondeterministically or by iterating through and returning explicit sets of possibilitiesa nondeterministic prolog version and a deterministic common lisp version of the algorithm are given in appendices a and bthe full text of these versions is available from the authorsa variant of the common lisp version is currently being used at sri international to generate scopings in the klaus systemin the specifications below the let construct implements local variable assignmentall assignments are done sequentially not in parallelthe syntax is let in the entire expression returns what the body returnsdestructuring by pattern matching is allowed in the assignments for example let term in simultaneously binds quant var and restrict to the three corresponding components in termthe symbol quotquot is used for assignment lambda is an anonymousfunctionforming operatorits syntax is lambda where is free in we assume lexical scoping in lambda expressionsthe statement quotreturn valuequot returns a value from a functionthe binary function map applies its second argument to each of the elements of its first argument it returns a corresponding list of the values of the individual applicationsthe function integers returns a list of the integers in the range lower to upper inclusive and in order the function length is obviousthe expression listn returns the nth element of the list listthe function subst substitutes x for all occurrences of y in exprthe unary function predicate returns the main predicate in a wffthe unary function arguments returns a list of the arguments in a wffapplied to two arguments wff is a binary function that takes a predicate name and a list of arguments and returns the wff consisting of the 
application of the predicate to the argumentsapplied to four arguments wff is a quaternary function that takes a quantifier name a variable name a restriction and a body and returns the quantified wff consisting of the binding of the variable by the quantifier in the restriction and bodythe binary predicate opaque returns true if and only if the predicate is opaque in its nth argumentit is naturally assumed that opaque argument positions are filled by wff expressions not termseach of the unary predicates wff term and quantifier returns true if and only if its argument is a wff a complex term or a quantifier operator respectivelyin the nondeterministic version of the algorithm there are three special language constructsthe unary predicate exists evaluates its argument nondeterministically to a value and returns true if and only if there exist one or more values for the expressionthe binary operator quotail bquot nondeterministically returns one of its arguments the function term nondeterministically returns a complex term in formfinally the function applicableterm nondeterministically returns a complex term in form that can be applied to formthe nondeterministic version of the algorithm is as followsthe function gen nondeterministically returns a valid full scoping of the formula form function gen return pullthe function pull nondeterministically returns a valiciscoping of the formula formif complete is true then only full scopings are returned otherwise partial scopings are allowed as wellthe function pullopaqueargs when applied to a wff returns a wff generated from form but with arguments in opaque argument positions replaced by a valid scoping of the original valuesince the recursive call to pull has complete set to false the unchanged argument is a nondeterministic possibility even for opaque argument positionswhen applied to any other type of expression form is unchanged function pullopaqueargs if not then return form else let predicate predicate the function applyterms chooses function applyterms several terms in form nondeterministically and applies if not them to formif complete is true then only full scopings then return form are returned else let scopedform applyterms form complete in if complete then return scopedform else return scoped form ii formcomputational linguistics volume 13 numbers 12 januaryjune 1987 51 jerry r hobbs and stuart m shieber an algorithm for generating quantifier seopings the function apply returns a wff consisting of the given complex term term applied to a form form in which it occursin addition the restriction of the complex term is recursively scoped function apply let term return wff substfor the deterministic version of the algorithm there are five special language constructsthe unary predicate empty returns true if and only if set is emptypaired braces quoti iquot constitute a setforming operatorthe binary function union applies its second argument to each of the elements of its first argument it returns a corresponding set of the values of the individual applicationsthe binary infix operator you returns the union of its two arguments the function crossproduct takes a list of sets as its argument and returns the set of lists corresponding to each way of taking an element from each of the sets in orderfor example the function terms returns the set of all complex terms in formthe function applicableterms returns the set of all complex terms in form that can be applied to formthe deterministic version of the algorithm is identical in structure to the 
nondeterministic versioneach function operates in the same way as its nondeterministic counterpart except that they uniformly return sets rather than nondeterministically returning single valuesthe algorithm is as followsthe function gen returns a set of all valid full scopings of the formula form function gen return pullthe function pull returns a set of all valid scopings of the formula formif complete is true only full scopings are returned otherwise partial scopings are allowed as wellthe function pullopaqueargs returns a set of all wffs generated from form but with arguments in opaque argument positions replaced by a valid scoping of the original valuesince the recursive call to pull has complete set to false the unchanged argument is a possibility even for opaque argument positionswhen applied to any other type of expression the argument is unchangedthe function applyterms returns a set of scopings of form constituting all of the ways of choosing several terms in form and applying them to formif complete is true then only the full scopings are returnedthe function apply returns a set of all wffs consisting of the given complex term term applied to the form form in which it occurs with the restriction of the complex term recursively scoped in all possible ways function apply let term in returnsince the algorithm is not completely transparent it may be useful to work through the deterministic version for a detailed examplethe predicateargument structure of this sentence may be represented as follows suppose gen is called with expression as formsince this is the representation of the whole sentence pull will be called with complete equal to truethe call to pullopaqueargs will return the original wff unchanged since there are no opaque operators in the wffwe therefore call applyterms on the wffin applyterms the call to applicableterms returns a list of all of the unnested complex termsfor there will be two each of these complex terms will ultimately yield the wffs in which its variable is the more deeply nested of the twothe function apply is called for each of these complex terms and inside apply there is a recursive call to pull on the restriction of the complex termthis generates all the possible scopings for the restrictionwhen apply is called with as form and as term the result of scoping the restriction of will be the following four wffs because this call to pull has complete equal to false the unprocessed restriction itself wff as well as the partially scoped wff is returned along with the fully scoped forms of the restrictionwff will ultimately generate the two readings in which variables d and c outscope r wff is also partial as it still contains a complex termit will ultimately yield a reading in which r outscopes d but is outscoped by c the complex term for c is still available for an application that will give it wide computational linguistics volume 13 numbers 12 januaryjune 1987 53 jerry r hobbs and stuart m shieber an algorithm for generating quantifier scopings scopewffs and will ultimately yield readings in which d and c are outscoped by r each of these wffs becomes the restriction in a quantified wff constructed by applythus from restriction apply will construct the quantified wff some in in applyterms the tail recursion turns the remaining complex terms into quantifiers with wide scopethus in c and s will be given wider scope than r and d for example one of the readings generated from wff will be sentence by the way has 14 different readingsas an example of the 
operation of the algorithm on a wff with opaque operators we consider the sentence everyone is not herethis has the predicateargument structure not where not is an operator opaque in its only argumentthe call to pullopaqueargs returns the two scopings not nothere the call to applyterms then turns the first of these into everynot thus the following two full scopings are generated everynot nothere note that because of the recursive call in pullopaqueargs these two readings will be generated even if this form is embedded within other transparent predicatesthe notion of applicable term used above was quite simplea complex term was applicable to a wff if it was embedded in no other complex term within the wffthe restriction is motivated by the following considerationsuppose the input wff is the remaining complex term would include a free occurrence of y so that when it is later applied resulting in the formula the variable y occurs free in the restriction of q thus it is critical that a term never be applied to a form when a variable that is free in the term is bound outside of it in the formthe simple definition of applicability goes part of the way towards enforcing this requirementunfortunately this simple definition of applicability is inadequateif x had itself been free in the embedded complex term as in the wff the application of the outer term followed by the inner term would still leave an unbound variable namely xthis is because the inner term which uses x has been applied outside the scope of the binder for xsuch structures can occur for instance in sentences like the following where an embedded noun phrase requires reference to its embedding noun phrase5 every man that i know a child of has arrivedevery man with a picture of himself has arrivedin these two sentences the quantifier a cannot outscope every because the noun phrase beginning with a embeds a reference to every manif a were to outscope every then himself or the trace following child of would be outside the scope of every manthe definition of applicable term must be modified as followsa term in a wff is applicable to the wff if and only if all variable occurrences that are free in the term are free in the wff as wellour previous definition of applicability that the term be unembedded in another term in the wff is a simple consequence of this restrictionthe versions of the algorithm given in appendices a and b define the functions applicableterm and applicableterms in this waygiven this definition the algorithm can be shown never to generate unbound variables a full discussion of heuristic rules for guiding generation of quantifier scopings is outside of the aims of this paperhowever certain ordering heuristics can be incorporated relatively easily into the algorithm merely by controlling the way in which nondeterministic choices are madewe discuss a few examples here merely to give the flavor for how such heuristics might be addedfor instance suppose we want to favor the original lefttoright order in the sentencethe function applicableterms should return the complex terms in righttoleft order since quantifiers are extracted from the inside outthe union in line should return form after scopedformsif we want to give a noun phrase wide scope when it occurs as a prepositional phrase noun complement to a function word eg every side of a triangle then form should come before scopedform in line when pull has been called from line in apply where the first argument to apply is a complex term for a noun phrase satisfying those 
conditions eg the complex term for every side of a trianglethe modifications turn out to be quite complicated if we wish to order quantifiers according to lexical heuristics such as having each outscope somebecause of the recursive nature of the algorithm there are limits to the amount of ordering that can be done in this mannerat the most we can sometimes guarantee that the best scoping comes firstof course one can always associate a score with each reading as it is being generated and sort the list afterwardsthe algorithm as presented will operate correctly only for input structures that are themselves wellformedfor instance they must contain no unbound variablescertain natural language phenomena such as the socalled donkey sentences exhibit structures that are illformed with respect to the assumptions made by this algorithmfor instance the sentence every man who owns a donkey beats it has an illformed input structure because the pronoun has to reach inside the scope of an existential quantifier for its antecedentits predicateargument structure might be something like an alternative is to leave the pronoun unanalyzed in which case the closest reading produced by the algorithm is in fact this is not bad if we take it to mean that x is nonhuman and that x is mentioned in the prior discourse in a position determined by whatever coreference resolution process is usedthere is a problem if we take the quantifier the to mean that there is a unique such x and take the sentence to mean that a man who owns many donkeys will beat every donkey he ownsbut we can get around this if following the approach taken by hobbs we take a donkey to be generic take it to refer to the unique generic donkey that m owns and assume that to beat a generic donkey is to beat all its instancesin any case modifications to the algorithm would be needed to handle such anaphora phenomena in all their complexitywe have presented an algorithm for generating exactly those quantifier scopings that are consistent with the logical structure of englishwhile this algorithm can sometimes result in a significant savings over the naive approach it by no means solves the entire quantifier scoping problem as we have already pointed outthere has already been much research on the problem of choosing the preferred reading among these allowable ones but the methods that have been suggested need to be specified in an implementationfree fashion more precisely than they have been previously and they need to be evaluated rigorously on large bodies of naturalistic datamore important methods need to be developed for using pragmatic considerations and world knowledge particularly reasoning about quantities and dependencies among entities to resolve quantifier scope ambiguities and these methods need to be integrated smoothly with the other kinds of syntactic semantic and pragmatic processing required in the interpretation of natural language textswe have profited from discussions about this work with paul martin and fernando pereira and from the comments of the anonymous reviewers of the paperthis research was supported by nih grant lm03611 from the national library of medicine by grant ist8209346 from the national science foundation and by a gift from the system development foundationthe following is the core of a prolog implementation of the nondeterministic algorithm which includes all but the lowest level of routinesthe syntax is that of edinburgh prologs eg dec20 prologrepresentation of wffs a wff of the form p is represented as the prolog 
term wff where argi is the encoding of the subexpression argia constant term is represented by the homonymous prolog constanta complex term is represented by the prolog term term where restrict is the encoding of the wff that forms the restriction of the quantifierform a wff with inplace complex terms complete true iff only full scopings are allowed scopedform a term or a wff with inplace complex terms scopedform a wff with inplace complex terms complete true iff only full scopings are allowed scopedform a complex term form the wff to apply term to newform an expression in the logical form language term an expression in the logical form language term a list of variables bound along the path so far a term is an applicable toplevel term applicable_termterm bvs if it meets the definition not an applicable term of the restriction or body of a quantifier is applicable only if the variable bound by the quantifier is not free in the term applicable_termterm bvs quantifier applicable_term note the absence of a rule looking for applicable terms inside of complex termsthis limits applicable terms to be toplevelthe following is the core of a common lisp implementation of the deterministic algorithm which includes all but the lowest level of routines a wff of the form p is represented as the sexpression where argi is the encoding of the subexpression argia constant term is represented by the homonymous lisp atoma complex term is represented by the sexpression where restrict is the encoding of the wff that forms the restriction of the quantifierimplementation notes the following simple utility functions are assumed mapunion implements the binary function union crossproduct implements the function crossproduct opaque implements the binary function opaque integers implements the binary function integers the infix union is implemented with cl function union the binary prefix union is implemented under the name mapunion to avoid conflict with the cl function union the function apply is implemented under the name applyq to avoid conflict with the cl function applythis appendix includes informal proofs of some important properties of the nondeterminisitc version of the presented algorithmfirst we present a proof of the termination of the algorithmseveral criteria of the partial correctness of the algorithm are also informally shown especially that the algorithm does not generate wffs with unbound variableshowever we do not prove correctness in the sense of showing that the algorithm is semantically sound ie that it yields wffs with interpretations consistent with the interpretation of the input expression simply because we do not provide a semantics for the input languagewe do not attempt to prove completeness for the algorithm as the concept of completeness is open to interpretation depending as it does on just which scopings one deems possible but we expect that the algorithm is complete in the sense that every permutation of quantifiers respecting the considerations in the introduction is generatedwe also do not prove the nonredundancy of the nondeterminism in the algorithm ie that the algorithm will not generate the same result along different nondeterministic paths although we believe that the algorithm is nonredundantwe will use lower greek letters as variables ranging over expressions in the logical form languagewe inductively define a metric p on expressions in the logical form language as follows we will give an informal proof of termination for the nondeterministic algorithm by induction on this 
metric p but first we present without proof three simple but useful properties of the metriclemma 1 if a is a wff then p 0 if and only if a contains no complex termslemma 2 if a is a wff and 0 is a subexpression of a and p 0 then p 0the conditions are proved sequentiallyin particular earlier conditions for the case p n are used in the proofs of later onescondition 1 we must show that pullopaqueargs terminates with result then app1yrq v r a q where y pull and 8 substnow let p m by lemma 2 m is an applicable term in a and youyou and v v then you you and v v as wellthe unbound variables you in a can be divided into two sets you and you where you consists of those variables in you that occur in r and you consists of those variables in you that occur outside of t in anote that you you you younow assume x occurs in r then you x you youyou uo where 140 is the set of variables bound within a but outside of t and which occur free in r but t is an applicable term and by the definition of quotapplicable termquot uo must be emptyso you x you you youlet are pull and s substby the induction hypothesis you fx1 you you aresince s does not include t but does include x you 1x1 you youin forming the quantified wff 3 q the unbound variables in g consist of those in and those in s except for x that is vacuous quantified variables can be divided similarly into v and v again v v you vtrivially v vby induction v v alsosince s does not include t v vv v you v v unless the quantification of x in 13 is vacuoussince x is guaranteed to occur in s the quantification is clearly not vacuousso v v applyterms this follows straightforwardly from the previous subproof for apply and the induction hypothesis for applyterms pullopaqueargs if a is not a wff then the proof is trivialotherwise there are two cases depending on whether the predicate in a p is or is not a quantifierif p is not a quantifier then the result follows immediately from the induction hypothesis for pull and pullopaqueargsif p is a quantifier then let a pthe output 13 then is wff pullopaqueargs pullopaqueargsthe first call to pullopaqueargs merely returns xnow by an argument similar to that given in the subproof for apply the unbound variables in a can be exhaustively divided into you and you depending on whether they occur in r and s depending on whether x occurs in r you fx1 you you or you you aresimilarly you x you you or you yousuppose the second and third calls to pullopaqueargs return r and s respectivelyby the induction hypotheses you you and you youif the quantification of x in a is not vacuous then x occurs free in either r or s so you x you you you you xl youif the quantification of x is vacuous then you you and you you and vacuous quantified variables can be divided into v and vs similarlysuppose the quantification of x is vacuous then v v x you v you v by the induction hypothesis v v v and v v valso by induction x does not occur free in or stherefore the quantification of x in p is also vacuous and v x you vyou v vif the quantification of x is not vacuous then v v you v and x occurs free in either r or s by inducation x occurs free in either or s so the quantification of x in g is also nonvacuousalso by induction as before v v and v v so pull this follows directly using the previously proved induction steps for applyterms and pullopaqueargs gen this follows directly using the previously proved induction step for pullthis concludes the proof of the induction step and the theoremthe second and third criteria follow from the presumed wellformedness 
of a and theorem 2 which demonstrates that gen maintains wellformednessthe fourth and fifth criteria we argue informally as follows since no complex terms occur in g we can assume that every complex term t in a was applied at some time in the processing of abut if it was applied then it must have been an applicable term occurring in the wff it was applied to then the call to subst in apply will not be vacuous the quantifier will bind the same variable as t and will outscope the position held by t in athus the fourth criterion holdsalso note that all quantifiers in 13 are either the result of such an application or were in a originallythus the fifth criterion follows immediately as well
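as a quick illustration of the generation step described above, the following python sketch re-renders the core pull/apply idea on a simplified representation. it is not the paper's prolog or lisp appendix code: the tuple encoding, the function names (free_vars, applicable_terms, full_scopings), and the restriction to transparent predicates with no opaque operators are my own simplifying assumptions, and the sketch under-generates relative to the full algorithm because a term nested inside another term is only ever pulled within that term's restriction, never to wider scope.

# illustrative sketch only, not the paper's appendix code.
# wffs are tuples ('pred', arg1, ...); complex terms are
# ('term', quantifier, var, restriction); variables are strings
# beginning with '?'; anything else is a constant.

def is_term(x):
    return isinstance(x, tuple) and len(x) == 4 and x[0] == 'term'

def free_vars(x, bound=frozenset()):
    # variables occurring in x that are not bound on the path to them
    if isinstance(x, str):
        return {x} if x.startswith('?') and x not in bound else set()
    if is_term(x):
        return free_vars(x[3], bound | {x[2]})
    if isinstance(x, tuple):
        out = set()
        for arg in x[1:]:
            out |= free_vars(arg, bound)
        return out
    return set()

def substitute(x, target, replacement):
    # replace every occurrence of a complex term by its variable
    if x == target:
        return replacement
    if isinstance(x, tuple):
        return (x[0],) + tuple(substitute(a, target, replacement) for a in x[1:])
    return x

def top_level_terms(x):
    # complex terms of x not embedded inside another complex term
    if is_term(x):
        return [x]
    if isinstance(x, tuple):
        out = []
        for arg in x[1:]:
            out += top_level_terms(arg)
        return out
    return []

def applicable_terms(wff):
    # the condition discussed above: every variable free in the term
    # must also be free in the wff
    wff_free = free_vars(wff)
    return [t for t in top_level_terms(wff) if free_vars(t) <= wff_free]

def full_scopings(wff):
    # try each applicable term in turn (the nondeterministic choice),
    # then recurse on its restriction and on the wff with the term
    # replaced by its bound variable
    terms = applicable_terms(wff)
    if not terms:
        return [wff]
    results = []
    for t in terms:
        _, quantifier, var, restriction = t
        body = substitute(wff, t, var)
        for r in full_scopings(restriction):
            for b in full_scopings(body):
                results.append((quantifier, var, r, b))
    return results

# example: "every man saw a woman" yields both quantifier orderings
wff = ('saw', ('term', 'every', '?x', ('man', '?x')),
              ('term', 'a', '?y', ('woman', '?y')))
print(full_scopings(wff))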
J87-1005
an algorithm for generating quantifier scopings the syntactic structure of a sentence often manifests quite clearly the predicate-argument structure and relations of grammatical subordination but scope dependencies are not so transparent as a result many systems for representing the semantics of sentences have ignored scoping or generated scopings with mechanisms that have often been inexplicit as to the range of scopings they choose among or profligate in the scopings they allow this paper presents along with proofs of some of its important properties an algorithm that generates scoped semantic forms from unscoped expressions encoding predicate-argument structure the algorithm is not profligate as are those based on permutation of quantifiers and it can provide a solid foundation for computational solutions where completeness is sacrificed for efficiency and heuristic efficacy we extend this formalism to support operators and present an enumeration algorithm that is more efficient than the naive wrapping approach we presented an algorithm to generate quantifier scopings from a representation of predicate-argument relations and the relations of grammatical subordination we introduce an algorithm for generating all possible quantifier scopings
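the summary above claims that the generated scopings never contain unbound variables; the small python checker below only illustrates what that property rules out, using the same simplified tuple notation as the sketch earlier (quantified wffs as (quantifier, var, restriction, body), variables as strings beginning with '?'). the checker, the toy quantifier inventory, and the predicate names are my own illustration, not part of the paper.

# illustration of the "no unbound variables" property; my own helper,
# not from the paper. a quantified wff is (quantifier, var, restriction,
# body) with var bound in both restriction and body.

QUANTIFIERS = {'every', 'some', 'a', 'the'}   # assumed toy inventory

def unbound_vars(expr, bound=frozenset()):
    if isinstance(expr, str):
        return {expr} if expr.startswith('?') and expr not in bound else set()
    if isinstance(expr, tuple) and expr[0] in QUANTIFIERS:
        _, var, restriction, body = expr
        return (unbound_vars(restriction, bound | {var})
                | unbound_vars(body, bound | {var}))
    if isinstance(expr, tuple):
        out = set()
        for arg in expr[1:]:
            out |= unbound_vars(arg, bound)
        return out
    return set()

# the illegitimate wide-scope reading discussed in the text above, in
# which the embedded quantifier escapes its embedding one, leaves '?x'
# unbound in the restriction of 'a':
bad = ('a', '?y', ('picture-of', '?y', '?x'),
       ('every', '?x', ('man-with', '?x', '?y'), ('arrived', '?x')))
print(unbound_vars(bad))    # -> {'?x'}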
grammatical category disambiguation by statistical optimization several algorithms have been developed in the past that attempt to resolve categorial ambiguities in natural language text without recourse to syntactic or semantic level information an innovative method was recently developed by those working with the lancaster oslobergen corpus of british english this algorithm uses a systematic calculation based upon the probabilities of cooccurrence of particular tags its accuracy is high but it is very slow and it has been manually augmented in a number of ways the effects upon accuracy of this manual augmentation are not individually known the current paper presents an algorithm for disambiguation that is similar to claws but that operates in linear rather than in exponential time and space and which minimizes the unsystematic augments tests of the algorithm using the million words of the brown standard corpus of english are reported the overall accuracy is 96 this algorithm can provide a fast and accurate front end to any parsing or natural language processing system for english every computer system that accepts natural language input must if it is to derive adequate representations decide upon the grammatical category of each input word in english and many other languages tokens are frequently ambiguous they may represent lexical items of different categories depending upon their syntactic and semantic context several algorithms have been developed that examine a prose text and decide upon one of the several possible categories for a given word our focus will be on algorithms which specifically address this task of disambiguation and particularly on a new algorithm
called volsunga which avoids syntacticlevel analysis yields about 96 accuracy and runs in far less time and space than previous attemptsthe most recent previous algorithm runs in np time while volsunga runs in linear timethis is provably optimal no improvements in the order of its execution time and space are possiblevolsunga is also robust in cases of ungrammaticalityimprovements to this accuracy may be made perhaps the most potentially significant being to include some higherlevel informationwith such additions the accuracy of statisticallybased algorithms will approach 100 and the few remaining cases may be largely those with which humans also find difficultyin subsequent sections we examine several disambiguation algorithmstheir techniques accuracies and efficiencies are analyzedafter presenting the research carried out to date a discussion of volsunga s application to the brown corpus will followthe brown corpus described in kucera and francis is a collection of 500 carefully distributed samples of english text totalling just over one million wordsit has been used as a standard sample in many studies of englishgenerous advice encouragement and assistance from henry kucera and w nelson francis in this research is gratefully acknowledgedthe problem of lexical category ambiguity has been little examined in the literature of computational linguistics and artificial intelligence though it pervades english to an astonishing degreeabout 115 of types and over 40 of tokens in english prose are categorically ambiguous the vocabulary breaks down as shown in table 1 a search of the relevant literature has revealed only three previous efforts directed specifically to this problemthe first published effort is that of klein and simmons a simple system using suffix lists and limited frame rulesthe second approach to lexical category disambiguation is taggit a system of several thousand contextframe rulesthis algorithm was used to assign initial tags to the brown corpusthird is the claws system developed to tag the lancaster oslobergen corpusthis is a corpus of british written english parallel to the brown corpusparsing systems always encounter the problem of category ambiguity but usually the focus of such systems is at other levels making their responses less relevant for our purposes hereklein and simmons describe a method directed primarily towards the task of initial categorial tagging rather than disambiguationits primary goal is avoiding quotthe labor of constructing a very large dictionaryquot a consideration of greater import then than nowthe klein and simmons algorithm uses a palette of 30 categories and claims an accuracy of 90 in taggingthe algorithm first seeks each word in dictionaries of about 400 function words and of about 1500 words which quotare exceptions to the computational rules usedquot the program then checks for suffixes and special characters as clueslast of all context frame tests are appliedthese work on scopes bounded by unambiguous words as do later algorithmshowever klein and simmons impose an explicit limit of three ambiguous words in a rowfor each such span of ambiguous words the pair of unambiguous categories bounding it is mapped into a listthe list includes all known sequences of tags occurring between the particular bounding tags all such sequences of the correct length become candidatesthe program then matches the candidate sequences against the ambiguities remaining from earlier steps of the algorithmwhen only one sequence is possible disambiguation is 
successfulthe samples used for calibration and testing were limitedfirst klein and simmons performed quothand analysis of a sample size unspecified of golden book encyclopedia textquot later quotwhen it was run on several pages from that encyclopedia it correctly and unambiguously tagged slightly over 90 of the wordsquot further tests were run on small samples from the encyclopedia americana and from scientific americanklein and simmons assert that quotoriginal fears that sequences of four or more unidentified parts of speech would occur with great frequency were not substantiated in factquot this felicity however is an artifactfirst the relatively small set of categories reduces ambiguitysecond a larger sample would reveal both lowfrequency ambiguities and many long spans as discussed belowgreene and rubin developed taggit for tagging the brown corpusthe palette of 86 tags that taggit uses has with some modifications also been used in both claws and volsungathe rationale underlying the choice of tags is described on pages 321 of greene and rubin francis and kucera report that this algorithm correctly tagged approximately 77 of the million words in the brown corpus although this accuracy is substantially lower than that reported by klein and simmons it should be remembered that greene and rubin were the first to attempt so large and varied a sampletaggit divides the task of category assignment into initial tagging and disambiguationtagging is carried out as follows first the program consults an exception dictionary of about 3000 wordsamong other items this contains all known closedclass wordsit then handles various special cases such as words with initial quotquot contractions special symbols and capitalized wordsthe word ending is then checked against a suffix list of about 450 stringsthe lists were derived from lexicostatistics of the brown corpusif taggit has not assigned some tag after these several steps quotthe word is tagged nn vb or jj that is as being threeways ambiguous in order that the disambiguation routine may have something to work withquot p 25after tagging taggit applies a set of 3300 context frame ruleseach rule when its context is satisfied has the effect of deleting one or more candidates from the list of possible tags for one wordif the number of candidates is reduced to one disambiguation is considered successful subject to human posteditingeach rule can include a scope of up to two unambiguous words on each side of the ambiguous word to which the rule is being appliedthis constraint was determined as follows in order to create the original inventory of context frame tests a 900sentence subset of the brown university corpus was tagged and its ambiguities were resolved manually then a program was run which produced and sorted all possible context frame rules which would have been necessary to perform this disambiguation automaticallythe rules generated were able to handle up to three consecutive ambiguous words preceded and followed by two nonambiguous words a constraint similar to klein and simmonshowever upon examination of these rules it was found that a sequence of two or three ambiguities rarely occurred more than once in a given contextconsequently a decision was made to examine only one ambiguity at a time with up to two unambiguously tagged words on either sidethe first rules created were the results of informed intuition p 32marshall describes the lob corpus tagging algorithm later named claws as quotsimilar to those employed in the taggit programquotthe tag 
set used is very similar but somewhat larger at about 130 tagsthe dictionary used is derived from the tagged brown corpus rather than from the untaggedit contains 7000 rather than 3000 entries and 700 rather than 450 suffixesclaws treats plural possessive and hyphenated words as special cases for purposes of initial taggingthe lob researchers began by using taggit on parts of the lob corpusthey noticed that while less than 25 of taggit context frame rules are concerned with only the immediately preceding or succeeding word these rules were applied in about 80 of all attempts to apply rulesthis relative overuse of minimally specified contexts indicated that exploitation of the relationship between successive tags coupled with a mechanism that would be applied throughout a sequence of ambiguous words would produce a more accurate and effective method of word disambiguation p 141the main innovation of claws is the use of a matrix of collocational probabilities indicating the relative likelihood of cooccurrence of all ordered pairs of tagsthis matrix can be mechanically derived from any pretagged corpusclaws used quota large proportion of the brown corpusquot 200000 words pp141 150the ambiguities contained within a span of ambiguous words define a precise number of complete sets of mappings from words to individual tagseach such assignment of tags is called a patheach path is composed of a number of tag collocations and each such collocation has a probability which may be obtained from the collocation matrixone may thus approximate each path probability by the product of the probabilities of all its collocationseach path corresponds to a unique assignment of tags to all words within a spanthe paths constitute a span network and the path of maximal probability may be taken to contain the quotbestquot tagsmarshall states that claws calculates the most probable sequence of tags and in the majority of cases the correct tag for each individual word corresponds to the associated tag in the most probable sequence of tagsquot but a more detailed examination of the pascal code for claws revealed that claws has a more complex definition of quotmost probable sequencequot than one might expecta probability called quotsumsuccprobsquot is predicated of each wordsumsuccprobs is calculated by looping through all tags for the words immediately preceding at and following a word for each tag triple an increment is added defined by getsucc returns the collocational probability of a tag pair get3seqfactor returns either 1 or a special value from the tagtriple list described belowdowngrade modifies the value of getsucc in accordance with rtps as described belowthe claws documentation describes sumsuccprobs as quotthe total value of all relationships between the tags associated with this word and the tags associated with the next wordfound by simulating all accesses to successors and order2vals which will be madequot the probability of each node of the span network is then calculated in the following way as a tree representing all paths through which the span network is built it appears that the goal is to make each tag probability be the summed probability of all paths passing through itat the final word of a span pointers are followed back up the chosen path and tags are chosen en routewe will see below that a simpler definition of optimal path is possible nevertheless there are several advantages of this general approach over previous onesfirst spans of unlimited length can be handled although earlier researchers 
have suggested that spans of length over 5 are rare enough to be of little concern this is not the casethe number of spans of a given length is a function of that length and the corpus size so long spans may be obtained merely by examining more textthe total numbers of spans in the brown corpus for each length from 3 to 19 are 397111 143447 60224 26515 114095128 2161 903 382 161 58 29 14 6 1 0 1graphing the logarithms computational linguistics volume 14 number 1 winter 1988 33 steven j derose grammatical category disambiguation by statistical optimization of these quantities versus the span length for each produces a nearperfect straight linesecond a precise mathematical definition is possible for the fundamental idea of clawswhereas earlier efforts were based primarily on ad hoc or subjectively determined sets of rules and descriptions and employed substantial exception dictionaries this algorithm requires no human intervention for setup it is a systematic processthird the algorithm is quantitative and analog rather than artificially discretethe various tests and frames employed by earlier algorithms enforced absolute constraints on particular tags or collocations of tagshere relative probabilities are weighed and a series of very likely assignments can make possible a particular a priori unlikely assignment with which they are associatedin addition to collocational probabilities claws also takes into account one other empirical quantity tags associated with words can be associated with a marker or indicates that the tag is infrequently the correct tag for the associated word indicates that it is highly improbablethe word disambiguation program currently uses these markers top devalue transition matrix values when retrieving a value from the matrix results in the value being halved in the value being divided by eight p 149thus the independent probability of each possible tag for a given word influences the choice of an optimal pathsuch probabilities will be referred to as relative tag probabilities or rtpsother features have been added to the basic algorithmfor example a good deal of suffix analysis is used in initial taggingalso the program filters its output considering itself to have failed if the optimal tag assignment for a span is not quotmore than 90 probablequotin such cases it reorders tags rather than actually disambiguatingon long spans this criterion is effectively more stringent than on short spansa more significant addition to the algorithm is that a number of tag triples associated with a scaling factor have been introduced which may either upgrade or downgrade values in the tree computed from the onestep matrixfor example the triple 1 be 2 adverb 3 pasttenseverb has been assigned a scaling factor which downgrades a sequence containing this triple compared with a competing sequence of 1 be 2 adverb 3pastparticipleadjective on the basis that after a form of be past participles and adjectives are more likely than a past tense verb p 146a similar move was used near conjunctions for which the words on either side though separated are more closely correlated to each other than either is to the conjunction itself pp146147for example a verbnoun ambiguity conjoined to a verb should probably be taken as a verbleech garside and atwell describe quotidiomtagquot which is applied after initial tag assignment and before disambiguationit was developed as a means of dealing with idiosyncratic word sequences which would otherwise because difficulty for the automatic tagging for example in 
order that is tagged as a single conjunctionthe idiom tagging program can look at any combination of words and tags with or without intervening wordsit can delete tags add tags or change the probability of tagsalthough this program might seem to be an ad hoc device it is worth bearing in mind that any fully automatic language analysis system has to come to terms with problems of lexical idiosyncrasyidiomtag also accounts for the fact that the probability of a verb being a past participle and not simply past is greater when the following word is quotbyquot as opposed to other prepositionscertain cases of this sort may be soluble by making the collocational matrix distinguish classes of ambiguitiesthis question is being pursuedapproximately 1 of running text is tagged by idiomtag marshall notes the possibility of consulting a complete threedimensional matrix of collocational probabilitiessuch a matrix would map ordered triples of tags into the relative probability of occurrence of each such triplemarshall points out that such a table would be too large for its probable usefulnessthe author has produced a table based upon more than 85 of the brown corpus it occupies about 2 megabytes also the mean number of examples per triple is very low thus decreasing accuracyclaws has been applied to the entire lob corpus with an accuracy of quotbetween 96 and 97quot p 29without the idiom list the algorithm was 94 accurate on a sample of 15000 words thus the preprocessor tagging of 1 of all tokens resulted in a 3 change in accuracy those particular assignments must therefore have had a substantial effect upon their context resulting in changes of two other words for every one explicitly taggedbut claws is time and storageinefficient in the extreme and in some cases a fallback algorithm is employed to prevent running out of memory as was discovered by examining the pascal program codehow often the fallback is employed is not known nor is it known what effect its use has on overall accuracysince claws calculates the probability of every path it operates in time and space proportional to the product of all the degrees of ambiguity of the words in the spanthus the time is exponential in the span lengthfor the longest span in the brown corpus of length 18 the number of paths examined would be 1492992the algorithm described here depends on a similar empiricallyderived transitional probability matrix to that of claws and has a similar definition of quotoptimal pathquotthe tagset is larger than taggit though smaller than claws containing 97 tagsthe ultimate assignments of tags are much like those of clawshowever it embodies several substantive changesthose features that can be algorithmically defined have been used to the fullest extentother addons have been minimizedthe major differences are outlined belowfirst the optimal path is defined to be the one whose component collocations multiply out to the highest probabilitythe more complex definition applied by claws using the sum of all paths at each node of the network is not usedsecond volsunga overcomes the nonpolynomial complexity of clawsbecause of this change it is never necessary to resort to a fallback algorithm and the program is far smallerfurthermore testing the algorithm on extensive texts is not prohibitively costlythird volsunga implements relative tag probabilities in a more quantitative manner based upon counts from the brown corpuswhere claws scales probabilities by 12 for rtp 01 and by 18 for p 001 volsunga uses the rtp value itself as a factor in 
the equation which defines probabilityfourth volsunga uses no tag triples and no idiomsbecause of this manually constructing specialcase lists is not necessarythese methods are useful in certain cases as the accuracy figures for claws show but the goal here was to measure the accuracy of a wholly algorithmic tagger on a standard corpusinterestingly if the introduction of idiom tagging were to make as much difference for volsunga as for claws we would have an accuracy of 99this would be an interesting extensioni believe that the reasons for volsunga 96 accuracy without idiom tagging are the change in definition of quotoptimal pathquot and the increased precision of rtpsthe difference in tagset size may also be a factor but most of the difficult cases are major class differences such as noun versus verb rather than the fine distinction which the claws tagset adds such as several subtypes of proper nounongoing research with volsunga may she would more light on the interaction of these factorslast the current version of volsunga is designed for use with a complete dictionary thus unknown words are handled in a rudimentary fashionthis problem has been repeatedly solved via affix analysis as mentioned above and is not of substantial interest heresince the number of paths over a span is an exponential function of the span length it may not be obvious how one can guarantee finding the best path without examining an exponential number of paths the insight making fast discovery of the optimal path possible is the use of a dynamic programming solution dreyfus and law the two key ideas of dynamic programming have been characterized as quotfirst the recognition that a given whole problem can be solved if the values of the best solutions of certain subproblems can be determined and secondly the realization that if one starts at or near the end of the whole problem the subproblems are so simple as to have trivial solutionsquot p 5dynamic programming is closely related to the study of graph theory and of network optimization and can lead to rapid solutions for otherwise intractable problems given that those problems obey certain structural constraintsin this case the constraints are indeed obeyed and a lineartime solution is availableconsider a span of length n 5 with the words in the path denoted by v w x y zassume that v and z are the unambiguous bounding words and that the other three words are each three ways ambiguoussubscripts will index the various tags for each word w1 will denote the first tag in the set of possible tags for word w every path must contain v1 and z1 since v and z are unambiguousnow consider the partial spans beginning at v and ending at each of the four remaining wordsthe partial span network ending at w contains exactly three pathsone of these must be a portion of the optimal path for the entire spanso we save all three one path to each tag under w the probability of each path is the value found in the collocation matrix entry for its tagpair namely p for i ranging from one to threenext consider the three tags under word xone of these tags must lie on the optimal pathassume it is xlunder this assumption we have a complete span of length 3 for x is unambiguousonly one of the paths to xi can be optimaltherefore we can disambiguate v w xi under this assumption namely as max for all winow of course the assumption that x1 is on the optimal path is unacceptablehowever the key to volsunga is to notice that by making three such independent assumptions namely for xl x2 and x3 we exhaust 
all possible optimal pathsonly a path which optimally leads to one of x tags can be part of the optimal paththus when examining the partial span network ending at word y we need only consider three possibly optimal paths namely those leading to x1 x2 and x3 and how those three combine with the tags of yat most one of those three paths can lie along the optimal path to each tag of y so we have 32 or 9 comparisonsbut only three paths will survive namely the optimal path to each of the three tags under yeach of those three is then considered as a potential path to z and one is chosenthis reduces the algorithm from exponential complexity to linearthe number of paths retained at any stage is the same as the degree of ambiguity at that stage and this value is bounded by a very small value established by independent facts about the english lexiconno faster order of speed is possible if each word is to be considered at allas an example we will consider the process by which volsunga would tag quotthe man still saw herquotwe will omit a few ambiguities reducing the number of paths to 24 for ease of expositionthe tags for each word are shown in table 2the notation is fairly mnemonic but it is worth clarifying that ppo indicates an objective personal pronoun and pp the possessive thereof while vbd is a pasttense verbexamples of the various collocational probabilities are illustrated in table 3 the product of 123221 ambiguities gives 24 paths through this spanin this case a simple process of choosing the best successor for each word in order would produce the correct tagging but of course this is often not the caseusing volsunga method we would first stack quotthequot with certainty for the tag at certainquotnext we stack quotmanquot and look up the collocational probabilities of all tag pairs between the two words at the top of the stackin this case they will be p 186 and p 1we save the best path to each of mannn and manvbit is sufficient to save a pointer to the tag of quotthequot which ends each of these paths making backwardlinked lists now we stack quotstillquotfor each of its tags we choose either the nn or the vb tag of quotmanquot as better p is the best of p p 186 40 744 p p 1 22 22 thus the best path to stillnn is at nn nnsimilarly we find that the best path to stillrb is at nn rb and the best path to stillvb is at nn rbthis shows the overwhelming effect of an article on disambiguating an immediately following nounverb ambiguityat this point only the optimal path to each of the tags for quotstillquot is savedwe then go on to match each of those paths with each of the tags for quotsawquot discovering the optimal paths to sawnn and to sawvbthe next iteration reveals the optimal paths to herppo and herpp and the final one picks the optimal path to the period which this example treats as unambiguousnow we have the best path between two certain tags and can merely pop the stack following pointers to optimal predecessors to disambiguate the sequencethe period becomes the start of the next spaninitial testing of the algorithm used only transitional probability informationrtps had no effect upon choosing an optimal pathfor example in deciding whether to consider the word quottimequot to be a noun or a verb environments such as a preceding article or proper noun or a following verb or pronoun were the sole criteriathe fact that quottimequot is almost always a noun rather than a verb was not consideredaccuracy averaged 9293 with a peak of 937there are clear examples for which the use of rtps is 
important. One such case, which arises in the Brown Corpus, is "so that". "So" occurs 932 times as a qualifier, 479 times as a subordinating conjunction, and once as an interjection. The standard tagging for "so that" is "CS CS", but this is an extremely low-frequency collocation, lower than the alternative "UH CS". Barring strong contextual counterevidence, "UH CS" is the preferred assignment if RTP information is not used. By weighing the RTPs for "so", however, the "UH" assignment can be avoided. The LOB Corpus would use "CS CS" in this case, employing a special "ditto tag" to indicate that two separate orthographic words constitute a single syntactic word. Another example would be "so as to", tagged "TO TO TO". Blackwell comments that "it was difficult to know where to draw the line in defining what constituted an idiom, and some such decisions seemed to have been influenced by semantic factors. Nonetheless, IDIOMTAG had played a significant part in increasing the accuracy of the tagging suite, i.e. CLAWS" (p. 7). It may be better to treat this class of "idioms" as lexical items which happen to contain blanks, but RTPs permit correct tagging in some of these cases.

The main difficulty in using RTPs is determining how heavily to weigh them relative to collocational information. At first VOLSUNGA multiplied raw relative frequencies into the path probability calculations, but the ratios were so high in some cases as to totally swamp collocational data; thus normalization is required. The present solution is a simple one: all ratios over a fixed limit are truncated to that limit. Implementing RTPs increased accuracy by approximately 4%, to the range 95-97%, with a peak of 97.5% on one small sample; thus about half of the residual errors were eliminated. It is likely that tuning the normalization would improve this figure slightly more.

VOLSUNGA was not designed with psychological reality as a goal, though it has some plausible characteristics. We will consider a few of these briefly; this section should not be interpreted as more than suggestive. First, consider dictionary learning: the program currently assumes that a full dictionary is available. This assumption is nearly true for mature language users, but humans have little trouble even with novel lexical items, and generally speak of "context" when asked to describe how they figure out such words. As Ryder and Walker note, the use of structural analysis based on contextual clues allows speakers to compute syntactic structures even for a text such as Jabberwocky, where lexical information is clearly insufficient. The immediate syntactic context severely restricts the likely choices for the grammatical category of each neologism. VOLSUNGA can perform much the same task via a minor modification, even if a suffix analysis fails. The most obvious solution is simply to assign all tags to the unknown word and find the optimal path through the containing span as usual; since the algorithm is fast, this is not prohibitive. Better, one can assign only those tags with a non-minimal probability of being adjacent to the possible tags of neighboring words. Precisely calculating the mean number of tags remaining under this approach is left as a question for further research, but the number is certainly very low: about 3,900 of the 9,409 theoretically possible tag pairs occur in the Brown Corpus. Also, all tags marking closed classes may be eliminated from consideration. Also, since VOLSUNGA operates from left to right, it can always decide upon an optimum partial result and can predict a set of probable successors. For these reasons it is largely robust against ungrammaticality. Shannon performed experiments of a similar sort, asking human subjects to predict the next character of a partially presented sentence; the accuracy of their predictions increased with the length of the sentence fragment presented. The fact that VOLSUNGA requires a great deal of persistent memory for its dictionary, yet very little temporary space for processing, is appropriate; by contrast, the space requirements of CLAWS would overtax the short-term memory of any language user. Another advantage of VOLSUNGA is that it requires little inherent linguistic knowledge: probabilities may be acquired simply through counting instances of collocation, and the results will increase in accuracy as more input text is seen. Previous algorithms, on the other hand, have included extensive manually generated lists of rules or exceptions. An obvious difference between VOLSUNGA and humans is that VOLSUNGA makes no use whatsoever of semantic information. No account is taken of the high probability that in a text about carpentry "saw" is more likely a noun than in other types of text. There may also be genre- and topic-dependent influences upon the frequencies of various syntactic and hence categorial structures. Before such factors can be incorporated into VOLSUNGA, however, more complete dictionaries including semantic information of at least a rudimentary kind must be available.

VOLSUNGA requires a tagged corpus upon which to base its tables of probabilities. The calculation of transitional probabilities is described by Marshall; the entire Brown Corpus was analyzed in order to produce the tables used in VOLSUNGA. A complete dictionary was therefore available when running the program on that same corpus. Since the statistics comprising the dictionary and probability matrix used by the program were derived from the same corpus analyzed, the results may be considered optimal. On the other hand, the corpus is comprehensive enough so that use of other input text is unlikely to introduce statistically significant changes in the program's performance. This is especially true because many of the unknown words would be capitalized proper names, for which tag assignment is trivial (modulo a small percentage at sentence boundaries), or regular formations from existing words, which are readily identified by suffixes. Greene and Rubin note that their suffix list "consists mainly of Romance endings, which are the source of continuing additions to the language". A natural relationship exists between the size of a dictionary and the percentage of words in an average text which it accounts for. A complete table showing the relationship appears in Kucera and Francis (pp. 300-307); a few representative entries are shown in Table 4. The "types" column indicates how many vocabulary items occur at least "freq limit" times in the corpus; the "tokens" column shows how many tokens are accounted for by those types, and the "% tokens" column converts this number to a percentage. Table 5 lists the accuracy for each genre from the Brown Corpus. The total token count differs from Table 4 due to inclusion of non-lexical tokens such as punctuation. The figure shown deducts from the error count those particular instances in which the corpus tag indicates by an affix that the word is part of a headline, title, etc. Since the syntax of such structures is often deviant, such errors are less significant. The difference this makes ranges from 0.09% up to 0.64%, with an unweighted mean of 0.31%. Detailed breakdowns of the particular errors made for each genre exist in machine-readable form.

The high degree of lexical category ambiguity in languages such as English poses problems for parsing: specifically, until the categories of individual words have been established, it is difficult to construct a unique and accurate syntactic structure. Therefore a method for locally disambiguating lexical items has been developed. Early efforts to solve this problem relied upon large libraries of manually chosen context frame rules. More recently, however, work on the LOB Corpus of British English led to a more systematic algorithm based upon combinatorial statistics. This algorithm operates entirely from left to right and has no inherent limit upon the number of consecutive ambiguities which may be processed; its authors report an accuracy of 96-97%. However, CLAWS falls prey to other problems. First, the probabilistic system has been augmented in several ways, such as by pre-tagging of categorially troublesome "idioms". Second, it was not based upon the most complete statistics available. Third, and perhaps most significant, it requires non-polynomially large time and space. The algorithm developed here, called VOLSUNGA, addresses these problems. First, the various additions to CLAWS have been deleted. Second, the program has been calibrated by reference to 100% instead of 20% of the Brown Corpus, and has been applied to the entire corpus for testing. This is a particularly important test because the Brown Corpus provides a long-established standard against which accuracy can be measured. Third, the algorithm has been completely redesigned so that it establishes the optimal tag assignments in linear time, as opposed to exponential. Tests on the one million words of the Brown Corpus show an overall accuracy of approximately 96%, despite the non-use of auxiliary algorithms. Suggestions have been given for several possible modifications which might yield even higher accuracies. The accuracy and speed of VOLSUNGA make it suitable for use in preprocessing natural language input to parsers and other language understanding systems; its systematicity makes it suitable also for work in computational studies of language learning.
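The tables VOLSUNGA consults are obtained purely by counting a tagged corpus, as described above for the Brown Corpus. The following Python sketch shows one way such a dictionary, tag-pair probabilities, and relative tag probabilities could be derived; it is an illustration under assumptions (the function and variable names are invented here), not the program's actual code.

```python
from collections import Counter, defaultdict

def estimate_tables(tagged_corpus):
    """Derive lexicon, tag-pair probabilities, and RTPs from a tagged corpus.
    tagged_corpus: iterable of sentences, each a list of (word, tag) pairs."""
    tag_pair_counts = Counter()
    word_tag_counts = defaultdict(Counter)
    for sentence in tagged_corpus:
        prev = "<s>"                                   # sentence-initial pseudo-tag
        for word, tag in sentence:
            word_tag_counts[word.lower()][tag] += 1
            tag_pair_counts[(prev, tag)] += 1
            prev = tag
    # collocational probability of a tag given the preceding tag
    prev_totals = Counter()
    for (prev, _tag), n in tag_pair_counts.items():
        prev_totals[prev] += n
    collocation_prob = {pair: n / prev_totals[pair[0]]
                        for pair, n in tag_pair_counts.items()}
    # relative tag probabilities: how often each word takes each of its tags
    rtp = {(word, tag): n / sum(counts.values())
           for word, counts in word_tag_counts.items()
           for tag, n in counts.items()}
    lexicon = {word: list(counts) for word, counts in word_tag_counts.items()}
    return lexicon, collocation_prob, rtp
```

As the text notes, such tables should become more accurate as more tagged input is counted, with no hand-written rules involved.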
J88-1003
Grammatical Category Disambiguation by Statistical Optimization. Several algorithms have been developed in the past that attempt to resolve categorial ambiguities in natural language text without recourse to syntactic- or semantic-level information. An innovative method was recently developed by those working with the Lancaster-Oslo/Bergen Corpus of British English. This algorithm uses a systematic calculation based upon the probabilities of co-occurrence of particular tags. Its accuracy is high, but it is very slow, and it has been manually augmented in a number of ways. The effects upon accuracy of this manual augmentation are not individually known. The current paper presents an algorithm for disambiguation that is similar to CLAWS but that operates in linear rather than in exponential time and space, and which minimizes the unsystematic augments. Tests of the algorithm using the million words of the Brown Standard Corpus of English are reported; the overall accuracy is 96%. This algorithm can provide a fast and accurate front end to any parsing or natural language processing system for English.
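The linear-time behaviour claimed here comes from keeping, at each word, only the best-scoring partial path ending in each candidate tag, rather than enumerating every path through a span. The sketch below illustrates that dynamic-programming idea using the tables from the previous sketch; it is not DeRose's implementation, and the fallback tag list and the RTP truncation limit of 4.0 are assumptions made for the example.

```python
def tag_sentence(words, lexicon, collocation_prob, rtp, rtp_cap=4.0):
    """Illustrative linear-time optimal tag assignment with truncated RTP weights."""
    open_class_fallback = ["NN", "VB", "JJ", "RB"]     # assumed tag set for unknown words
    best = {"<s>": (1.0, [])}                          # tag -> (score, best path so far)
    for word in words:
        candidates = lexicon.get(word.lower(), open_class_fallback)
        new_best = {}
        for tag in candidates:
            weight = min(rtp.get((word.lower(), tag), 1.0), rtp_cap)  # truncated RTP
            score, path = max(
                (prev_score * collocation_prob.get((prev_tag, tag), 1e-6) * weight,
                 prev_path + [tag])
                for prev_tag, (prev_score, prev_path) in best.items())
            new_best[tag] = (score, path)
        best = new_best                                # only one path kept per tag
    return max(best.values())[1]                       # path of the best-scoring tag
```

Because only one path per tag survives at each position, the work grows linearly with sentence length (times the square of the tag-set size), rather than exponentially with the length of an ambiguous span.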
Temporal Ontology and Temporal Reference. "Two weeks later Bonadea had already been his lover for a fortnight." (Musil, Mann ohne Eigenschaften.) A semantics of temporal categories in language and a theory of their use in defining the temporal relations between events both require a more complex structure on the domain underlying the meaning representations than is commonly assumed. This paper proposes an ontology based on such notions as causation and consequence, rather than on purely temporal primitives. A central notion in the ontology is that of an elementary event-complex called a "nucleus". A nucleus can be thought of as an association of a goal event, or "culmination", with a "preparatory process" by which it is accomplished, and a "consequent state" which ensues. Natural-language categories like aspects, futurates, adverbials, and when-clauses are argued to change the temporal/aspectual category of propositions under the control of such a nucleic knowledge representation structure. The same concept of a nucleus plays a central role in a theory of temporal reference and of the semantics of tense, which we follow McCawley, Partee, and Isard in regarding as an anaphoric category. We claim that any manageable formalism for natural-language temporal descriptions will have to embody such an ontology, as will any usable temporal database for knowledge about events which is to be interrogated using natural language.

It is often assumed that the semantics of temporal expressions is directly related to the linear time concept familiar from high-school physics, that is, to a model based on the number line. However, there are good reasons for suspecting that such a conception is not the one that our linguistic categories are most directly related to. When-clauses provide an example of the mismatch between linguistic temporal categories and a semantics based on such an assumption. Consider the following examples, suggested by Ritchie (1979). To map the temporal relations expressed in these examples onto linear time, and to try to express the semantics of when in terms of points or intervals, would appear to imply either that when is multiply ambiguous, allowing these points or intervals to be temporally related in at least three different ways
or that the relation expressed between main and whenclauses is one of approximate coincidencehowever neither of these tactics explains the peculiarity of utterances like the following the unusual character of this statement seems to arise because the whenclause predicates something more than mere temporal coincidence that is some contingent relation such as a causal link or an enablement relation between the two eventsour knowledge of the world does not easily support such a link for at least if we do not indulge in the fiction that the natural universe is conspiring against the speakernor is the relation predicated between the two events by when the one that we normally think of as scientifically causal for when seems to predicate an intransitive relationconsider from and it would be unwarranted to conclude the state of affairs that is described in and this causal aspect of the sentence meaning must stem from the sensemeaning of when because parallel utterances using while just after at approximately the same time as and the like which predicate purely temporal coincidence are perfectly felicitouswe shall claim that the different temporal relations conveyed in examples and do not arise from any senseambiguity of when or from any quotfuzzinessquot in the relation that it expresses between the times referred to in the clauses it conjoins but from the fact that the meaning of when is not primarily temporal at allnor is it simply causal as example 3 showswe will argue instead that when has a single sensemeaning reflecting its role of establishing a temporal focus which we follow isard and longuethiggins in relating to reichenbach reference time the apparent diversity of meanings arises from the nature of this referent and the organisation of events and states of affairs in episodic memory under a relation we shall call contingency a term related but not identical to a notion like causality rather than mere temporal sequentialitythis contingent nontemporal relation on the representation of events in episodic memory also determines the ontology of propositions associated with linguistic expressions denoting events and statesit is to these that we turn firstpropositions conveyed by english sentences uttered in context can following vendler be classified into temporal or aspectual types partly on the basis of the tenses aspects and adverbials with which they can cooccur the term aspectual type refers to the relation that a speaker predicates of the particular happening that their utterance describes relative to other happenings in the domain of the discoursewhat the speaker says about those relations is of course quite distinct from what those relations objectively arein particular the speaker predications about events will typically be coloured by the fact that those events are involved in sequences that are planned predicted intended or otherwise governed by agencies of one kind or anotherfor want of some established term to cover this very general class of dependencies between events we will use the term contingencythus an utterance of is usually typical of what we will call a culmination informally an event which the speaker views as punctual or instantaneous and as accompanied by a transition to a new state of the worldthis new state we will refer to as the consequent state of the eventit does not necessarily include all events that are objectively and in fact consequencesit rather includes only those consequences that the speaker views as contingently related to other events that are under 
discussion say by causing them or by permitting them to occurfor reasons that are discussed in section 32 below expressions like these readily combine with the perfect as in the point may perhaps best be made by noting that there is another class of punctual expressions that is not normally associated with a consequent statefor exampleis not usually viewed as leading to any relevant change in the state of the worldit typifies what we call a point expressiona point is an event that is viewed as an indivisible whole and whose consequences are not at issue in the discoursewhich of course does not mean that de facto consequences do not existsuch expressions are evidently not the same as culminations for they are rather odd in combination with the perfect as in 7 harry has hiccuppedthe reasons for this will also be discussed belowsentences like 8harry climbed typify a third aspectual category which we will call for obvious reasons a processmost utterances of such sentences describe an event as extended in time but not characterised by any particular conclusion or culminationas was pointed out by vendler expressions like these can be combined with aforadverbial but not with an inadverbial in contrast 10harry climbed to the top typically describes a state of affairs that also extends in time but that does have a particular culmination associated with it at which a change of state takes placewe classify most utterances of such sentences as a fourth aspectual type called a culminated processculminated processes in contrast to ordinary processes combine readily with an inadverbial but not with aforadverbial11harry climbed all the way to the top in less than 45 minutesharry climbed all the way to the top for less than 45 minutesall of the above categories describe what common sense suggests we call eventsthat is happenings with defined beginnings and endswe distinguish these quothardedgedquot categories from a class of indefinitely extending states of affairs which equally commonsensically we call statesexample 12 typically describes one kind of state 12harry is at the toppart of the appeal of vendler account and such descendants as the present proposal is that it suggests that part of the meaning of any utterance of a sentence is one of a small number of temporalaspectual profiles distinguished on a small number of dimensionsin present terms the eventtypes can be distinguished on just two dimensions one concerned with the contrast between punctuality and temporal extension the other with the association with a consequent statethis subcategorisation can be summarized as in figure 1events states atomic extended understand love know resemble conseq culmination culminated recognize spot process win the race build a house eat a sandwich conseq point process hiccup run swim walk tap wink play the piano we have included in figure 1 examples of verbs which typically yield propositions of the relevant types and we shall assume that such verbs are lexically specified as bearing that typehowever it cannot be stressed too often that these aspectual profiles are properties of sentences used in a context sensemeanings of sentences or verbs in isolation are usually compatible with several vendlerian profiles as dowty and verkuyl have pointed out hence the frequent use of words like quottypicallyquot and quotreadilyquot abovethe details of this taxonomy and the criteria according to which utterances can be categorised are less important than the observation that each primitive entity of a given type such as the 
culmination event of harry reaching the top carries intimations of other associated events and states such as the process by which the culmination was achieved and the consequent state that followedwhat linguistic devices like tenses aspects and temporalaspectual adverbials appear to do is to transform entities of one type into these other contingently related entities or to turn them into composites with those related entitiesfor example we shall argue below that the progressive auxiliary demands that its argument be a process which it predicates as ongoingif it is combined with an event type that is not a process say with a punctual event as in harry was hiccupping then it will cause that original event to be reinterpreted as a process in this case the process of iteration or repetition of the basic eventsimilarly we shall argue that a perfect auxiliary demands a culmination predicating of the time referred to that the associated consequent state holdsthe notion of quottime referred toquot is related to reichenbach reference time in section 41 belowif the perfect is combined with an event description for which world knowledge provides no obvious culmination then the ensemble will tend to be anomalousso for example harry has reached the top is fine but the clock has ticked and harry has hummed to the extent that they are acceptable at all seem to demand rather special scenarios in which the tick of the clock and the mere act of humming have a momentousness that they usually lackthe phenomenon of change in the aspectual type of a proposition under the influence of modifiers like tenses temporal adverbials and aspectual auxiliaries is of central importance to the present accountwe shall talk of such modifiers as functions which quotcoercequot their inputs to the appropriate type by a loose analogy with typecoercion in programming languages thus the effect on meaning of the combination of the progressive with an expression denoting an atomic punctual event as in sandra was hiccupping occurs in two stages first the point proposition is coerced into a process of iteration of that pointonly then can this process be defined as ongoing and hence as a progressive statethese two stages might be represented as in the following diagram computational linguistics volume 14 number 2 june 1988 17 13 the temporalaspectual ontology that underlies the phenomenon of aspectual type coercion can be defined in terms of the transition network shown in figure 2 in which each transition is associated with a change in the content and where in addition the felicity of any particular transition for a given proposition is conditional on support from knowledge and context of discrete steps of climbing resting having lunch or whateverthe consequent state may also be compound most importantly it includes the further events if any that are in the same sequence of contingently related events as the culminationsimilarly the culmination itself may be a complex eventfor example we shall see below that the entire culminated process of climbing mteverest can be treated as a culmination in its own rightin this case the associated preparatory process and consequent state will be different ones to those internal to the culminated process itselfrather than attempting to explain this diagram from first principles we present below a number of examples of each transitionhowever it is worth noting first that many of the permissible transitions between aspectual categories illustrated in figure 2 appear to be related to a single 
elementary contingencybased event structure which we call a nucleusa nucleus is defined as a structure comprising a culmination an associated preparatory process and a consequent state2 it can be represented pictorially as in figure 3 any or all of these elements may be compound for example the preparation leading to the culmination of reaching the top of mteverest may consist of a number according to the present theory progressive auxiliaries are functions that require their input to denote a processtheir result is a type of state that we shall call a progressive state which describes the process as ongoing at the reference timethus the following sentence among other meanings that we shall get to in a moment can simply predicate of a present reference time that the process in question began at some earlier time and has not yet stopped 14the president is speakingif the input to a progressive is atomic then by definition it cannot be described as ongoinghowever as was noted in the introduction it may be coerced into a process by being iterated as in 15harry is hiccuppingthere is another route through the network in figure 2 where the point is coerced into a culmination ie as constituting an atomic event that does have consequences associated with itin this case the interpretation for parallels the one given for harry was reaching the top belowhowever this particular example is deliberately chosen in order to make that interpretation unlikelyif a progressive combines with a culminated process as in 16roger was running a mile then the latter must also first be coerced to become a processthe most obvious way to do this is to strip off the culmination and leave the preparatory process behindit is this process that is stated to be ongoing at the past reference timeanother possible coercion is to treat the entire culminated process as a point and to iterate itthis interpretation appears to be the one that is forced by continuing as in 17roger was running a mile last weekthis week he is up to threewhen a culmination expression like reach the top is used with a progressive it must be coerced to become a process in a slightly more complicated waythe most obvious path through the network in figure 2 from the culmination node to the process node involves first adding a preparatory process to the culmination to make it a culminated process then stripping off the culmination point as beforethus sentences like the following describe this preparatory process as ongoing at the past reference time 18harry was reaching the topagain an iterated reading is possible in principle but pragmatically unlikely hereas a result of the coercions implicit in the last two examples it is no longer asserted that the culminations in question ever in fact occurred but only that the associated preparatory processes didthus there is no contradiction in continuations that explicitly deny the culmination like 19 a harry was running a mile but he gave up after two laps b harry was reaching the top when he slipped and fell to the bottomthe fact that according to the present theory progressives coerce their input to be a process so that any associated culmination is stripped away and no longer contributes to truth conditions provides a resolution of the imperfective paradox without appealing to theoryexternal constructs like inertia worldsa perfect as in 20harry has reached the top is a function that requires its input category to be a culminationits result is the corresponding consequent statethe most obvious of these consequences 
for is that harry still be at the top although as usual there are other possibilitiesinformal evidence that this indeed is the function of the perfect can be obtained by noticing that perfects are infelicitous if the salient consequences are not in forcethus when i am on my way to get a cloth to clean up the coffee i accidentally spilled i can say 21i have spilled my coffeeafter cleaning up the mess however all the obvious consequences associated with this event seem to be overin that context it would be infelicitous to utter if the input to a perfect is not a culmination then the perfect will do its best to coerce it to be one subject to the limitations imposed by contextual knowledgeif the hearer cannot identify any relevant consequences as seems likely for the following example then coercion may simply fail in which case a perfect will be infelicitous as was noted earlier 22the star has twinkledto be able to use a culminated process expression like climbing mount everest with a perfect auxiliary it first has to be coerced into a culminationrequiring such a transition might seem unnecessary since a culminated process already implies the existence of a culmination with consequences to which the perfect could referbut consider figure 4 as a possible rendering of the nucleus associated with climbing mteverest climbing the mountain being at the top if a perfect could be used to single out the consequences of a nucleus associated with a culminated process expression then having climbed mteverest could be used to refer to the state of having reached the summit or being at the tophowever this does not seem to be the casea reporter who has managed to establish radio contact with a mountaineer who has just reached the top of mteverest is unlikely to ask 23have you climbed mteverest yetthe question rather seems to concern consequences of the culminated process as a wholewe capture this fact by making the perfect coerce the culminated process to become a culminationthe transition network allows this to happen if the entire event of climbing mteverest is treated as a single unit by making it a point so that it can become a culmination in its own rightthe perfect then delivers a rather different kind of consequent statea process like work in the garden can be coerced by a perfect auxiliary in essentially the same way the process of working possibly associated with a culmination point is treated as a single unitthis pointlike entity can then be used as the starting point for the construction of a new nucleus by treating it as a culmination in its own right provided that there are associated consequencesas a result a question like 24 can only be used felicitously if john working in the garden was part of a prearranged plan or a particular task john had to finish before something else could happen 24has john worked in the gardenthis account also explains the infelicity of a sentence like 25they have married yesterdaythe sentence could only refer to the consequences of getting married yesterday as opposed to getting married computational linguistics volume 14 number 2 june 1988 19 marc moens and mark steedman temporal ontology and temporal reference some other timebut most of what we think of as consequences of events are independent of the specific time at which the event occurredif a certain situation is a consequence of an event taking place at a particular time then a perfect auxiliary may be used to describe that eventthus a superstitious person believing that disastrous consequences are likely to 
result from actions performed on an unpropitious date can say 26they have married on friday the 13thbut even on saturday the 14th such a person still cannot use for it would not provide the essential information about the date thus flouting grice maxim of quantitythe account given here also explains the wellknown contrast between the infelicitous and its felicitous counterpart whatever causal sequence of events and their consequences associated with the individual we take to be the one we are currently talking about cannot be used felicitously to refer to a part of that sequence since all such causal sequences seem to be to do with his enduring consciousness and are therefore by definition overhowever can be uttered felicitously to refer to that same event because the relevant causal sequence must be one whose event and consequences apply to the institution of princeton university and many such consequences are still in trainthe hypothesis we advance that the perfect has only one temporal meaning has a precedent in the work of inoue 1979moens 1987 has extended the present analysis to show that the distinctions mccawley 1971 1981 and comrie 1976 draw between different kinds of perfects are nothing but different consequent states depending on the nature of the verbal expression and the particular core event it expresses and the specific kind of episodes in which our general knowledge tells us such core events typically occurforadverbials can only be used felicitously with process expressions 28john worked in the garden for five hoursthe resulting combination is a culminatedprocess expressionevidence for this can be found in the ease with which an expression like can be combined with a perfect unlike its process counterpart 29john has worked in the gardenjohn has worked in the garden for five hoursan expression like playing the sonata can readily occur with aforadverbial suggesting that its basic category by which we mean the type assigned in the lexicon and inherited by the proposition in the absence of any coercionis that of a processas a result carries no implication that sue finished playing the sonata a similar transition path is needed to make sense of examples like the following in which a culmination is coerced to become a point and then in turn coerced to become a process by being iterated the aspectual network would wrongly predict the existence of aforadverbial paradox parallel to the imperfective paradox if foradverbials were permitted to freely coerce culminated processes to be processesthe theory might seem to wrongly predict that below would mean roughly the same as however it is hard to find a context in which means anything at allthe reason for this lies in the way english syntax and morphology control coercion in the aspectual transition networkthe transition from culmination to consequent state for example demands the presence of a perfectsimilarly the arc from process to progressive state may be traversed only if a progressive auxiliary is present in the sentencefor other transitions such as the one resulting in an iterated process or an habitual state english has no explicit markers and they can be made freelythe transition from culminated process to process is not one that can be made freely in english but seems to require the presence of a progressive ingformas a result turning the culmination in into a process by first adding a preparatory process and then stripping off the culmination point is not allowedit is allowed in but only because the example contains the 
required progressive ingformthe only other transition path in the aspectual network that can account for the combination of a culmination with a foradverbial is the one that turns the culmination into a point and then iterates it to be a processthis interpretation is not felicitous for either given our knowledge about what constitutes winning a racehowever as with it is acceptable for 34nikki lauda won the monaco grand prix for several yearssometimes a foradverbial in combination with a culmination seems to describe a time period following the culmination rather than an iterated process 35john left the room for a few minutesthis adverbial is of a different kind however expressing intention rather than durationit is merely by accident that english uses the same device to convey these different meaningsin french or german for example the two constructions are clearly distinct as shown in the following translations of and not all aspectualtemporal adverbials expressing a time span have the same functional typeinadverbials for example coerce their input to be a culminated process expression as do related phrases like quotit took me two days to quot this means that combination with a culmination expression requires a transition to the culminated process nodeaccording to the aspectual network in figure 2 this transition is felicitous if the context allows a preparatory process to be associated with the culmination as in 38laura reached the top in two hoursthe inadverbial then defines the length of this preparatory periodsince the arcs describe how one must be able to view the world for transitions to be made felicitously it is obvious that there are expressions that will resist certain changesfor example it will be hard to find a context in which an inadverbial can be combined with a culmination expression like harry accidentally spilled his coffee since it is hard to imagine a context in which a preparatory process can be associated with an involuntary actindeed sentences like the following only seem to be made tolerable to the extent that it is possible to conjure up contexts in which the event only appears to be accidental 39in fifteen minutes harry accidentally spilled his coffeea similar problem arises in connection with the following example 40john ran in a few minutesthe process expression john ran has to be changed into a culminatedprocess expression before combination with the inadverbial is possibleone way in which the network in figure 2 will permit the change from a process to a culminated process is if the context allows a culmination point to be associated with the process itselfgeneral world knowledge makes this rather hard for a sentence like john ran except in the case where john habitually runs a particular distance such as a measured mileif the inadverbial had conveyed a specific duration such as in four minutes then the analysis would make sense as dowty has pointed outhowever the unspecific in a few minutes continues to resist this interpretationhowever another route is also possible for the process of john running can be made into an atomic point and thence into a culmination in its own rightthis culmination can then acquire a preparatory process of its ownwhich we can think of as preparing to run to become the culminated process which the adverbial requiresthis time there is no conflict with the content of the adverbial so this reading is the most accessible of the twosince the transition network includes loops it will allow us to define indefinitely complex 
temporalaspectual categories like the one evoked by the following sentence 41it took me two days to play the quotminute waltzquot in less than sixty seconds for more than an hourthe process expression play the minute waltz is coerced by the inadverbial into a culminated process including a culmination of finishing playing the minute waltzcombination with the foradverbial requires this expression to be turned into a processthe only possible route through the network being that through the point node and iteratingthe resulting culminatedprocess expression describes the iterated process of playing the minute waltz in less than sixty seconds as lasting for more than an hourthe expression it took me finally is like an inadverbial in that it is looking for a culminatedprocess expression to combine withit would find one in the expression to play the minute waltz in less than sixty seconds for more than an hour but combination is hampered by the fact that there is a conflict in the length of time the adverbials describein the case of the whole culminated process is instead viewed as a culmination in its own right knowledge concerning such musical feats then supplies an appropriate preparatory process that we can think of as practicingthe phrase it took me two days then defines the temporal extent of this preparatory process needed to reach the point at which repeatedly playing that piece of music so fast for such a considerable length of time became a newly acquired skillwe assume that the ordering of these successive coercions like others computational linguistics volume 14 number 2 june 1988 21 marc moens and mark steedman temporal ontology and temporal reference induced by the perfect and the progressive are under the control of syntaxthe aspects and temporalaspectual adverbials considered above all act to modify or change the aspectual class of the core proposition subject to the limits imposed by the network in figure 2 and by contextual knowledgehowever tenses and certain other varieties of adverbial adjuncts have a rather different charactertense is widely regarded as an anaphoric category requiring a previously established temporal referentthe referent for a present tense is usually the time of speech but the referent for a past tense must be explicitly establishedthis is done by using a second type of quottemporalquot adjunct such as once upon a time at five of the clock last saturday while i was cleaning my teeth or when i woke up this morningmost accounts of the anaphoric nature of tense have invoked reichenbach trinity of underlying times and his concept of the positional use of the reference timeunder these accounts temporal adjuncts establish a referent to which the reference time of a main clause and subsequent sametensed clauses may attach or refer in much the same way that various species of full noun phrases establish referents for pronouns and definite anaphors reichenbach account is somewhat inexplicit as far as extended noninstantaneous events goin particular he makes it look as though the reference time is always an instanthowever we believe that the following account is the obvious generalisation of his and probably what he intended anywayin reichenbach system a simple past tense of an atomic event is such that reference time and event time are identical while progressives and perfects are such that r and e are not identical3 the only coherent generalisation of his scheme to durative events is to maintain this pattern and assume that r and e are coextensive for an utterance 
like 42harry ran a mileit follows that r may be an extended period r may also be an extended period for a state such as a progressive although in this case the corresponding event time is still quite separate of coursewhat is the nature of this referent and how is it establishedthe anaphoric quality of tense has often been specifically compared to pronominal anaphora however in one respect the past tense does not behave like a pronoun use of a pronoun such as quotshequot does not change the referent to which a subsequent use of the same pronoun may refer whereas using a past tense mayin the following example the temporal reference point for the successive conjoined main clauses seems to move on from the time originally established by the adjunct 43at exactly five of the clock harry walked in sat down and took off his bootsnor is this just a matter of pragmatic inference other orders of the clauses are not allowed 44at exactly five of the clock harry took off his boots sat down and walked inthis fact has caused theorists such as dowty 1986 hinrichs 1984 and partee 1984 to stipulate that the reference time autonomously advances during a narrativehowever such a stipulation seems to be unnecessary since the amount by which the reference time advances still has to be determined by contextthe concept of a nucleus that was invoked above to explain the varieties of aspectual categories offers us exactly what we need to explain both the fact that the reference time advances and by how muchwe simply need to assume that a mainclause event such as harry walked in is interpreted as an entire nucleus complete with consequent state for by definition the consequent state comprises whatever other events were contingent upon harry walking in including whatever he did nextprovided that the context supports the idea that a subsequent main clause identifies this next contingent event then it will provide the temporal referent for that main clauseif the context does not support this interpretation then the temporal referent will be unchanged as in 45at five of the clock my car started and the rain stoppedin its ability to refer to temporal entities that have not been explicitly mentioned but whose existence has merely been implied by the presence of an entity that has been mentioned tense appears more like a definite np than like a pronoun as webber 1987 points out46i went to a party last nightthe music was wonderfulthe definite nature of tense together with the notion of the nucleus as the knowledge structure that tensed expressions conjure up explain the apparent ambiguity of whenclauses with which this paper begana whenclause behaves rather like one of those phrases that are used to explicitly change topic such as and your father in the following example a whenclause does not require a previously established temporal focus but rather brings into focus a novel temporal referent whose unique identifiability in the hearer memory is presupposedagain the focused temporal referent is associated with an entire nucleus and again an event main clause can refer to any part of this structure conditional on support from general or discourse specific knowledgefor example consider again example 1 with which we began once the core event of the whenclause has been identified in memory the hearer has two alternative routes to construct a complete nucleus a to decompose the core event into a nucleus and to make a transition to one of the components such as the preparatory activity of building or to the consequent state of 
having built the bridge or b to treat the entire event as a single culmination and compose it into a nucleus with whatever preparation and consequences the context provides for the activity of building a bridge and to make the transition to either one of thoseeither way once the nucleus is established the reference time of the main clause has to be situated somewhere within itthe exact location being determined by knowledge of the entities involved and the episode in questionso in example 48a the entire culminated process of building the bridge tends to become a culmination which is associated in a nucleus with preparations for and consequences of the entire business as in figure 5 they prepare they have built to build the bridge the drawing up of the plans is then for reasons to do with knowledge of the world situated in the preparatory phasein example b in contrast people tend to see the building of the bridge as decomposed into a quite different preparatory process of building a quite different culmination of completing the bridge and some consequences that we take to be also subtly distinct from those in the previous case as was argued in section 32the resulting nucleus is given in figure 6the use of the best materials is then as in situated in the preparatory processbut it is a different one this timethus a main clause event can potentially be situated anywhere along this nucleus subject to support from knowledge about the precise events involvedbut example 2 repeated here is still strange because it is so hard to think of any relation that is supported in this way 49when my car broke down the sun setthe whenclause defines a nucleus consisting of whatever process we can think of as leading up to the car breakdown the breakdown itself and its possible or actual consequencesit is not clear where along this nucleus the culmination of the sun set could be situated it is not easy to imagine that it is a functional part of the preparatory process typically associated with a breakdown and it is similarly hard to imagine that it can be a part of the consequent state so under most imaginable circumstances the utterance remains bizarrethe constraints when places on possible interpretations of the relation between subordinate and main clause are therefore quite strongfirst general and specific knowledge about the event described in the whenclause has to support the association of a complete nucleus with itsecondly world knowledge also has to support the contingency relation between the events in subordinate and main clausesas a result many constructed examples sound strange or are considered to be infelicitous because too much context has to be imported to make sense of themin all of the cases discussed so far the main clause has been an event of some varietywith stative main clauses as in the following examples the interpretation strategy is somewhat differentstatives show no sign of being related under what we are calling contingency presumably because contingency is by definition a relation over eventsin particular they do not enter in a causal or contingent relation with a whenclause the way corresponding sentences with events as main computational linguistics volume 14 number 2 june 1988 23 marc moens and mark steedman temporal ontology and temporal reference clauses dothey therefore merely predicate that the state in question holds at the time of the culmination 50when they built that bridge i was still a young lad my grandfather had been dead for several years my aunt was having an affair 
with the milkman my father used to play squashhowever a stative main clause can be turned into an event expression in that case a contingency relation is predicated to exist between the two eventsthus the following example seems to involve an inceptive event which begins the state of knowing 51when pete came in i knew that something was wrongsuch changes of type are similar to others discussed above but are not treated in the present paper5 referring to future events bennett and partee 1972 speaking of the difference between the present perfect and the simple past remark that one might expect a similar distinction among future tensesone could conceive of a construction parallel to the perfect whose event time would be in the future and whose reference time would be the time of speech conveying a notion of current relevance and there could be a construction parallel to the simple past with both reference and event times in the futurebennett and partee suggest that english is not as one would expect and follow reichenbach in saying that these two functions are conflated in a single device the modal future using willalthough it is true that the modal future shares features of both perfect and simple past it is nevertheless also the case that there are two classes of futurate expressions with properties parallel to each of the two past expressionsthe candidate for the role parallel to the perfect is the socalled futurate progressive 52robert was working on the speech project until he got a job offer from sussexas dowty 1979 1986 argues examples like can be both a past imperfective progressive and a past futurate progressive however the difference between the two interpretations seems to be a matter of pragmatic world knowledge rather than sensesemantics corresponding to the two different ways of constructing a nucleus the imperfective progressive decomposes the core event into a nucleus and makes a transition to the preparatory process indicating that it is in progress at the time of referencethe futurate progressive through the use of an adverbial signaling an event time posterior to the reference forces the whole event to be treated as a single unit which is then composed into a new nucleusthe progressive then indicates that the preparation leading up to the event as a whole was in progress at the time of reference the futurate progressive thus resembles the perfect in saying something about a reference time that is entirely separate from the event timethe candidate for the role parallel to the simple past among the futurates is to be found in the simple or nonmodal future sometimes called the tenseless future 53he leaves on tuesdaywhile the futurate progressive shares with the perfect the property of needing no nonpresent adverbial the nonmodal future cannot be used in this wayfor example in response to a question about the current state of affairs as specific as why are you being so rude to your boss these days or as general as what is new one may respond with an unanchored progressive much as with a perfect but one may not reply with an unanchored nonmodal future although an anchored one is quite all rightin its requirement for an established nonpresent reference time the nonmodal future resembles the past tensethe resemblance is supported by the following further observationsa when question concerning the past progressive is ambiguous reflecting the separation of reference time and event timeby contrast the nonmodal future does not really seem to occur in the past at all except of course 
in reported or indirect speech it just becomes indistinguishable from the simple pastit follows that can be answered with or but can only be answered with not with these similarities suggest the symmetry depicted informally in figure 7 between the perfect the simple past the futurate progressive and the nonmodal futurethe hatching again informally indicates the extent of the consequent state and the preparatory process associated with the perfect and the futurate progressive respectivelythat is not to imply that the two are the same sort of entity they are both states but of a different kindthe perfect is a consequent state the futurate progressive is a state derived from a preparatory processthis difference is indicated by the presence of a defined upper bound on the latterthe reichenbach diagram in figure 7 for the nonmodal future is of course the one that is ascribed to the modal future a construction to which we will return in a momentbefore doing so there are some problems remaining to be disposed ofif the futurate progressive is the true counterpart of the perfectwhy is it not subject to the same restriction against nonpresent adverbialsthe answer lies in the differences between preparatory processes and consequent states rather than in the aspects themselvesin both cases the adverbial must associate with the core event of leaving rather than the present reference timethus concerns the preparations for leaving tomorrow while concerns the consequences of leaving yesterday as was pointed out in section 32 most of what we think of as consequences of events are independent of absolute timethis makes it hard to think of consequences associated with john leaving yesterday as opposed to those associated with john leaving generallypreparatory processes do not share this property the preparatory process associated with john leaving tomorrow is conceivably very different from that associated with john leaving next weekeare john leftfuturate eare john is leaving john leaves tomorrowone other difference between the futurate categories and the past categories should be mentionedif the nonmodal future is the correlate of the simple past it should be possible to have nonmodal futures of perfects just as with pasts of perfectsbut vetter 1973 has pointed out that the following is odd 58the dodgers have finished for the season next sundaynevertheless such futurates do appear in the context of futurate temporal adjuncts as in the following example 59once the dodgers play the red sox next sunday they have finished for the seasonthe other english futurate expressions also fit into the scheme of figure 7the quotbe going toquot construction typified by 60i am going to buy a guitar clearly belongs with the progressives being distinguished from them by the nature of the processes that it implicates the quotbe toquot construction typified by 61i am to be queen of the may also seems to belong with the progressives although its modal character has been remarked by leech and palmerfinally where does the modal future fit into this schemea full analysis of the modals would go beyond the scope of this paper so the following remarks will be sketchythe modal future clearly has a reference time not coincident with speech time like the nonmodal future but unlike the futurate progressivenevertheless bennett and partee are quite right that the modal future says something about the present as well as the pastthe source of its relevance to the time of speech must therefore have to do with the relation between modals and 
the time of speechwe make the following tentative suggestion about this relationpalmer 1974 pointed out a systematic ambiguity within the epistemic modals as between a futurate and a strictly present meaning and steedman 1977 related this to the similar ambiguity of a presenttensed sentencewhat needs to be added seems to be the idea that these modals define properties of the time of speech and do not of themselves have anything to do with reference time and event time unlike the true tensed and aspectual auxiliariesmore specifically will says of the time of speech that it leads the speaker to infer a proposition must says something very similar but seems to leave the speaker out of it and says that the proposition follows from the state of the world at speech timemay says that the proposition is permitted by the i sare john has leftcomputational linguistics volume 14 number 2 june 1988 25 marc moens and mark steedman temporal ontology and temporal reference state of the world at speech timethese senses are exhibited below62 ayou will be my longlost brother willy ayou will marry a tall dark stranger byou must be my longlost brother willy byou must marry a tall dark stranger c you may be my longlost brother willy cyou may marry a tall dark strangerbut as has often been suggested before the future epistemic modals have nothing to do with future tense in the strict sense of the word4we have argued in this paper that a principled and unified semantics of naturallanguage categories like tense aspect and aspectualtemporal adverbials requires an ontology based on contingency rather than temporalitythe notion of nucleus plays a crucial role in this ontologythe process of temporal reference involves reference to the appropriate part of a nucleus where appropriateness is a function of the inherent meaning of the core expression of the coercive nature of cooccurring linguistic expressions and of particular and general knowledge about the area of discoursethe identification of the correct ontology is also a vital preliminary to the construction and management of temporal databaseseffective exchange of information between people and machines is easier if the datastructures that are used to organise the information in the machine correspond in a natural way to the conceptual structures people use to organise the same informationin fact the penalties for a bad fit between datastructures and human concepts are usually crippling for any attempt to provide natural language interfaces for database systemsinformation extracted from naturallanguage text can only be stored to the extent that it fits the preconceived formats usually resulting in loss of informationconversely such datastructures cannot easily be queried using natural language if there is a bad fit between the conceptual structure implicit in the query and the conceptual structure of the databasethe contingencybased ontology that we are advocating here has a number of implications for the construction and management of such temporal databasesrather than a homogeneous database of dated points or intervals we should partition it into distinct sequences of causally or otherwise contingently related sequences of events which we might call episodes each leading to the satisfaction of a particular goal or intentionthis partition will quite incidentally define a partial temporal ordering on the events but the primary purpose of such sequences is more related to the notion of a plan of action or an explanation of an event occurrence than to anything to do 
with time itselfit follows that only events that are contingently related necessarily have welldefined temporal relations in memorya first attempt to investigate this kind of system was reported in steedman 1982 using a program that verified queries against a database structured according to some of the principles outlined above a more recent extension of this work was reported in moens 1987events are stored as primitives in the database possibly but not necessarily associated with a time pointextended events are represented in terms of a pair of punctual events identifying their starting point as well as the point at which they end or culminate apart from the obvious accessibility relations of temporal precedence and simultaneity events can also enter into the relation of contingency introduced aboveit is significant that the relation used in the implementation is identical to the notion of causality used by lansky 1986 in an entirely different problem areashe developed a knowledge representation scheme for use in planners in which events are reified and modeled with an explicit representation of their temporal as well as causal relationsin this scheme a mechanism is provided for structuring events into socalled quotlocations of activityquot the boundaries of which are boundaries of quotcausalquot accessas a result two events with no causal relation between them cannot belong to the same location of activityas in the episodes introduced abovebecause we follow lansky in making the contingency relation intransitive we avoid certain notorious problems in the treatment of whenclauses and perfects which arise because the search for possible consequences of an event has to be restricted to the first event on the chain of contingenciesthus when is asserted repeated here as and it would be wrong to infer 63 awhen john left sue cried bwhen sue cried her mother got upset c when john left sue mother got upsetthe reason is exactly the same as the reason that it would be wrong to infer that sue mother got upset because john left and has nothing to do with the purely temporal relations of these eventsit should also be noted that the notion of contingency used here is weaker than the notion of causality used in other representation schemes if event a stands in a contingent relation to event b then an occurrence of a will not automatically lead to an occuirence of b john laying the foundations of the house is a prerequisite for or enables him to build the walls and roof but does not because it in the more traditional sense of the word and does not automatically or inevitably lead to him building the wallsthe transitions in the network are implemented as inference procedures in the databaseanswering a query involving the aspectual auxiliaries and adverbials discussed before consists of finding a matching event description in the database and checking its aspectual type if the event description is found not to have the required aspectual type it can be changed by means of the inference procedures provided such a change is supported by knowledge in the database about the event in questionmany of the apparent anomalies and ambiguities that plague current semantic accounts of temporal expressions in natural language stem from the assumption that a linear model of time is the one that our linguistic categories are most directly related toa more principled semantics is possible on the assumption that the temporal categories of tense aspect aspectual adverbials and of propositions themselves refer to a mental 
representation of events that is structured on other than purely temporal principles and to which the notion of a nucleus or contingently related sequence of preparatory process goal event and consequent state is centralwe thank jon oberlander ethel schuster and bonnie lynn webber for reading and commenting upon draftsparts of the research were supported by an edinburgh university graduate studentship an esprit grant to ccs univedinburgh a sloan foundation grant to the cognitive science program univpennsylvania and nsf grant iri10413 a02 aro grant daa629 84k006f and darpa grant n001485k0018 to cis univpennsylvaniaan earlier version of some parts of this paper was presented as moens and steedman 1987
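As a recap of the mechanism at the heart of the paper, the transition network of Figure 2 and the coercions it licenses can be approximated in code. The sketch below is one illustrative reading of that network, not the authors' implementation; the category names, the transition descriptions, and the context-support test passed in as a function are all assumptions made for the example.

```python
ASPECTUAL_TRANSITIONS = {
    ("culmination", "consequent_state"): "perfect: refer to the consequent state",
    ("culmination", "culminated_process"): "add a preparatory process",
    ("culminated_process", "process"): "strip the culmination (progressive required)",
    ("culminated_process", "point"): "treat the whole event as an atomic unit",
    ("process", "progressive_state"): "progressive: predicate the process as ongoing",
    ("process", "culminated_process"): "add a culmination (e.g. for an in-adverbial)",
    ("point", "process"): "iterate the point event",
    ("point", "culmination"): "supply relevant consequences",
}

def coerce(category, target, supported_by_context):
    """Return a shortest felicitous coercion path from `category` to `target`,
    or None if knowledge and context support no such path."""
    frontier = [[category]]
    while frontier:
        path = frontier.pop(0)                     # breadth-first search
        if path[-1] == target:
            return path
        for (src, dst), label in ASPECTUAL_TRANSITIONS.items():
            if src == path[-1] and dst not in path and supported_by_context(label):
                frontier.append(path + [dst])
    return None

# With a permissive context, coercing a culmination to the process a progressive
# needs yields culmination -> culminated_process -> process, matching the reading
# of "Harry was reaching the top" in which only the preparation is asserted.
```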
J88-2003
temporal ontology and temporal reference. a semantics of temporal categories in language and a theory of their use in defining the temporal relations between events both require a more complex structure on the domain underlying the meaning representations than is commonly assumed. this paper proposes an ontology based on such notions as causation and consequence rather than on purely temporal primitives. a central notion in the ontology is that of an elementary event-complex called a nucleus. a nucleus can be thought of as an association of a goal event, or culmination, with a preparatory process by which it is accomplished and a consequent state which ensues. natural-language categories like aspects, futurates, adverbials and when-clauses are argued to change the temporal/aspectual category of propositions under the control of such a nucleic knowledge representation structure. the same concept of a nucleus plays a central role in a theory of temporal reference and of the semantics of tense, which we follow mccawley, partee and isard in regarding as an anaphoric category. we claim that any manageable formalism for natural language temporal descriptions will have to embody such an ontology, as will any usable temporal database for knowledge about events which is to be interrogated using natural language. we describe temporal expressions relating to changes of state
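the nucleus just described can be given a toy rendering to make the coercion idea concrete. the python sketch below is a simplification invented for illustration, not the paper's formalism; in particular, treating the progressive as selecting the preparatory process and the perfect as selecting the consequent state is only one of the coercions discussed.

from dataclasses import dataclass

@dataclass
class Nucleus:
    # an elementary event-complex: preparatory process, culmination (goal event),
    # and the consequent state that ensues
    preparation: str
    culmination: str
    consequent: str

def progressive(n: Nucleus) -> str:
    # taken here to describe the preparatory process leading up to the culmination
    return n.preparation

def perfect(n: Nucleus) -> str:
    # taken here to describe the consequent state of the culmination
    return n.consequent

climb = Nucleus(preparation="climbing mt. mckinley",
                culmination="reaching the summit",
                consequent="having climbed mt. mckinley")

print(progressive(climb))   # 'climbing mt. mckinley'
print(perfect(climb))       # 'having climbed mt. mckinley'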
tense as discourse anaphor. in this paper i consider a range of english expressions and show that their context-dependency can be characterized in terms of two properties: 1) they specify entities in an evolving model of the discourse that the listener is constructing; 2) the particular entity specified depends on another entity in that part of the evolving "discourse model" that the listener is currently attending to. such expressions have been called anaphors. i show how tensed clauses share these characteristics usually just attributed to anaphoric noun phrases. this not only allows us to capture in a simple way the oft-stated but difficult-to-prove intuition that tense is anaphoric, but also contributes to our knowledge of what is needed for understanding narrative text. two types of expressions have previously been described in these terms: definite pronouns and certain definite noun phrases. researchers in computational linguistics and in artificial intelligence have called these expressions anaphors; linguists, however, have used this term somewhat differently. many have restricted its use to expressions that can be treated analogously to variables in a logical language. a view in linguistics that comes somewhat closer to the ai model can be found in a paper by sag and hankamer, who distinguish what they call deep anaphora from what they call surface anaphora. under the former they include personal pronouns, sentential "it" and null-complement anaphora, and under the latter verb phrase ellipsis, sluicing, gapping and stripping. the two types are distinguished by whether they make reference to the interpretation of an antecedent, i.e. some object in a model of the world constructed by the interpreter of the sentence or discourse, or whether they are interpreted with respect to a previous logical form. while their deep anaphors include pronouns, hankamer and sag do not consider other expressions like nps in discourse that might also be described in similar model-interpretive terms, nor do they describe in any detail how model interpretation works for the expressions they consider. to avoid confusion, then, i will use the term discourse anaphors for expressions that have these two properties. my main point will be that tensed clauses share these properties as well and hence should also be considered discourse anaphors. this will capture in a simple way the oft-stated but difficult-to-prove intuition that tense is anaphoric. to begin with, in section 2 i characterize the dependency of an anaphoric expression xb on a discourse entity ea in terms of an anaphoric
function a that itself depends on 1 the ontology of the specified entity ea and 2 discourse structure and its focusing effect on which ea entities the listener is attending towith respect to definite pronouns and nps this will essentially be a review of previous researchhowever i will argue that some indefinite nps should also be considered discourse anaphors in just this same wayin section 3 i will move on to tensed clauses and the notion of tense as anaphor a notion that goes back to at least leech in his monograph meaning and the english verb i will review previous attempts to make the notion precise attempts that require specialpurpose machinery to get them to workthen i will show in contrast that the notion can more simply be made precise in terms of a set of similar anaphoric functions that again depend on ontology and discourse structuremaking clear these dependencies contributes to our knowledge of what is needed for understanding narrative textthe notion specify that i am using in my definition of discourse anaphora is based on the notion of a discourse model earlier described in webber my basic premise is that in processing a narrative text a listener is developing a model of at least two things 1 the entities under discussion along with their properties and relationships to one another and 2 the events and situations under discussion along with their relationships to one another the representation as a whole i call the listener discourse model2 in this section i will focus on npsnps may evoke entities into the listener discourse model corresponding to individuals sets abstract individuals classes etc 3 an np which evokes a discourse entity also specifies itone way an np would be considered anaphoric by the above definition would be if it specified an entity ea in the model that had already been evoked by some other npthis basic arrangement is illustrated in examples 13 above and is shown in figure la5 formally one could say that there is an anaphoric function a whose value given the anaphoric noun phrase npb and the discourse entity ea is ea thatis a eathis can also be read as npb specifies ea by virtue of eadefinite pronouns are most often anaphoric in just this waythe other way an np would be considered a discourse anaphor would be if it used some existing discourse entity ea to evoke and specify a new discourse entity eb as in where npbthe drivermakes use of the entity associated with the bus mentioned in 5a to specify a new entitythe driver of that bushere the anaphoric function is of the form a ebin cooperative discourse there have to be constraints on the value of a since only npb is given explicitlyin short a cooperative speaker must be able to assume that the listener is able to both infer a possible a and single out ea in hisher evolving discourse mode16 i will consider each of these two types of constraints in turnspeakers assume listeners will have no problem with a when a eainferring a in other cases follows in large part from the ontology of the entities specified by npsie the ontology of our concepts of individuals sets mass terms generics etcwe view these as having parts having functional relations having roles etcthese need not be necessary parts relations roles etcour ontology includes possible parts relations etc and these too make it possible for the listener to infer an a such that a eb such inferences are discussed at length in the literature including clark and marshall 1981 and hobbs 19877 before closing this section there are two more things to say about 
npsfirst the above definition of discourse anaphor does not apply to all definite nps a definite np can be used to refer to something unique in the speaker and listener shared spatiotemporal context or their shared culture to the unique representative of a class to an entire class or set or to a functionally defined entity none of these would be considered discourse anaphoric by the above definitionsecondly though the definition implies that one must consider some indefinite nps to be discourse anaphors since they are essentially parasitic on a corresponding anaphoric definite np as in the following example bthe driver stopped the bus when the passengers began to sing quotaidaquotthe indefinite np a passenger in can be paraphrased as some one of the passengers and thus is parasitic on the anaphoric definite np the passengers mentioned explicitly in this does not imply that all indefinite nps are discourse anaphorsin mary met a boy with green hair or fred built an oak desk the indefinite nps do not need to be interpreted with respect to another discourse entity and some inferrable relationship with that entity in order to characterize the discourse entity they specifyin the next section i will discuss the second kind of constraint on the function a necessary for cooperative use of an anaphorconstraints on identifiable easthese involve notions of discourse structure and discourse focusbefore i close though i want to point to where i am going visavis the anaphoric character of tense and tensed clausesin contrast with previous accounts of tense as pronoun or tense as loosely contextdependent i am going to claim that like an anaphoric definite np the ideas presented in this section have been formulated and developed by barbara grosz and candy sidner originally independently and later in joint researchit is not a summary of their work8 it is limited to those of their ideas that are necessary to the concept of anaphor that i am advancing here and the concept of tense as anaphor in particularsidner thesis presents an account of understanding definite pronouns and anaphoric definite nps that reflects the ease with which people identify the intended specificand of definite pronouns as well as the intended specificand of anaphoric definite npswith respect to noun phrases sidner makes the same assumption about evoking specifying and cospecifying in a discourse model that i have made hereto understand anaphoric expressions sidner postulates three mechanisms the df corresponds to that entity the listener is most attending topronouns can most easily specify the current df slightly less easily a member of the pfl and with slightly more difficulty a stacked focusspecifying an entity pronominally can shift the listener attention to it thereby promoting it to be the next dfanything else specified in the clause ends up on the pfl ordered by its original syntactic positionas for anaphoric definite nps they can specify anything previously introduced or anything related in a mutually inferrable way with the current df or a member of the pflin terms of the constraints i mentioned above it is only those discourse entities that are either the df or on the pfl that can serve as ea for an anaphoric definite np9 in sidner dfs always are stacked for possible resumption laterin grosz and sidner it is an entire focus space that gets stacked but only when the 9purpose of the current ds is taken to dominate that of the one upcomingdominance relations are also specified further according to the type of discoursein grosz and 
sidner they are defined for taskrelated dialogues and argumentsfor example in arguments one ds purpose dominates another if the second provides evidence for a point made in the firstwhen the dominated dsp is satisfied its corresponding fs is poppedthis stack mechanism models the listener attentional statethe relations between dsps constitute the intentional structure of the textgetting a listener to resume a ds via the stack mechanism is taken to require less effort on a speaker part than returning to elaborate an argument or subtask description later onthe significance of sidner and grosz and sidner for the current enterprise is that computational linguistics volume 14 number 2 june 1988 63 bonnie lynn webber tense as discourse anaphor i reinterpret this in the current framework in terms of the anaphoric function awithin a discourse segment the entity that is the df is the most likely eaover the discourse segment other discourse entities in the segment focus space may in turn become dfwith a change in discourse segment however the df can change radically to an entity in the focus space associated with the new segmentto hint again at what is to come in section 32 i will propose a temporal analogue of df which i have called temporal focus in section 33 i will show how gradual movements of the tf are tied in with the ontology of what a tensed clause specifiesie an ontology of events and situationswhile more radical movements reflect the effect of discourse structure on tftense may not seem prima facie anaphoric an isolated sentence like john went to bed or i met a man who looked like a basset hound appears to make sense in a way that a standalone he went to bed or the man went to bed does noton the other hand if some time or event is established by the context tense will invariably be interpreted with respect to it as in in each case the interpretation of john going to bed is linked to an explicitly mentioned time or eventthis is what underlies all discussion of the anaphoric quality of tensethe assumption that tense is anaphoric goes back many years although it is not a universally held belief leech seems to express this view in his meaning and the english verb 63 indefinite time whereas the present perfect in its indefinite past sense does not name a specific point of time a definite point of orientation in the past is normally required for the appropriate use of the simple past tensethe point of orientation may be specified in one of three ways by an adverbial express of timewhen by a preceding use of a past or perfect tense and by implicit definition ie by assumption of a particular time from context73 the past perfect tense has the meaning of pastinthepast or more accurately a time further in the past seen from the viewpoint of a definite point of time already in the pastthat is like the simple past tense the past perfect demands an already established past point of reference leech did not elaborate further on how reference points are used in the interpretation of simple past tense and past perfect tense or on what has become the main problem in the semantics and pragmatics of tense reconciling the forward movement of events in narratives with a belief in the anaphoric character of tensethe first explicit reference i have to tense being anaphoric like a definite pronoun is in an article by mccawley who said however the tense morpheme does not just express the time relationship between the clause it is in and the next higher clauseit also refers to the time of the clause that it is in 
and indeed refers to it in a way that is rather like the way in which personal pronouns refer to what they stand formccawley also tried to fit in his view of tense as pronoun with the interpretation of tense in simple narrativeshere he proposed that the event described in one clause serves as the antecedent of the event described in the next but that it may be related to that event by being either at the same time or quotshortly afterquot ithe did not elaborate on when one relation would be assumed and when the otherpartee also noted the similarities between tense and definite pronounshowever she subsequently recognized that taking simple past tense as directly analogous with pronouns was incompatible with the usual forward movement of time in the interpretation in a sequence of sentences denoting events her response was a modification of the claim that tense is anaphoric saying i still believe it is reasonable to characterize tense as anaphoric or more broadly as contextdependent but i would no longer suggest that this requires them to be viewed as referring to times as pronouns refer to entities or to treat times as arguments of predicates the particular contextdependent process she proposes for interpreting tensed clauses follows that of hinrichs 1986 briefly described belowthe examples presented above to illustrate the anaphoric quality of tense were all simple pasthowever as leech notes the past perfect also makes demands on having some reference point already estabbonnie lynn webber tense as discourse anaphor lished in the contextthus it cannot be in terms of the event described in a tensed clause that tense is anaphoricinstead several people have argued that it is that part of tense called by reichenbach the point of reference that is anaphoricthis can be seen by considering the following example 8 a john went to the hospital bhe had twisted his ankle on a patch of iceit is not the point of the event of john twisting his ankle that is interpreted anaphorically with respect to his going to the hospitalrather it is the rt of the second clause its et is interpreted as prior to that because the clause is in the past perfect i will now review briefly hinrichs proposal as to how tensed clauses are interpreted in context in order to contrast it with the current proposalin hinrichs 1986 hinrichs makes the simplifying assumption that in a sequence of simple past sentences the temporal order of events described cannot contradict the order of the sentencesthis allows him to focus on the problem of characterizing those circumstances in which the event described by one sentence follows that described by the previous one and when it overlaps it 9the elderly gentleman wrote out the check tore it from the book and handed it to costain10mr darby slapped his forehead then collected himself and opened the door againthe brush man was smiling at him hesitantlyhinrichs bases his account on the aktionsart of a tensed clause assuming an initial reference point in a discourse the event described by a tensed clause interpreted as an accomplishment or achievement will be included in that reference point and will also introduce a new reference point ordered after the old oneevents associated with the other aktionsarten include the current reference point in the event timethis means that given a sequence of two clauses interpreted as accomplishments or achievements their corresponding events will follow one another on the other hand given a sequence with at least one tensed clause interpreted as an activity or 
state their corresponding events will be interpreted as overlapping each other hinrichs relates his reference point to that of reichenbachhowever hinrichs notion and reichenbach differ with respect to the time of the event described in the tensed clausewhile reichenbach talks about et and rt being the same for nonprogressive pasttense clauses in hinrichs account the reference point can fall after the event if a nonprogressive past is interpreted as an accomplishment or an achievementthis is necessary to achieve the forward movement of narrative that hinrichs assumes is always the case but it is not the same as reichenbach rtit also leads to problems in cases where this simplifying assumption is just wrongwhere in a sequence of simple past tenses there is what appears to be a quotbackwardquot movement of time as in 11 afor an encore john played the quotmoonlight sonataquot bthe opening movement he took rather tentatively but then where the second clause should be understood as describing the beginning of the playing event in more detail not as describing a subsequent eventin the account given below both forward and backward movement of time fall out of the anaphoric character of tensed clauses and the dependency of discourse anaphora on discourse structurequot with that background i will now show how tensed clauses share the two properties i set out in section 1 and hence are further examples of discourse anaphora to do this i need to explain the sense in which tensed clauses specify and the way in which that specification can depend on another element in the current contextrecall that i presume that a listener developing discourse model represents both the entities being discussed along with their properties and relations and the events and situations being discussed along with their relationships with anotherfor the rest of this paper i want to ignore the former and focus on the latterthis i will call eventsituation structure or es structureit represents the listener best effort at interpreting the speaker ordering of those events and situations in time and spaceone problem in text understanding then is that of establishing where in the evolving es structure to integrate the event or situation description in the next clausein this framework a tensed clause cb provides two pieces of semantic information a description of an event or situation and a particular configuration of et rt and point of speech from cb particular configuration of et rt and stboth the characteristics of eb and the configuration of et rt and st are critical to my account of tense as discourse anaphorthe event ontology i assume follows that of moens and steedman and of passonneau both propose that people interpret events as having a tripartite structure consisting of a preparatory phase a culmination and a consequent phase as in figure 2this tripartite structure permits a uniform account to be given of aspectual types in english and of how the interpretation of temporal adverbials interacts with the interpretation of tense and aspectfor example the coercion of clauses from one interpretation to another is defined in terms of which parts of a nucleus they select and how those parts are describedi2 the etrtst configuration is significant in that like steedman 1982 dowty 1986 hinrichs 1986 and partee 1984 i take rt as the basis for anaphorato indicate this i single it out as an independent argument to anaphoric functions here labelled 0in particular the following schema holds of a clause cb linked anaphorically to an event 
ea through its rt the relationship between eb and ea then falls out as a consequence of 1 the particular etrtst configuration of cb and 2 the particular function 0 involvedin this case the relationship between eb and e then depends on the configuration of rtb and etbif etb rtb then eb is taken to coincide in some way with eathis is shown in figure 3aif etb rtb eb is taken to precede eathis is shown in figure 3dalternatively 0 may embody part of the tripartite ontology of events mentioned earlier 13prep links rtb to the preparatory phase of ea ie pprep eb while b conseq links rtb to the consequent phase of ea ie is simple past etb rtbgiven 130 eb then eb is interpreted as coextensive with eathis is illustrated in figure 4example 8 illustrates the case 30 where etb rtb be described as having bought some flowersthis is shown in figure 78 a john went to the hospital bhe had twisted his ankle on a patch of iceclause 8a evokes an entity ea describable as john going to the hospitalsince 8b is past perfect etb rtbthus if po eb the event eb described by 8b is taken to be prior to eaas moens steedman point out the consequences of an event described with a perfect tense are still assumed to holdhence the overlap shown in figure 5 the next example illustrates conseq 13 a john went into the florist shop bhe picked out three red roses two white ones and one pale pinkclause 13a evokes an entity ea describable as john going into a flower shopsince clause 13b is simple past etb rtbthus given pconseq eb event eb is taken as being part of the consequent phase of eathat is john picking out the roses is taken as happening after his going into the florist shopthis is shown in figure 6the next example illustrates the case of pprep to summarize i have claimed that 1 the notion of specification makes sense with respect to tensed clauses 2 one can describe the anaphoric relation in terms of the rt of a tensed clause cb its etrt configuration and an existing event or situation entity eathat is p eb and 3 there are three 13 functionsone po linking rtb to ea itself the other two embodying parts of a tripartite ontology of eventsin the next section i will discuss constraints on the second argument to pthat is constraints on which entities in the evolving es structure the specification of a tensed clause can depend onrecall from section 22 that sidner introduced the notion of a dynamically changing discourse focus to capture the intuition that at any point in the discourse there is one discourse entity that is the prime focus of attention and that is the most likely specificand of a definite pronounin parallel i propose a dynamically changing temporal focus to capture a similar intuition that at any point in the discourse there is one entity in es structure that is most attended to and hence most likely to stand in an anaphoric relation with the rt of the next clausethat is 13 ebif cb is interpreted as part of the current discourse segment after its interpretation there are three possibilities these relationships which i will call maintenance and local movement of the tf correspond to sidner df moving gradually among the discourse entities in a discourse segmentmore radical movement of tf correspond to changes in discourse structurein cases involving movements into and out of an embedded discourse segment either 1 the tf will shift to a different entity in es structureeither an existing entity or one created in recognition of an embedded narrative or 2 it will return to the entity previously labeled tf after completing 
an embedded narrativesuch movements are described in section 332other movements signaled by temporal adverbials and when clauses are not discussed in this paper14the following pair of examples illustrate maintenance and local movement of tf within a discourse segment and its link with es structure constructionthe first i discussed in the previous section to illustrate el conseq the second is a variation on that example first consider example 13the first clause evokes an event entity ea describable as john going into the florist shopsince its tense is simple past ea is interpreted as prior to stsince it begins the discourse its status is special visavis both definite nps and tensed clausesthat is since no previous tf will have been established yet the listener takes that entity ea to serve as tf5 this is shown in figure 8 partee and dowty were out to achievehere it falls out simply from the discourse notion of a tf and from the particular anaphoric function pconseq 16 now consider example 15 whose first clause is the same as example 13a and hence would be processed in the same waythe tense of the next clause is past perfectas i noted above the only anaphoric function on rtisb and an event entity that makes sense for perfect tenses is 00that is given that perfect tenses imply et rt the event eb specified by will be interpreted as being prior to eamoreover since is past perfect the consequent phase of eb is assumed to still hold with respect to rt5bhence the consequent phase of eb overlaps eafinally since tf is associated with the event entity at rtb it remains at eaes structure at this point resembles figure 10 now if clause 13b is interpreted as being part of the same discourse segment as it must be the case that 13assume the listener takes p to be pconseq on the basis of world knowledgethat is 13conseqsince the tense of is simple past its rt and et coincidethus specifies a new entity eb located within the consequent phase of the tfthat is eaand hence after iti assume that following the computation of the anaphoric function tf becomes associated with the event entity located at rtbin this case it is eb and tf thereby moves forward as noted this is the gradual forward movement of simple narratives that hinrichs now clause 15c is the same as and tf is the same as it was at the point of interpreting thus not surprisingly 15c produces the same change in es now structure and in the tf as resulting in the diagram shown in figure 11to illustrate the effect of discourse structure on tf consider the following variation on example 15 which had the same structure visavis sequence of tensesthe first two clauses and are the same as in example 15 and lead to the same configuration of event entities in es structure but the most plausible interpretation of is where the quotsayingquot event is interpreted anaphorically with respect to the quotpromisingquot eventthat is where are taken together as an embedded discourse describing an event prior to john going to the floristto handle this i assume following grosz and sidner 1986 that when the listener recognizes an embedded discourse segment she stores the current tf for possible resumption later17 however i also assume the listener recognizes the embedding not when she first encounters a perfecttensed clause cb since it need not signal an embedded discourse but later when an immediately following simple past tense clause cc is most sensibly interpreted with respect to the event entity ei that cb evoked18 at this point the listener moves tf from its current 
position to eb caching the previous value for possible resumption laterfollowing this gross movement 13 will be computedif is then interpreted as b there will be conseq or pprepl a second movement of tf9 coming back to example 16 if clause 16c is taken as being part of a single discourse segment with she saying something would have to be interpreted with respect to the current tf john going to the floristthis is implausible under all possible interpretations of 020 however under the assumption that et is part of an embedded narrative the listener can a posteriori shift tf to el and consider the anaphoric relation with et as tfat this point the listener can plausibly take p to be b conseq based on world knowledgesince is simple past etc rt the quotsayingquot event ec is viewed as part of the consequent phase the quotpromisingquot event ebas in the first case tf moves to the event located at rtie to e this is shown roughly in figure 12notice that this involved two movements of tfonce in response to a perceived embedded segment and a second time in response to interpreting 3 as b rconseq now consider the following extension to d so he picked out three red roses two white ones and one pale pinkas before clauses 17bc form an embedded narrative but here the main narrative of john visit to the florist shop started at is continued at to handle this i again assume that tf behaves much like sidner df in response to the listener recognition of the end of an embedded narrative that is the cached tf is resumed and processing continues21 under this assumption clauses 17ac are interpreted as in the previous example recognizing clause 17d as resuming the embedding segment22 the previously cached tf is resumedagain assume that the listener takes the anaphoric function to be pconseq ed on the basis of world knowledgesince clause 17d is simple past the picking out roses event ed is viewed as part of the consequent phase and hence following the going into the florist shop eventthis is shown roughly in figure 13 now getting the listener to interpret a text as an embedded narrative requires providing himher with another event or situation that tf can move toone way in english is via a perfecttensed clause which computational linguistics volume 14 number 2 june 1988 69 bonnie lynn webber tense as discourse anaphor explicitly evokes another event temporally earlier than the one currently in focusanother way is by lexical indications of an embedded narrative such as verbs of telling and nps that themselves denote events or situations this is illustrated in example 18even though all its clauses are simple past clauses 18cd are most plausibly interpreted as indirect speech describing an event that has occurred prior to the quottellingquot eventi assume that in response to recognizing this kind of embedded narrative the listener creates a new node of es structure and shifts tf there caching the previous value of tf for possible resumption laterthe temporal location of this new node visavis the previous tf will depend on information in the tensed clause and on the listener world knowledgenotice that as with embedded narratives cued by the use of a perfect tense caching the previous tf for resumption later enables the correct interpretation of clause 18e which is most plausibly interpreted as following the telling about her sister eventan np denoting an event or situation can also signal the upcoming possibility of an embedded narrative that will elaborate that event or situation in more detail as in example 19in this 
case the original np and the subsequent clause will be taken as cospecifying the same thingthe question here is how and when tf moves c she spent five weeks above the arctic circle with two friends d the three of them climbed mtmckinleyafter interpreting clause 19b the tf is at the quottellingquot eventi claim that the np her trip to alaska while evoking a discourse entity does not affect the tfif clause 19c is interpreted as the start of an embedded narrative tf moves to the event entity ec it evokes at this point using additional reasoning the listener may recognize an anaphoric relation between clause 19c and the discourse entity evoked by her trip to alaskasupport for this rather than assuming that an eventdenoting np sets up a potential focus just as i claim a perfecttensed clause does comes from the reasoning required to understand the following parallel example where i would claim tf does not movei was talking with mary yesterdayshe told me about her trip to alaskashe had spent five weeks above the arctic circle with two friendsthe three of them had climbed mtmckinleyshe said that next year they would go for aconcaguathe event described in clause 20c is the same as that described in clause 19c and should be interpreted anaphorically with respect to the entity her trip to alaska in the same wayif this is the case however then the anaphoric link does not follow from the movement of tfexample 20 above illustrates one case of an anaphoric function on an np and a tensed clause specifically b where the entity ea has been evoked by an np rather than a clauseanother possibility is that a eb where npb is definite by virtue of an entity evoked by a clause rather than an npthat is eb is associated with either the preparatoryculminationconsequent structure of ea as in 21 a mary climbed mtmckinley bthe preparations took her longer than the ascent or its associated role structure as in 22 a john bought a television balthough he had intended to buy a 13quot bw set the salesman convinced him to buy a 25quot color backprojection job where the salesman fills a particular role in the buying eventnext notice that ambiguities arise when there is more than one way to plausibly segment the discourse as in the following example 23 a i told frank about my meeting with ira bwe talked about ordering a butterflyhere it is plausible to take clause 23b as the beginning of an embedded narrative whereby the quottalking aboutquot event is interpreted against a new node of es structure situated prior to the quottelling frankquot eventit is also plausible to take as continuing the current narrative whereby the quottalking aboutquot event is interpreted with respect to the quottelling frankquot eventfinally consider things from the point of view of generationif some event eb is part of the preparatory phase of some event ea and a description of ea has just been generated using the simple past tense then eb could be described using either the simple past as in example 24 or past perfect as in example 25in the case of example 24 the listenerreader recognizes that eb is part of the preparatory phase of ea and that eb therefore precedes eain the case of example 25 the listener would first recognize that eb precedes ea because of the past perfect but then recognize eb as part of the preparatory phase of eaon the other hand if eb simply precedes ea but a description of ea has been generated first then eb must be described with a past perfect simple past would not be sufficient 26 a john went to the hospital bhe had broken 
his ankle walking on a patch of ice. 27 a. john went to the hospital b. he broke his ankle walking on a patch of ice. in this paper i have presented a uniform characterization of discourse anaphora in a way that includes definite pronouns, definite nps and tensed clauses. in doing so i have argued that the successful use of discourse anaphors depends on two different things: 1) speakers' and listeners' beliefs about the ontology of the things and events being discussed, and 2) speakers' and listeners' focus of attention. the former implicates semantics in the explanation of discourse anaphora, the latter discourse itself. it is important that we as researchers recognize these as two separate systems, as the properties of discourse as an explanatory device are very different from those of semantics. this work was partially supported by aro grant daa2988490027, nsf grant mcs8219116cer and darpa grant n0001485k0018 to the university of pennsylvania, by darpa grant n0001485c0012 to unisys paoli research center, and an alvey grant to the centre for speech technology research, university of edinburgh. my thanks to becky passonneau, debby dahl, mark steedman, ethel schuster, candy sidner, barbara grosz, ellen bard, anne anderson, tony sanford, simon garrod and rich thomason for their helpful comments on the many earlier versions of this paper
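the interaction of tense, the anaphoric functions b0, bprep and bconseq, and the temporal focus described above can be made concrete with a short sketch. the following python is a schematic reconstruction, not the paper's formalism: the class and label names are invented here, the choice of anaphoric function is passed in by hand rather than inferred from world knowledge, and only the simple past and past perfect configurations of et and rt are modelled.

# a schematic reconstruction of the temporal-focus mechanism sketched above;
# data structures and ordering labels are simplifications made up for illustration.

class ESStructure:
    def __init__(self):
        self.relations = []     # (event_b, relation, event_a) triples
        self.tf = None          # current temporal focus
        self.tf_stack = []      # cached foci for embedded narratives

    def interpret(self, eb, tense, beta):
        """integrate event description eb, given its tense and an anaphoric
        function beta chosen (in reality) on the basis of world knowledge."""
        ea = self.tf
        if ea is None:                        # discourse-initial clause
            self.tf = eb
            return
        if beta == "conseq":                  # rt_b lies in the consequent phase of ea
            self.relations.append((eb, "after", ea))
        elif beta == "prep":                  # rt_b lies in the preparatory phase of ea
            self.relations.append((eb, "part-of-preparation-of", ea))
        else:                                 # beta_0: rt_b is tied to ea itself
            if tense == "past_perfect":       # et < rt, so eb precedes ea
                self.relations.append((eb, "before", ea))
            else:                             # simple past: et = rt, eb coincides with ea
                self.relations.append((eb, "coincides-with", ea))
        # tf moves to the event located at rt_b; for a past perfect under beta_0
        # rt_b stays at ea, so tf does not move forward
        self.tf = ea if (tense == "past_perfect" and beta == "0") else eb

    def push_embedded(self, new_tf):          # entering an embedded narrative
        self.tf_stack.append(self.tf)
        self.tf = new_tf

    def pop_embedded(self):                   # resuming the embedding segment
        self.tf = self.tf_stack.pop()

# example 15: florist shop / had promised / picked out roses
es = ESStructure()
es.interpret("john went to the florist shop", "simple_past", "0")
es.interpret("he had promised mary some flowers", "past_perfect", "0")
es.interpret("he picked out three red roses", "simple_past", "conseq")
print(es.relations)
print(es.tf)   # 'he picked out three red roses'

on this toy reading the past perfect clause is placed before the florist-shop event and leaves the temporal focus in place, while the following simple past clause is placed in the consequent phase of the florist-shop event and moves the focus forward, as in the discussion of figures 10 and 11 above.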
J88-2006
tense as discourse anaphor. in this paper i consider a range of english expressions and show that their context-dependency can be characterized in terms of two properties: 1) they specify entities in an evolving model of the discourse that the listener is constructing; 2) the particular entity specified depends on another entity in that part of the evolving discourse model that the listener is currently attending to. such expressions have been called anaphors. i show how tensed clauses share these characteristics usually just attributed to anaphoric noun phrases. this not only allows us to capture in a simple way the oft-stated but difficult-to-prove intuition that tense is anaphoric, but also contributes to our knowledge of what is needed for understanding narrative text.
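the focusing machinery that the account above borrows from sidner and from grosz and sidner (a discourse focus, a potential focus list ordered by syntactic position, and stacked foci kept for possible resumption) can be caricatured in a few lines. the python below is a deliberately simplified illustration, not sidner's algorithm; the update rule and the ranking of candidates are assumptions made here.

# a caricature of the focusing machinery used for discourse anaphora above;
# the ranking is a simplification, not sidner's actual algorithm.

class FocusState:
    def __init__(self):
        self.df = None        # discourse focus: the entity most attended to
        self.pfl = []         # potential focus list, ordered by syntactic position
        self.stack = []       # stacked foci / focus spaces for possible resumption

    def update(self, clause_entities):
        # simplification: the first entity mentioned becomes the df,
        # everything else in the clause ends up on the pfl
        if clause_entities:
            self.df = clause_entities[0]
            self.pfl = list(clause_entities[1:])

    def candidates_for_pronoun(self):
        # a pronoun most easily specifies the current df, slightly less easily
        # a member of the pfl, and with more difficulty a stacked focus
        ranked = []
        if self.df is not None:
            ranked.append(self.df)
        ranked.extend(self.pfl)
        ranked.extend(reversed(self.stack))
        return ranked

fs = FocusState()
fs.update(["the bus", "the driver", "the passengers"])
print(fs.candidates_for_pronoun())
# ['the bus', 'the driver', 'the passengers'] -- 'it' would most easily specify 'the bus'

the same push-and-resume discipline over the stack is what the temporal focus of section 33 reuses for embedded narratives.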
word association norms mutual information and lexicography 1982 for constructing language models for applications in speech recognition 2 smadja discusses the separation between collocates in a very similar way this definition y a rectangular window it might be interesting to consider alternatives that would weight words less and less as they are separated by more and more words other windows are also possible for example hindle has used a syntactic parser to select words in certain constructions of interest 4 although the goodturing method is more than 35 years old it is still heavily cited for example katz uses the in order to estimate trigram probabilities in the recognizer the goodturing method is helpful for trigrams that have not been seen very often in the training corpus the last unclassified line shoppers anywhere from 50 raises interesting problems syntactic quotchunkingquot shows that in spite its cooccurrence of line does not belong here an intriguing exercise given the lookup table we are trying construct is how to guard against false inferences such as that since tagged person here count as either a location accidental coincidences of this kind do not have a significant effect on the measure however although they do serve as a reminder of the probabilistic nature of the findings the word also occurs significantly in the table but on closer it is clear that this use of to time as something like a commodity or resource not as part of a time adjunct such are the pitfalls of lexicography the term word association is used in a very particular sense in the psycholinguistic literaturewe will extend the term to provide the basis for a statistical description of a variety of interesting linguistic phenomena ranging from semantic relations of the doctornurse type to lexicosyntactic cooccurrence constraints between verbs and prepositions this paper will propose an objective measure based on the information theoretic notion of mutual information for estimating word association norms from computer readable corporathe proposed measure the association ratio estimates word association norms directly from computer readable corpora making it possible to estimate norms for tens of thousands of wordsit is common practice in linguistics to classify words not only on the basis of their meanings but also on the basis of their cooccurrence with other wordsrunning through the whole firthian tradition for example is the theme that quotyou shall know a word by the company it keepsquot on the one hand bank cooccurs with words and expression such as money notes loan account investment clerk official manager robbery vaults working in a its actions first national of england and so forthon the other hand we find bank cooccurring with river swim boat east on top of the and of the rhine the search for increasingly delicate word classes is not newin lexicography for example it goes back at least to the quotverb patternsquot described in hornby advanced learner dictionary what is new is that facilities for the computational storage and analysis of large bodies of natural language have developed significantly in recent years so that it is now becoming possible to test and apply informal assertions of this kind in a more rigorous way and to see what company our words do keepthe proposed statistical description has a large number of potentially important applications including constraining the language model both for speech recognition and optical character recognition providing disambiguation cues for parsing 
highly ambiguous syntactic structures such as noun compounds conjunctions and prepositiona 1 phrases retrieving texts from large databases enhancing the productivity of computational linguists in compiling lexicons of lexicosyntactic facts and enhancing the productivity of lexicographers in identifying normal and conventional usageconsider the optical character recognizer applicationsuppose that we have an ocr device as in kahan et al and it has assigned about equal probability to having recognized farm and form where the context is either federal credit or some ofthe proposed association measure can make use of the fact that farm is much more likely in the first context and form is much more likely in the second to resolve the ambiguitynote that alternative disambiguation methods based on syntactic constraints such as part of speech are unlikely to help in this case since both form and farm are commonly used as nounsword association norms are well known to be an important factor in psycholinguistic research especially in the area of lexical retrievalgenerally speaking subjects respond quicker than normal to the word nurse if it follows a highly associated word such as doctorsome results and implications are summarized from reactiontime experiments in which subjects either classified successive strings of letters as words and nonwords or pronounced the stringsboth types of response to words were consistently faster when preceded by associated words rather than unassociated words much of this psycholinguistic research is based on empirical estimates of word association norms as in palermo and jenkins perhaps the most influential study of its kind though extremely small and somewhat datedthis study measured 200 words by asking a few thousand subjects to write down a word after each of the 200 words to be measuredresults are reported in tabular form indicating which words were written down and by how many subjects factored by grade level and sexthe word doctor for example is reported on pp98100 to be most often associated with nurse followed by sick health medicine hospital man sickness lawyer and about 70 more wordswe propose an alternative measure the association ratio for measuring word association norms based on the information theoretic concept of mutual informationthe proposed measure is more objective and less costly than the subjective method employed in palermo and jenkins the association ratio can be scaled up to provide robust estimates of word association norms for a large portion of the languageusing the association ratio measure the five most associated words are in order dentists nurses treating treat and hospitalswhat is quotmutual informationquot according to fano if two points x and y have probabilities p and p then their mutual information i is defined to be informally mutual information compares the probability of observing x and y together with the probabilities of observing x and y independently if there is a genuine association between x and y then the joint probability p will be much larger than chance p p and consequently i 0if there is no interesting relationship between x and y then p p p and thus i 0if x and y are in complementary distribution then p will be much less than p p forcing i 0in our application word probabilities p and p are estimated by counting the number of observations of x and y in a corpus f and f and normalizing by n the size of the corpusjoint probabilities p are estimated by counting the number of times that xis followed by y in a window of w 
wordsf and normalizing by n the window size parameter allows us to look at different scalessmaller window sizes will identify fixed expressions and other relations that hold over short ranges larger window sizes will highlight semantic concepts and other relationships that hold over larger scalestable 1 may help show the contrast2 in fixed expressions such as bread and butter and drink and drive the words of interest are separated by a fixed number of words and there is very little variancein the 1988 ap it was found that the two words are always exactly two words apart whenever they are found near each other that is the mean separation is two and the variance is zerocompounds also have very fixed word order but the average separation is closer to one word rather than twoin contrast relations such as manwoman are less fixed as indicated by a larger variance in their separationlexical relations come in several varietiesthere are some like refraining from that are fairly fixed others such as coming from that may be separated by an argument and still others like keeping from that are almost certain to be separated by an argumentthe ideal window size is different in each casefor the remainder of this paper the window size w will be set to five words as a compromise this setting is large enough to show some of the constraints between verbs and arguments but not so large that it would wash out constraints that make use of strict adjacency3 since the association ratio becomes unstable when the counts are very small we will not discuss word pairs with f 5an improvement would make use of tscores and throw out pairs that were not significantunfortunately this requires an estimate of the variance off which goes beyond the scope of this paperfor the remainder of this paper we will adopt the simple but arbitrary threshold and ignore pairs with small countstechnically the association ratio is different from mutual information in two respectsfirst joint probabilities are supposed to be symmetric p p and thus mutual information is also symmetric i ihowever the association ratio is not symmetric since f encodes linear precedence denotes the number of times that word x appears before y in the window of w words not the number of times the two words appear in either orderalthough we could fix this problem by redefining f to be symmetric we have decided not to do so since order information appears to be very interestingnotice the asymmetry in the pairs in table 2 illustrating a wide variety of biases ranging from sexism to syntaxsecond one might expect f f and f f but the way we have been counting this need not be the case if x and y happen to appear several times in the windowfor example given the sentence quotlibrary workers were prohibited from saving books from this heap of ruinsquot which appeared in an ap story on april 1 1988 f 1 and f 2this problem can be fixed by dividing f by w 1 2 from our association ratio scoresthis adjustment has the addif f doctors nurses 99 10 man woman 256 56 doctors lawyers 29 19 bread butter 15 1 save life 129 11 save money 187 11 save from 176 18 supposed to 1188 25 tional benefit of assuring that f f f n when is large the association ratio produces very credible results not unlike those reported in palermo and jenkins as illustrated in table 3in contrast when 0 the pairs are less interesting 3 tend to be interesting and pairs with smaller are generally notone can make this statement precise by calibrating the measure with subjective measuresalternatively one could make 
estimates of the variance and then make statements about confidence levels eg with 95 confidence p p pif 0 we would predict that x and y are in complementary distributionhowever we are rarely able to observe i 0 because our corpora are too small suppose for example that both x and y appear about 10 times per million words of textthen p p 105 and chance is p p 101thus to say that is much less than 0 we need to say that p is much less than 101 a statement that is hard to make with much confidence given the size of presently available corporain fact we cannot observe a probability less than 1n 107 and therefore it is hard to know if is much less than chance or not unless chance is very large and we need to estimate the standard deviation using a method such as good 4although the psycholinguistic literature documents the significance of nounnoun word associations such as doctor nurse in considerable detail relatively little is said about associations among verbs function words adjectives and other nonnounsin addition to identifying semantic relations of the doctornurse variety we believe the association ratio can also be used to search for interesting lexicosyntactic relationships between verbs and typical argumentsadjunctsthe proposed association ratio can be viewed as a formalization of sinclair argument how common are the phrasal verbs with setset is particularly rich in making combinations with words like about in up out on off and these words are themselves very commonhow likely is set off to occurboth are frequent words set occurs approximately 250 times in a million words and off occurs approximately 556 times in a million words t he question we are asking can be roughly rephrased as follows how likely is off to occur immediately after setthis is 000025 x 000055 p p which gives us the tiny figure of 00000001375 the assumption behind this calculation is that the words are distributed at random in a text at chance in our terminologyit is obvious to a linguist that this is not so and a rough measure of how much set and off attract each other is to compare the probability with what actually happens set off occurs nearly 70 times in the 73 million word corpus p 70 p pthat is enough to show its main patterning and it suggests that in currentlyheld corpora there will be found sufficient evidence for the description of a substantial collection of phrases using sinclair estimates p 250 x 106 p 556 x 106 and p 70 we would estimate the mutual information to be i log2 p1 p 61in the 1988 ap corpus we estimate p 13046n p 20693n and p 463ngiven these estimates we would compute the mutual information to be i 62in this example at least the values seem to be fairly comparable across corporain other examples we will see some differences due to samplingsinclair corpus is a fairly balanced sample of text the ap corpus is an unbalanced sample of american journalesethis association between set and off is relatively strong the joint probability is more than 26 64 times larger than chancethe other particles that sinclair mentions have association ratios that can be seen in table 4the first three set up set off and set out are clearly associated the last three are not so clearas sinclair suggests the approach is well suited for identifying the phrasal verbs at least in certain casesphrasal verbs involving the preposition to raise an interesting problem because of the possible confusion with the infinitive marker towe have found that if we first tag every word in the corpus with a part of speech using a method 
such as church and then measure associations between tagged words we can identify interesting contrasts between verbs associated with a following preposition toin and verbs associated with a following infinitive marker toto in preposition to infinitive marker vb bare verb vbg verb ing vbd verb ed vbz verb s vbn verb enthe association ratio identifies quite a number of verbs associated in an interesting way with to restricting our attention to pairs with a score of 30 or more there are 768 verbs associated with the preposition toin and 551 verbs with the infinitive marker totothe ten verbs found to be most associated before toin are thus we see there is considerable leverage to be gained by preprocessing the corpus and manipulating the inventory of tokenshindle has found it helpful to preprocess the input with the fidditch parser to identify associations between verbs and arguments and postulate semantic classes for nouns on this basishindle method is able to find some very interesting associations as tables 5 and 6 demonstrateafter running his parser over the 1988 ap corpus hindle found n 4112943 subjectverb object triplesthe mutual information between a verb and its object was computed from these 4 million triples by counting how often the verb and its object were found in the same triple and dividing by chancethus for example disconnect v and telephone 0 have a joint probability of 7inin this case chance is 84n x 481n because there are 84 svo triples with the verb disconnect and 481 svo triples with the object telephonethe mutual information is log2 7n 948similarly the mutual information for drink v beer 0 is 99 log2 29n or on small corpora of only a million words or so which are reliably informative for only the most common uses of the few most frequent words of englishthe computational tools available for studying machinereadable corpora are at present still rather primitivethese are concordancing programs which are basically kwic indexes with additional features such as the ability to extend the context sort leftward as well as rightward and so onthere is very little interactive softwarein a typical situation in the lexicography of the 1980s a lexicographer is given the concordances for a word marks up the printout with colored pens to identify the salient senses and then writes syntactic descriptions and definitionsalthough this technology is a great improvement on using human readers to collect boxes of citation index cards it works well if there are no more than a few dozen concordance lines for a word and only two or three main sense divisionsin analyzing a complex word such as take save or from the lexicographer is trying to pick out significant patterns and subtle distinctions that are buried in literally thousands of concordance lines pages and pages of computer printoutthe unaided human mind simply cannot discover all the signifi195 svo triples respectively they are found together in 29 of these triplesthis application of hindle parser illustrates a second example of preprocessing the input to highlight certain constraints of interestfor measuring syntactic constraints it may be useful to include some part of speech information and to exclude much of the internal structure of noun phrasesfor other purposes it may be helpful to tag items andor phrases with semantic labels such as person place time body part bad and so onlarge machinereadable corpora are only just now becoming available to lexicographersup to now lexicographers have been reliant either on citations collected by 
human table 6what can you do to a telephoneverb object mutual info joint freq sit_bylv telephone0 1178 7 disconnectiv telephone0 948 7 answeriv telephone0 880 98 hang_up1v telephone0 787 3 tap1v telephone0 769 15 pick_upiv telephone0 563 11 returnv telephone0 501 19 be_bylv telephone0 493 2 spotiv telephone0 443 2 repeat1v telephone0 439 3 placelv telephone0 423 7 receivelv telephone0 422 28 installiv telephone0 420 2 be_oniv telephone0 405 15 come_tolv telephone0 363 6 uselv telephone0 359 29 operatelv telephone0 316 4 rs sunday calling for greater economic reforms to maniac ion asserted that quot the postal service could then she mid the family hopes to e outofwork steelworkerquot because that does not quot we suspend reality when we say we will scientists has won the first round in an effort to about three children ma mining town who plot to gm executives say the shutdowns will innen as receiver instructed officials to try to the package which is to newly enhanced image as the moderate who moved to million offer from chaimun victor posner to help after telling a deliveryroom doctor not to try to h birthday tuesday cheered by those who fought to at he had formed an alliance with moslem rebels to basically we could we worked for a year to their estimative mirrors just like in wanime to ant of many who risked their own lives in order to we must increase the amount americans save china front poverty save enormous sums of money in contracting out individual c save enough for a down payment on a home save jobs that costs jobsquot save money by spending 10000 in wages for a public workt save one of egypt great treasures the decaying tomb of ft save the quotpit ponies quotdoomed lobe slaughtered save the automaker 500 million a year in operating costs a save the company rather than liquidate it and then declared save the country nearly 2 billion also includes a program save the country save the financially troubled company but said posner till save the infant by inserting a tube in its throat to help i save the majestic beaux arts architectural masterpiece save the nation from communism save the operating costs of the persbings and groundlaunch save the site at enormous expense to at quotmid leveillee save diem from drunken yankee brawlers quottam sank save those who were passengersquot cannot patterns let alone group them and rank them in order of importancethe ap 1987 concordance to save is many pages long there are 666 lines for the base form alone and many more for the inflected forms saved saves saving and savingsin the discussion that follows we shall for the sake of simplicity not analyze the inflected forms and we shall only look at the patterns to the right of save it is hard to know what is important in such a concordance and what is notfor example although it is easy to see from the concordance selection in figure 1 that the word quottoquot often comes before quotsavequot and the word quotthequot often comes after quotsavequot it is hard to say from examination of a concordance alone whether either or both of these cooccurrences have any significancetwo examples will illustrate how the association ratio measure helps make the analysis both quicker and more accuratethe association ratios in table 7 show that association norms apply to function words as well as content wordsfor example one of the words significantly associated with save is frommany dictionaries for example webster ninth new collegiate dictionary make no explicit mention of from in the entry for save although british 
learners dictionaries do make specific mention of from in connection with savethese learners dictionaries pay more attention to language structure and collocation than do american collegiate dictionaries and lexicographers trained in the british tradition are often fairly skilled at spotting these generalizationshowever teasing out such facts and distinguishing true intuitions from false intuitions takes a lot of time and hard work and there is a high probability of inconsistencies and omissionswhich other verbs typically associate with from and where does save rank in such a listthe association ratio identified 1530 words that are associated with from 911 of them were tagged as verbsthe first 100 verbs are refrainvb gleanedvbn stemsvbz stemmedvbd stemmingvbg rangingvbg stemmedvbn ranged vbn derivedvbn rangedvbd extortvb graduated vbd barredvbn benefitingvbg benefittedvbn benefitedvbn excusedvbd arisingvbg rangevb exempts vbz suffersvbz exemptingvbg benefitedvbd preventedvbd seepingvbg barredvbd prevents vbz sufferingvbg excludedvbn marksvbz profiting vbg recoveringvbg dischargedvbn reboundingvbg varyvb exemptedvbn separatevb banishedvbn withdrawingvbg ferryvb preventedvbn profitvb barvb excusedvbn barsvbz benefitvb emerges vbz emergevb variesvbz differvb removedvbn exemptvb expelledvbn withdrawvb stemvb separatedvbn judgingvbg adaptedvbn escapingvbg inheritedvbn differedvbd emergedvbd withheldvbd leakedvbn stripvb resultingvbg discouragevb preventvb withdrewvbd prohibitsvbz borrowingvbg preventingvbg prohibitvb resultedvbd precludevb divertvb distinguishvb pulledvbn fell vbn variedvbn emergingvbg suffervb prohibiting vbg extractvb subtractvb recovervb paralyzed vbn stolevbd departingvbg escapedvbn prohibited vbn forbidvb evacuatedvbn reapvb barringvbg removingvbg stolenvbn receivesvbzsave from is a good example for illustrating the advantages of the association ratiosave is ranked 319th in this list indicating that the association is modest strong enough to be important but not so strong that it would pop out at us in a concordance or that it would be one of the first things to come to mindif the dictionary is going to list save from then for consistency sake it ought to consider listing all of the more important associations as wellof the 27 bare verbs in the list above all but seven are listed in collins cobuild english language dictionary as occurring with fromhowever this dictionary does not note that vary ferry strip divert forbid and reap occur with fromif the cobuild lexicographers had had access to the proposed measure they could possibly have obtained better coverage at less costhaving established the relative importance of save from and having noted that the two words are rarely computational linguistics volume 16 number 1 march 1990 27 kenneth church and patrick hanks word association norms mutual information and lexicography adjacent we would now like to speed up the laborintensive task of categorizing the concordance linesideally we would like to develop a set of semiautomatic tools that would help a lexicographer produce something like figure 2 which provides an annotated summary of the 65 concordance lines for save from5 the save from pattern occurs in about 10 of the 666 concordance lines for savetraditionally semantic categories have been only vaguely recognized and to date little effort has been devoted to a systematic classification of a large corpuslexicographers have tended to use concordances impressionistically semantic theorists aiers and others have concentrated 
on a few interesting examples eg bachelor and have not given much thought to how the results might be scaled upwith this concern in mind it seems reasonable to ask how well these 65 lines for savefrom fit in with all other uses of save a laborious concordance analysis was undertaken to answer this questionwhen it was nearing completion we noticed that the tags that we were inventing to capture the generalizations could in most cases have been suggested by looking at the lexical items listed in the association ratio table for savefor example we had failed to notice the significance of time adverbials in our analysis of save and no dictionary records thisyet it should be rescuers who helped save the toddlerperson from an abandoned well member states to help save the pecinst from possible bankruptcyecom walnut and ash trees to save them from the axes and saws of a logging company after the attack to save the ship from a terriblebad fire navy reports concluded thursday certificates that would save shoppersperson anywhere from 50 to analyze lines such as the trend to save the forests env it is our turn to save the lake env joined a fight to save their forests env can we get busy to save the planet env if we had looked at the association ratio tables before labeling the 65 lines for save from we might have noticed the very large value for save forests suggesting that there may be an important pattern herein fact this pattern probably subsumes most of the occurrences of the quotsave animalquot pattern noticed in figure 2thus these tables do not provide semantic tags but they provide a powerful set of suggestions to the lexicographer for what needs to be accounted for in choosing a set of semantic tagsit may be that everything said here about save and other words is true only of 1987 american journaleseintuitively however many of the patterns discovered seem to be good candidates for conventions of general englisha future step would be to examine other more balanced corpora and test how well the patterns hold upwe began this paper with the psycholinguistic notion of word association norm and extended that concept toward the information theoretic definition of mutual informationthis provided a precise statistical calculation that could be applied to a very large corpus of text to produce a table of associations for tens of thousands of wordswe were then able to show that the table encoded a number of very interesting patterns ranging from doctor nurse to save fromwe finally concluded by showing how the patterns in the association ratio table might help a lexicographer organize a concordancein point of fact we actually developed these results in basically the reverse orderconcordance analysis is still extremely laborintensive and prone to errors of omissionthe ways that concordances are sorted do not adequately support current lexicographic practicedespite the fact that a concordance is indexed by a single word often lexicographers actually use a second word such as from or an equally common semantic concept such as a time adverbial to decide how to categorize concordance linesin other words they use two words to triangulate in on a word sensethis triangulation approach clusters concordance lines together into word senses based primarily on usage as opposed to intuitive notions of meaningthus the question of what is a word sense can be addressed with syntactic methods and need not address semantics even though the inventory of tags may appear to have semantic valuesthe triangulation approach 
requires quotartquot how does the lexicographer decide which potential cut points are quotinterestingquot and which are merely due to chancethe proposed association ratio score provides a practical and objective measure that is often a fairly good approximation to the quotartquot since the proposed measure is objective it can be applied in a systematic way over a large body of material steadily improving consistency and productivitybut on the other hand the objective score can be misleadingthe score takes only distributional evidence into accountfor example the measure favors set for over set down it does not know that the former is less interesting because its semantics are compositionalin addition the measure is extremely superficial it cannot cluster words into appropriate syntactic classes without an explicit preprocess such as church parts program or hindle parserneither of these preprocesses though can help highlight the quotnaturalquot similarity between nouns such as picture and photographalthough one might imagine a preprocess that would help in this particular case there will probably always be a class of generalizations that are obvious to an intelligent lexicographer but lie hopelessly beyond the objectivity of a computerdespite these problems the association ratio could be an important tool to aid the lexicographer rather like an index to the concordancesit can help us decide what to look for it provides a quick summary of what company our words do keep
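The verb-object association figures quoted above (4,112,943 SVO triples in the 1988 AP corpus; disconnect in 84 triples, telephone in 481, the pair together in 7) can be re-derived directly from the counts. The short Python sketch below is only an illustration: the function and variable names are invented here, and the counts are the ones reported in the text.

```python
from math import log2

def association_ratio(pair_count, x_count, y_count, n):
    """I(x; y) = log2( P(x, y) / (P(x) P(y)) ), with each probability
    estimated from raw counts over n observations."""
    return log2((pair_count / n) / ((x_count / n) * (y_count / n)))

# Counts reported in the text for subject-verb-object triples.
N = 4_112_943
print(association_ratio(7, 84, 481, N))   # disconnect/V, telephone/O: about 9.48
```

The only modelling assumption is the one stated in the text: "chance" is taken to be the product of the two marginal probabilities.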
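The triangulation idea (using a second, highly associated word to sort the concordance lines for a keyword into candidate senses) also lends itself to a small semi-automatic tool. The sketch below is hypothetical: the collocate list for save is only an example of what an association-ratio table might suggest, and the two sample lines are adapted from the concordance excerpts above.

```python
import re
from collections import defaultdict

# Example collocates for "save"; in practice these would be read off an
# association-ratio table rather than listed by hand.
COLLOCATES = ['from', 'money', 'life', 'lives', 'jobs', 'forests']

def triangulate(concordance_lines, keyword='save'):
    """Bucket concordance lines by the first collocate found to the right of
    the keyword, mimicking the two-word triangulation onto a word sense."""
    buckets = defaultdict(list)
    for line in concordance_lines:
        match = re.search(rf'\b{keyword}\b(.*)', line, flags=re.IGNORECASE)
        right_context = match.group(1).lower().split() if match else []
        tag = next((c for c in COLLOCATES if c in right_context), 'other')
        buckets[tag].append(line)
    return buckets

lines = ['we worked for a year to save enough for a down payment on a home',
         'rescuers who helped save the toddler from an abandoned well']
for tag, group in triangulate(lines).items():
    print(tag, len(group))   # -> other 1, then from 1
```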
J90-1003
Word Association Norms, Mutual Information, and Lexicography. The term word association is used in a very particular sense in the psycholinguistic literature. We will extend the term to provide the basis for a statistical description of a variety of interesting linguistic phenomena, ranging from semantic relations of the doctor/nurse type to lexico-syntactic co-occurrence constraints between verbs and prepositions. This paper will propose an objective measure, based on the information-theoretic notion of mutual information, for estimating word association norms from computer-readable corpora. The proposed measure, the association ratio, estimates word association norms directly from computer-readable corpora, making it possible to estimate norms for tens of thousands of words. In our work, the significance of an association is measured by the mutual information I(x, y), i.e. the probability of observing x and y together compared with the probability of observing x and y independently.
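Written out, the association ratio described here is point-wise mutual information estimated from corpus frequencies. A standard formulation, with f(.) denoting frequency counts over N observations (word windows, or SVO triples as in the verb-object tables), is:

```latex
I(x, y) \;=\; \log_2 \frac{P(x, y)}{P(x)\,P(y)}
        \;\approx\; \log_2 \frac{f(x, y)/N}{\bigl(f(x)/N\bigr)\,\bigl(f(y)/N\bigr)}
        \;=\; \log_2 \frac{N\,f(x, y)}{f(x)\,f(y)}
```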
semanticheaddriven generation present algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicable in particular unlike a previous bottomup generator it allows use of semantically nonmonotonic grammars yet unlike topdown methods it also permits leftrecursion the enabling design feature of the algorithm is its implicit traversal of the analysis tree for the string being generated in a semanticheaddriven fashion we present an algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicablein particular unlike a previous bottomup generator it allows use of semantically nonmonotonic grammars yet unlike topdown methods it also permits leftrecursionthe enabling design feature of the algorithm is its implicit traversal of the analysis tree for the string being generated in a semanticheaddriven fashionthe problem of generating a wellformed natural language expression from an encoding of its meaning possesses properties that distinguish it from the converse problem of recovering a meaning encoding from a given natural language expressionthis much is axiomaticin previous work however one of us attempted to characterize these differing properties in such a way that a single uniform architecture appropriately parameterized might be used for both natural language processesin particular we developed an architecture inspired by the earley deduction work of pereira and warren but which generalized that work allowing for its use in both a parsing and generation mode merely by setting the values of a small number of parametersas a method for generating natural language expressions the earley deduction method is reasonably successful along certain dimensionsit is quite simple general in its applicability to a range of unificationbased and logic grammar formalisms and uniform in that it places only one restriction on the form of the linguistic analyses allowed by the grammars used in generationin particular generation from grammars with recursions whose wellfoundedness relies on lexical information will terminate topdown generation regimes such as those of wedekind or dymetman and isabelle lack this property further discussion can be found in section 21unfortunately the bottomup lefttoright processing regime of earley generationas it might be calledhas its own inherent frailtiesefficiency considerations require that only grammars possessing a property of semantic monotonicity can be effectively used and even for those grammars processing can become overly nondeterministicthe algorithm described in this paper is an attempt to resolve these problems in a satisfactory manneralthough we believe that this algorithm could be seen as an instance of a uniform architecture for parsing and generationjust as the extended earley parser and the bottomup generator were instances of the generalized earley deduction architectureour efforts to date have been aimed foremost toward the development of the algorithm for generation alonewe will mention efforts toward this end in section 5as does the earleybased generator the new algorithm assumes that the grammar is a unificationbased or logic grammar with a phrase structure backbone and complex nonterminalsfurthermore and again consistent with previous work we assume that the nonterminals associate to the phrases they describe logical 
expressions encoding their possible meaningsbeyond these requirements common to logicbased formalisms the methods are generally applicablea variant of our method is used in van noord bug system part of mimo2 an experimental machine translation system for translating international news items of teletext which uses a prolog version of patrii similar to that of hirsh according to martin kay the strep machine translation project at the center for the study of language and information uses a version of our algorithm to generate with respect to grammars based on headdriven phrase structure grammar finally calder et al report on a generation algorithm for unification categorial grammar that appears to be a special case of oursdespite the general applicability of the algorithm we will for the sake of concreteness describe it and other generation algorithms in terms of their implementation for definiteclause grammars for ease of exposition the encoding will be a bit more cumbersome than is typically found in prolog dcg interpretersthe standard dcg encoding in prolog uses the notation where the are terms representing the grammatical category of an expression and its subconstituentsterminal symbols are introduced into rules by enclosing them in list brackets for example sbars that sssuch rules can be translated into prolog directly using a difference list encoding of string positions we assume readers are familiar with this technique because we concentrate on the relationship between expressions in a language and their logical forms we will assume that the category terms have both a syntactic and a semantic componentin particular the infix function symbol will be used to form categories of the form synsem where syn is the syntactic category of the expression and sem is an encoding of its semantics as a logical form the previous rule uses this notation for examplefrom a dcg perspective all the rules involve the single nonterminal with the given intended interpretationfurthermore the representation of grammars that we will postulate includes the threading of string positions explicitly so that a node description will be of the form node the first argument of the node functor is the category divided into its syntactic and semantic components the second argument is the difference list encoding of the substring it coversin summary a dcg grammar rule will be encoded as the clause node1 pop node i pop1 node i p_1pwe use the functor to distinguish this node encoding from the standard onethe righthandside elements are kept as a prolog list for easier manipulation by the interpreters we will buildwe turn now to the issue of terminal symbols on the righthand sides of rules in the node encodingduring the compilation process from the standard encoding to the node encoding the righthand side of a rule is converted from a list of categories and terminal strings to a list of nodes connected together by the differencelist threading technique used for standard dcg compilationat that point terminal strings can be introduced into the string threading and need never be considered furtherfor instance the previous rule becomes node nodethroughout we will alternate between the two encodings using the standard one for readability and the node encoding as the actual data for grammar interpretationas the latter more cumbersome representation is algorithmically generable from the former no loss of generality ensues from using bothexisting generation algorithms have efficiency or termination problems with respect to certain 
classes of grammarswe review the problems of both topdown and bottomup regimes in this sectionconsider a naive topdown generation mechanism that takes as input the semantics to generate from and a corresponding syntactic category and builds a complete tree topdown lefttoright by applying rules of the grammar nondeterministically to the fringe of the expanding treethis control regime is realized for instance when running a dcg quotbackwardsquot as a generatorconcretely the following dcg interpreterwritten in prolog and taking as its data the grammar in encoded formimplements such a generation methodclearly such a generator may not terminatefor example consider a grammar that includes the rules computational linguistics volume 16 number 1 march 1990 31 shieber et atsemantic headdriven grammar this grammar admits sentences like quotjohn leftquot and quotjohn father leftquot with logical form encodings left and left respectivelythe technique used here to build the logical forms is wellknown in logic grammarsgeneration with the goal gen sent using the generator above will result in application of the first rule to the node node sentha subgoal for the generation of a node node will resultto this subgoal the second rule will apply leading to a subgoal for generation of the node nodenp sentp1 which itself by virtue of the third rule leads to another instance of the np node generation subgoalof course the loop may now be repeated an arbitrary number of timesgraphing the tree being constructed by the traversal of this algorithm as in figure 1 immediately exhibits the potential for nontermination in the control structurethis is an instance of the general problem familiar from logic programming that a logic program may not terminate when called with a goal less instantiated than what was intended by the program designerseveral researchers have noted that a different ordering of the branches in the topdown traversal would in the case at hand remedy the nontermination problemfor the example above the solution is to generate the vp firstusing the goal generate left p1 in the course of which the variable np will become bound so that the generation from node will terminatewe might allow for reordering of the traversal of the children by sorting the nodes before generating themthis can be simply done by modifying the first clause of generatehere we have introduced a predicate sort_children to reorder the child nodes before generatingdymetman and isabelle propose a nodeordering solution to the topdown nontermination problem they allow the grammar writer to specify a separate goal ordering for parsing and for generation by annotating the rules by handstrzalkowski develops an algorithm for generating such annotations automaticallyin both of these cases the node ordering is known a priori and can be thought of as applying to the rules at compile timewedekind achieves the reordering by first generating nodes that are connected that is whose semantics is instantiatedsince the np is not connected in this sense but the vp is the latter will be expanded firstin essence the technique is a kind of goal freezing or implicit wait declaration this method is more general as the reordering is dynamic the ordering of child nodes can in principle at least be different for di fferent uses of the same rulethe generality seems necessary for cases in which the a priori ordering of goals is insufficient dymetman and isabelle also introduce goal freezing to control expansionalthough vastly superior to the naive topdown algorithm 
even this sort of amended topdown approach to generation based on goal freezing under one guise or another is insufficient with respect to certain linguistically plausible analysesthe symptom is an ordering paradox in the sortingfor example the quotcomplementsquot rule given by shieber in the patrii formalism can be encoded as the dcg rule topdown generation using this rule will be forced to expand the lower vp before its complement since lf is uninstantiated initiallyany of the reordering methods must choose to expand the child vp node firstbut in that case application of the rule can recur indefinitely leading to nonterminationthus no matter what ordering of subgoals is chosen nontermination resultsof course if one knew ahead of time that the subcategorizat ion list being built up as the value for syncat was bounded in size then an ad hoc solution would be to limit recursive use of this rule when that limit had been reachedbut even this ad hoc solution is problematic as there may be no principled bound on the size of the subcategorization listfor instance in analyses of dutch crossserial verb constructions subcategorization lists may be concatenated by syntactic rules many deadlockprone rules can be replaced by rules that allow reordering however he states that quotthe general solution to this normalization problem is still under investigationquot we think that such a general solution is unlikely because of cases like the one above in which no finite amount of partial execution can necessarily bring sufficient information to bear on the rule to allow orderingthe rule would have to be partially executed with respect to itself and all verbs so as to bring the lexical information that wellfounds the ordering to bear on the ordering problemin general this is not a finite process as the previous dutch example revealsthis does not deny that compilation methods may be able to convert a grammar into a program that generates without termination problemsin fact the partial execution techniques described by two of us could form the basis of a compiler built by partial execution of the new algorithm we propose below relative to a grammarhowever the compiler will not generate a program that generates topdown as strzalkowski does helpen voeren help feed in summary topdown generation algorithms even if controlled by the instantiation status of goals can fail to terminate on certain grammarsthe critical property of the example given above is that the wellfoundedness of the generation process resides in lexical information unavailable to topdown regimesthis property is the hallmark of several linguistically reasonable analyses based on lexical encoding of grammatical information such as are found in categorial grammar and its unificationbased and combinatorial variants in headdriven phrasestructure grammar and in lexicalfunctional grammarthe bottomup earleydeduction generator does not fall prey to these problems of nontermination in the face of recursion because lexical information is available immediatelyhowever several important frailties of the earley generation method were noted even in the earlier workfor efficiency generation using this earley deduction method requires an incomplete search strategy filtering the search space using semantic informationthe semantic filter makes generation from a logical form computationally feasible but preserves completeness of the generation process only in the case of semantically monotonic grammarsthose grammars in which the semantic component of each 
righthandside nonterminal subsumes some portion of the semantic component of the lefthandsidethe semantic monotonicity constraint itself is quite restrictiveas stated in the original earley generation paper quotperhaps the most immediate problem raised by earley generation is the strong requirement of semantic monotonicityfinding a weaker constraint on grammars that still allows efficient processing is thus an important research objectivequot although it is intuitively plausible that the semantic content of subconstituents ought to play a role in the semantics of their combinationthis is just a kind of compositionality claimthere are certain cases in which reasonable linguistic analyses might violate this intuitionin general these cases arise when a particular lexical item is stipulated to occur the stipulation being either lexical or grammatical second the lefttoright scheduling of earley parsing geared as it is toward the structure of the string rather than that of its meaning is inherently more appropriate for parsing than generation3 this manifests itself in an overly high degree of nondeterminism in the generation processfor instance various nondeterministic possibilities for generating a noun phrase might be entertained merely because the np occurs before the verb which would more fully specify and therefore limit the optionsthis nondeterminism has been observed in practicewe can think of a parsing or generation process as discovering an analysis tree4 one admitted by the grammar and zag saw computational linguistics volume 16 number 1 march 1990 33 shieber et atsemantic headdriven grammar satisfying certain syntactic or semantic conditions by traversing a virtual tree and constructing the actual tree during the traversalthe conditions to be satisfied possessing a given yield in the parsing case or having a root node labeled with given semantic information in the case of generationreflect the different premises of the two types of problemsthis perspective purposely abstracts issues of nondeterminism in the parsing or generation process as it assumes an oracle to provide traversal steps that happen to match the ethereal virtual tree being constructedit is this abstraction that makes it a useful expository device but should not be taken literally as a description of an algorithmfrom this point of view a naive topdown parser or generator performs a depthfirst lefttoright traversal of the treecompletion steps in earley algorithm whether used for parsing or generation correspond to a postorder traversal the lefttoright traversal order of both of these methods is geared towards the given information in a parsing problem the string rather than that of a generation problem the goal logical formit is exactly this mismatch between structure of the traversal and structure of the problem premise that accounts for the profligacy of these approaches when used for generationthus for generation we want a traversal order geared to the premise of the generation problem that is to the semantic structure of the sentencethe new algorithm is designed to reflect such a traversal strategy respecting the semantic structure of the string being generated rather than the string itselfgiven an analysis tree for a sentence we define the pivot node as the lowest node in the tree such that it and all higher nodes up to the root have the same semanticsintuitively speaking the pivot serves as the semantic head of the root nodeour traversal will proceed both topdown and bottomup from the pivot a sort of 
semanticheaddriven traversal of the treethe choice of this traversal allows a great reduction in the search for rules used to build the analysis treeto be able to identify possible pivots we distinguish a subset of the rules of the grammar the chain rules in which the semantics of some righthandside element is identical to the semantics of the lefthandsidethe righthandside element will be called the rule semantic headthe traversal then will work topdown from the pivot using a nonchain rule for if a chain rule were used the pivot would not be the lowest node sharing semantics with the rootinstead the pivot semantic head would beafter the nonchain rule is chosen each of its children must be generated recursivelythe bottomup steps to connect the pivot to the root of the analysis tree can be restricted to chain rules only as the pivot has the same semantics as the root and must therefore be the semantic headagain after a chain rule is chosen to move up one node in the tree being constructed the remaining children must be generated recursivelythe topdown base case occurs when the nonchain rule has no nonterminal children that is it introduces lexical material onlythe bottomup base case occurs when the pivot and root are trivially connected because they are one and the same nodean interesting side issue arises when there are two righthandside elements that are semantically identical to the lefthandsidethis provides some freedom in choosing the semantic head although the choice is not without ramificationsfor instance in some analyses of np structure a rule such as npnp detnp nbarnp is postulatedin general a chain rule is used bottomup from its semantic head and topdown on the nonsemantichead siblingsthus if a nonsemantichead subconstituent has the same semantics as the lefthandside a recursive topdown generation with the same semantics will be invokedin theory this can lead to nontermination unless syntactic factors eliminate the recursion as they would in the rule above regardless of which element is chosen as semantic headin a rule for relative clause introduction such as the following nbarn nbarn sbarn we can choose the nominal as semantic head to effect terminationhowever there are other problematic cases such as verbmovement analyses of verbsecond languageswe discuss this topic further in section 43to make the description more explicit we will develop a prolog implementation of the algorithm for dcgs along the way introducing some niceties of the algorithm previously glossed overas before a term of the form node represents a phrase with the syntactic and semantic information given by cat starting at position po and ending at position p in the string being generatedas usual for dcgs a string position is represented by the list of string elements after the positionthe generation process starts with a goal category and attempts to generate an appropriate node in the process instantiating the generated string gen generateto generate from a node we nondeterministically choose a nonchain rule whose lefthandside will serve as the pivotfor each righthandside element we recursively generate and then connect the pivot to the rootthe connection of a pivot to the root as noted before requires choice of a chain rule whose semantic head matches the pivot and the recursive generation of the remainder of its righthand sidewe assume a predicate applicable_ chain_ rule that holds if there is a chain rule admitting a node lhs as the lefthand side semhead as its semantic head and rhs as the remaining 
righthandside nodes such that the lefthandside node and the root node root can themselves be connectedthe base case occurs when the root and the pivot are the sameto implement the generator correctly identity checks like this one must use a sound unification algorithm with the occurs checkthe reason is simpleconsider for example a grammar with a gapthreading treatment of whmovement which might include the rule npseminxsem stating that an np with agreement agr and semantics sem can be empty provided that the list of gaps in the np can be represented as the difference list npseminx that is the list containing an np gap with the same agreement features agrbecause the above rule is a nonchain rule it will be considered when trying to generate any nongap np such as the proper noun npjohnthe base case of connect will try to unify that term with the head of the rule above leading to the attempted unification of x with npsemix an occurscheck failure that would not be caught by the default prolog unification algorithmthe base case incorporating the explicit call to a sound unification algorithm is therefore as follows connect trivially connect pivot to root unifynow we need only define the notion of an applicable chain or nonchain rulea nonchain rule is applicable if the semantics of the lefthand side of the rule matches that of the rootfurther we require a topdown check that syntactically the pivot can serve as the semantic head of the rootfor this purpose we assume a predicate chained_ nodes that codifies the transitive closure of the semantic head relation over categoriesthis is the correlate of the link relation used in leftcorner parsers with topdown filtering we direct the reader to the discussion by matsumoto et al or pereira and shieber for further informationa chain rule is applicable to connect a pivot to a root if the pivot can serve as the semantic head of the rule and the lefthand side of the rule is appropriate for linking to the root applicable_ chain_ rule choose a chain rule chain_ rule whose sem head matches pivot unify make sure the categories can connect chained_ nodesthe information needed to guide the generation can be computed automatically from the grammara program to compile a dcg into these tables has in fact been implementedthe details of the process will not be discussed further interested readers may write to the first author for the required prolog codewe turn now to a simple example to give a sense of the order of processing pursued by this generation algorithmas in previous examples the grammar fragment in figure 3 uses the infix operator to separate syntactic and semantic category information and subcategorization for complements is performed lexicallyconsider the generation from the category sentence declthe analysis tree that we will be implicitly traversing in the course of generation is given computational linguistics volume 16 number 1 march 1990 35 shieber et al semantic headdriven grammar in figure 4the rule numbers are keyed to the grammarthe pivots chosen during generation and the branches corresponding to the semantic head relation are shown in boldfacewe begin by attempting to find a nonchain rule that will define the pivotthis is a rule whose lefthandside semantics matches the root semantics decl in fact the only such nonchain rule is we conjecture that the pivot is labeled sentence declin terms of the tree traversal we are implicitly choosing the root node a as the pivotwe recursively generate from the child node b whose category is scall_upfor this 
category the pivot will be defined by the nonchain rule again we recursively generate for all the nonterminal elements of the righthand side of this rule of which there are nonewe must therefore connect the pivot f to the root ba chain rule whose semantic head matches the pivot must be chosenthe only choice is the rule unifying the pivot in we find that we must recursively generate the remaining rhs element npfriends and then connect the lefthandside node e with category vpjohnpcall_ up to the same root bthe recursive generation yields a node covering the string quotfriendsquot following the previously generated string quotcallsquotthe recursive connection will use the same chain rule generating the particle quotupquot and the new node to be connected dthis node requires the chain rule for connectionagain the recursive generation for the subject yields the string quotjohnquot and the new node to be connected scall_upthis last node connects to the root b by virtue of identitythis completes the process of generating topdown from the original pivot sentencedeclall that remains is to connect this pivot to the original rootagain the process is trivial by virtue of the base case for connectionthe generation process is thus completed yielding the string quotjohn calls friends upquotthe drawing in figure 4 summarizes the generation process by showing which steps were performed topdown or bottomup by arrows on the analysis tree branchesthe grammar presented here was forced for expository reasons to be trivialnonetheless several important properties of the algorithm are exhibited even in the preceding simple examplefirst the order of processing is not lefttorightthe verb was generated before any of its complementsbecause of this full information about the subject including agreement information was available before it was generatedthus the nondeterminism that is an artifact of lefttoright processing and a source of inefficiency in the earley generator is eliminatedindeed the example here was completely deterministic all rule choices were forcedin addition the semantic information about the particle quotupquot was available even though this information appears nowhere in the goal semanticsthat is the generator operated appropriately despite a semantically nonmonotonic grammarfinally even though much of the processing is topdown leftrecursive rules even deadlockprone rules are handled in a constrained manner by the algorithmfor these reasons we feel that the semanticheaddriven algorithm is a significant improvement over topdown methods and the previous bottomup method based on earley deductionwe will outline here how the new algorithm can generate from a quantified logical form sentences with quantified nps one of whose readings is the original logical form that is how it performs quantifier lowering automaticallyfor this we will associate a quantifier store with certain categories and add to the grammar suitable store manipulation ruleseach category whose constituents may create store elements will have a store featurefurthermore for each such category whose semantics can be the scope of a quantifier there will be an optional nonchain rule to take the top element of an ordered store and apply it to the semantics of the categoryfor example here is the rule for sentences squant sistoresthe term quant represents a quantified formula with quantifier q bound variable x restriction r and scope s qterm is the corresponding store elementin addition some mechanism is needed to combine the stores of the 
immediate constituents of a phrase into a store for the phrasefor example the combination of subject and complement stores for a verb into a clause store is done in one of our test grammars by lexical rules such as vp0 nps scgen generates which states that the store sc of a clause with main verb quotlovequot and the stores ss and so of the subject and object the verb subcategorizes for satisfy the constraint shuffle meaning that sc is an interleaving of elements of ss and so in their original order5 constraints in grammar rules such as the one above are handled in the generator by the clause generate which passes the conditions to prolog for executionthis extension must be used with great care because it is in general difficult to know the instantion state of such goals when they are called from the generator and as noted before underinstantiated goals may lead to nonterminationa safer scheme would rely on delaying the execution of goals until their required instantiation patterns are satisfied finally it is necessary to deal with the noun phrases that create store elementsignoring the issue of how to treat quantifiers from within complex noun phrases we need lexical rules for determiners of the form stating that the semantics of a quantified np is simply the variable bound by the store element arising from the npfor rules of this form to work properly it is essential that distinct bound logicalform variables be represented as distinct constants in the terms encoding the logical formsthis is an instance of the problem of coherence discussed in section 41figure 5 shows the analysis tree traversal for generating the sentence quotno program generates every sentencequot from the logical form deol quantgen the numbers labeling nodes in the figure correspond to tree traversal orderwe will only discuss the aspects of the traversal involving the new grammer rules given abovethe remaining rules are like the ones in figure 3 except that nonterminals have an additional store argument where necessarypivot nodes b and c result from the application of rule to reverse the unstoring of the quantifiers in the goal logical formthe next pivot node is node j where rule is appliedfor the application of this rule to terminate it is necessary that at least either the first two or the last argument of the shuffle condition be instantiatedthe pivot node must obtain the required store instantiation from the goal node being generatedthis happens automatically in the rule applicability check that identified the pivot since the table chained_ nodes identifies the store variables for the goal and pivot nodesgiven the sentence store the shuffle predicate nondeterministically generates every the substores for the constituents subcategorized for by the verbthe next interesting event occurs at pivot node i where rule is used to absorb the store for the object quantified noun phrasethe bound variable for the stored quantifier in this case s must be the same as the meaning of the noun phrase and determiner6 this condition was already used to filter out inappropriate shuffle results when node l was selected as pivot for a noun phrase goal again through the nonterminal argument identifications included in the chained_ nodes tablethe rules outlined here are less efficient than they might be because during the distribution of store elements among the subject and complements of a verb no check is performed as to whether the variable bound by a store element actually appears in the semantics of the phrase to which it is being 
assigned leading to many dead ends in the generation processalso the rules are sound for generation but not for analysis because they do not enforce the constraint that every occurrence of a variable in logical form be outscoped by the variable binderadding appropriate side conditions to the rules following the constraints discussed by hobbs and shieber would not be difficultthe basic semanticheaddriven generation algorithm can be augmented in various ways so as to encompass some important analyses and constraintsin particular we discuss the incorporation of wedekind defines completeness and coherence of a generation algorithm as followssuppose a generator derives a string w from a logical form s and the grammar assigns to w the logical form athe generator is complete if s always subsumes a and coherent if a always subsumes s the generator defined in section 31 is not coherent or complete in this sense it requires only that a and s be compatible that is unifiableif the logicalform language and semantic interpretation system provide a sound treatment of variable binding and scope abstraction and application then completeness and coherence will be irrelevant because the logical form of any phrase will not contain free variableshowever neither semantic projections in lexicalfunctional grammar nor definiteclause grammars provide the means for such a sound treatment logicalform variables or missing arguments of predicates are both encoded as unbound variables at the description levelunder such conditions completeness and coherence become importantfor example suppose a grammar associated the following strings and logical formsjohn ate a nice yellow banana the generator of section 31 would generate any of these sentences for the logical form eat and would generate quotjohn atequot for the logical form eat coherence can be achieved by removing the confusion between objectlevel and metalevel variables mentioned above that is by treating logicalform variables as constants at the description levelin practice this can be achieved by replacing each variable in the semantics from which we are generating by a new distinct constant these new constants will not unify with any augmentations to the semanticsa suitable modification of our generator would be this leaves us with the completeness problemthis problem arises when there are phrases whose semantics are not ground at the description level but instead subsume the goal logical form or generationfor instance in our hypothetical example the string quotjohn eatsquot will be generated for semantics eatthe solution is to test at the end of the generation procedure whether the feature structure that is found is complete with respect to the original feature structurehowever because of the way in which topdown information is used it is unclear what semantic information is derived by the rules themselves and what semantic information is available because of unifications with the original semanticsfor this reason quotshadowquot variables are added to the generator that represent the feature structure derived by the grammar itselffurthermore a copy of the semantics of the original feature structure is made at the start of the generation processcompleteness is achieved by testing whether the semantics of the shadow is subsumed by the copyas it stands the generation algorithm chooses particular lexical forms onlinethis approach can lead to a certain amount of unnecessary nondeterminismthe choice of a particular form depends on the available semantic and syntactic 
informationsometimes there is not enough information available to choose a form deterministicallyfor instance the choice of verb form might depend on syntactic features of the verb subject available only after the subject has been generatedthis nondeterminism can be eliminated by deferring lexical choice to a postprocessinflectional and orthographical rules are only applied when the generation process is finished and all syntactic features are knownin short the generator will yield a list of lexical items instead of a list of wordsto this list the inflectional and orthographical rules are appliedthe mimo2 system incorporates such a mechanism into the previous generation algorithm quite successfullyexperiments with particular grammars of dutch spanish and english have shown that the delay mechanism results in a generator that is faster by a factor of two or three on short sentencesof course the same mechanism could be added to any of the other generation techniques discussed in this paper it is independent of the traversal orderthe particular approach to delaying lexical choice found in the mimo2 system relies on the structure of the system morphological component as presented in figure 6the figure shows how inflectional rules orthographical rules morphology and syntax are related orthographical rules are applied to the results of inflectional rulesthese infectional rules are applied to the results of the morphological rulesthe result of the orthographical part are then input for the syntaxgrammar of syntax and semantics twolevel orthography paradigmatic inflection morphological unification grammar for derivations compounds and lexical rules lexicon of stems computational linguistics volume 16 number 1 march 1990 39 shieber et atsemantic headdriven grammar however in the lexicaldelayed scheme the inflectional and orthographical rules are delayedduring the generation process the results of the morphological grammar are used directlywe emphasize that this is possible only because the inflectional and orthographical rules are monotonic in the sense that they only further instantiate the feature structure of a lexical item but do not change itthis implies for example that a rule that relates an active and a passive variant of a verb will not be an inflectional rule but rather a rule in the morphological grammar although the rule that builds a participle from a stem may in fact be an inflectional rule if it only instantiates the feature vformwhen the generation process proper is finished the delayed rules are applied and the correct forms can be chosen deterministicallythe delay mechanism is useful in the following two general cases first the mechanism is useful if an inflectional variant depends on syntatic features that are not yet availablethe particular choice of whether a verb has singular or plural inflection depends on the syntactic agreement features of its subject these are only available after the subject has been generatedother examples may include the particular choice of personal and relative pronouns and so forthsecond delaying lexical choice is useful when there are several variants for some word that are equally possible because they are semantically and syntactically identicalfor example a word may have several spelling variantsif we delay orthography then the generation process computes with only one quotabstractquot variantafter the generation process is completed several variants can be filled in for this abstract oneexamples from english include words that take both regular 
and irregular tense forms and variants such as quottravellertravelerquot realizerealisequot etcthe success of the generation algorithm presented here comes about because lexical information is available as soon as possiblereturning to the dutch examples in section 21 the list of subcategorization elements is usually known in timesemantic heads can then deterministically pick out their argumentsan example in which this is not the case is an analysis of german and dutch where the position of the verb in root sentences is different from its position in subordinates in most traditional analyses it is assumed that the verb in root sentences has been quotmovedquot from the final position to the second positionkoster argues for this analysis of dutchthus a simple root sentence in german and dutch is analyzed as in the following examples vandaag kust de man de vrouw today kisses the man the woman vandaag heeft de man de vrouw e gekust today has the man the woman kissed vandaag ziet en hoortli de man de vrouw ei today sees and hears the man the woman in dcg such an analysis can easily be defined by unifying the information on the verb in second position to some empty verb in final position as exemplified by the simple grammar for a dutch fragment in figure 7in this grammar a special empty element is defined corresponding to the missing verball information on the verb in second position is percolated through the rules to this empty verbtherefore the definition of the several vp rules is valid for both root and subordinate clauses7 the problem comes about because the generator can at some point predict the empty verb as the pivot of the constructionhowever in the definition of this empty verb no information will get instantiatedtherefore the vp complement rule can be applied an unbounded number of timesthe length of the lists of complements now is not known in advance and the generator will not terminatevan noord proposes an ad hoc solution that assumes that the empty verb is an inflectional variant of a verbas inflection rules are delayed the generation process acts as if the empty verb is an ordinary verb thereby circumventing the problemhowever this solution only works if the head that is displaced is always lexicalthis is not the case in generalin dutch the verb second position can not only be filled by lexical verbs but also by a conjunction of verbssimilarly spanish clause structure can be analyzed by assuming the quotmovementquot of complex verbal constructions to the second positionfinally in german it is possible to topicalize a verbal headnote that in these problematic cases the head that lacks sufficient information is overtly realized in a position where there is enough information thus it appears that the problem might be solved if the antecedent is generated before the anaphorthis is the case if the antecedent is the semantic head of the clause the anaphor will then be instantiated via topdown information through the chained_nodes predicatehowever in the example grammar the antecedent is not necessarily the semantic head of the clause because of the vp modifier rule typically there is a relation between the empty anaphor and some antecedent expressed implicitly in the grammar in the case at hand it comes about by percolating the information through different rules from the antecedent to the anaphorwe propose to make this relation explicit by defining an empty head with a prolog clause using the predicate head_gap head _ gapsem vsemsemsuch a definition can intuitively be understood as 
follows once there is some node x then there could just as well have been the empty node y note that a lot of information is shared between the two nodes thereby making the relation between anaphor and antecedent explicitsuch rules can be incorporated in the generator by adding the following clause for connect connect head_ gap connectnote that the problem is now solved because the gap will only be selected after its antecedent has been builtsome parts of this antecedent are then unified with some parts of the gapthe subcategorization list for example will thus be instantiated in timewe mentioned earlier that although the algorithm as stated is applicable specifically to generation we expect that it could be thought of as an instance of a uniform architecture for parsing and generation as the earley generation algorithm wastwo pieces of evidence point this wayfirst martin kay has developed a parsing algorithm that seems to be the parsing correlate to the generation algorithm presented hereits existence might point the way toward a uniform architecturesecond one of us has developed a general proof procedure for horn clauses that can serve as a skeleton for both a semanticheaddriven generator and a leftcorner parserhowever the parameterization is much more broad than for the uniform earley architecture further enhancements to the algorithm are envisionedfirst any system making use of a tabular link predicate over complex nonterminals is subject to a problem of spurious redundancy in processing if the elements in the link table are not mutually exclusivefor instance a single chain rule might be considered to be applicable twice because of the nondeterminism of the call to chained_nodesthis general problem has to date received little attention and no satisfactory solution is found in the logic grammar literaturemore generally the backtracking regimen of our implementation of the algorithm may lead to recomputation of resultsagain this is a general property of backtrack methods and is not particular to our applicationthe use of dynamic programming techniques as in chart parsing would be an appropriate augmentation to the implementation of the algorithmhappily such an augmentation would serve to eliminate the redundancy caused by the linking relation as wellfinally to incorporate a general facility for auxiliary conditions in rules some sort of delayed evaluation triggered by appropriate instantiation would be desirable as mentioned in section 34none of these changes however constitutes restructuring of the algorithm rather they modify its realization in significant and important waysthe research reported herein was primarily completed while shieber and pereira were at the artificial intelligence center sri internationalthey and moore were supported in this work by a contract with the nippon telephone and telegraph corporation and by a gift from the systems development foundation as part of a coordinated research effort with the center for the study of language and information stanford university van noord was supported by the european community and the nederlands bureau voor bibliotheekwezen en informatieverzorgin through the eurotra projectwe would like to thank mary dalrymple and louis des tombe for their helpful discussions regarding this work the artificial intelligence center for their support of the research and the participants in the mimo2 project a research machine translation project of some members of eurotrautrecht
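To make the generate/connect control regime described above concrete, here is a deliberately stripped-down Python sketch: a non-chain rule fixes the pivot and is expanded top-down, and chain rules then climb from the pivot back to the goal. It is an illustration under strong simplifying assumptions (ground, exactly matching semantics instead of unification, a hard-coded toy grammar, no chained_nodes link table, no backtracking), and all the Python names are invented here rather than taken from the paper's Prolog implementation.

```python
# Semantics are ground nested tuples, categories are strings; this sketches
# the control regime only, not the DCG encoding.

# Non-chain rules: the LHS semantics is not shared with any RHS element; they
# decompose the goal semantics or introduce lexical material.
def nonchain_rules(sem):
    """Yield (pivot_category, children); a child is ('word', w) or ('node', (cat, sem))."""
    if isinstance(sem, tuple) and sem[0] == 'decl':      # sentence/decl(S) --> s/S
        yield 'sentence', [('node', ('s', sem[1]))]
    if isinstance(sem, tuple) and sem[0] == 'call':      # lexical entry for "calls"
        yield 'v', [('word', 'calls')]
    if sem == 'john':
        yield 'np', [('word', 'john')]
    if sem == 'friends':
        yield 'np', [('word', 'friends')]

# Chain rules: one RHS element (the semantic head) shares the LHS semantics.
def chain_rules(cat, sem):
    """Yield (lhs_category, siblings_before_head, siblings_after_head)."""
    if cat == 'v' and isinstance(sem, tuple) and sem[0] == 'call':
        yield 'vp', [], [('np', sem[2])]                 # vp/S --> v/S  np/Obj
    if cat == 'vp' and isinstance(sem, tuple) and sem[0] == 'call':
        yield 's', [('np', sem[1])], []                  # s/S  --> np/Subj  vp/S

def generate(cat, sem):
    """Realise the goal (cat, sem) as a list of words."""
    for pivot_cat, children in nonchain_rules(sem):      # the rule's LHS is the pivot
        words = []
        for kind, item in children:                      # expand the pivot top-down
            words += [item] if kind == 'word' else generate(*item)
        return connect(pivot_cat, sem, words, cat)       # climb from pivot to goal
    raise ValueError('no non-chain rule applies to %r' % (sem,))

def connect(cat, sem, words, goal_cat):
    """Connect a pivot to the goal bottom-up, using chain rules only."""
    if cat == goal_cat:                                  # base case: pivot is the goal
        return words
    for lhs_cat, before, after in chain_rules(cat, sem):
        left = [w for node in before for w in generate(*node)]
        right = [w for node in after for w in generate(*node)]
        return connect(lhs_cat, sem, left + words + right, goal_cat)
    raise ValueError('cannot connect %s to %s' % (cat, goal_cat))

print(' '.join(generate('sentence', ('decl', ('call', 'john', 'friends')))))
# -> john calls friends
```

Tracing the call for decl(call(john, friends)) reproduces the order of events in the worked example (minus the particle): the verb is realised first from its lexical, non-chain entry, the object is generated while connecting v up to vp, and the subject is generated last even though it comes first in the output string.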
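The store-combination constraint used in the quantifier-lowering grammar, namely that a clause's store is an interleaving of its constituents' stores with the order inside each store preserved, is easy to state on its own. A minimal sketch follows; the name shuffle is kept from the text, everything else is illustrative.

```python
def shuffle(xs, ys):
    """Yield every interleaving of xs and ys that preserves the internal
    order of each list (the constraint used to merge quantifier stores)."""
    if not xs:
        yield list(ys)
    elif not ys:
        yield list(xs)
    else:
        for rest in shuffle(xs[1:], ys):
            yield [xs[0]] + rest
        for rest in shuffle(xs, ys[1:]):
            yield [ys[0]] + rest

# Two one-element stores (standing for the subject's and the object's stored
# quantifier terms) can be merged in either order, licensing both scopings.
print(list(shuffle(['no(p)'], ['every(s)'])))
# -> [['no(p)', 'every(s)'], ['every(s)', 'no(p)']]
```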
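The coherence repair discussed in section 4.1, treating logical-form variables in the goal as constants so that generation cannot add material the goal does not contain, amounts to a term walk that replaces every metalevel variable with a fresh, otherwise unused symbol. The Python rendering below is hypothetical; the Var class and the freeze_variables name are invented for the illustration.

```python
import itertools

class Var:
    """A metalevel (Prolog-style) variable occurring in a logical form."""
    def __init__(self, name):
        self.name = name

_fresh = itertools.count()

def freeze_variables(term, mapping=None):
    """Return a copy of term in which every Var is replaced by a fresh constant
    symbol (here just a string standing in for a gensym'd constant); the same
    Var always maps to the same constant, so the goal can still be matched but
    never further instantiated by extra material."""
    if mapping is None:
        mapping = {}
    if isinstance(term, Var):
        if term not in mapping:
            mapping[term] = '_const%d' % next(_fresh)
        return mapping[term]
    if isinstance(term, tuple):
        return tuple(freeze_variables(t, mapping) for t in term)
    return term

X = Var('X')
print(freeze_variables(('eat', 'john', X)))   # -> ('eat', 'john', '_const0')
```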
J90-1004
semanticheaddriven generation. we present an algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicable. in particular, unlike a previous bottomup generator, it allows use of semantically nonmonotonic grammars; yet unlike topdown methods, it also permits leftrecursion. the enabling design feature of the algorithm is its implicit traversal of the analysis tree for the string being generated in a semanticheaddriven fashion. we introduce a headdriven algorithm for generating from logical forms.
a statistical approach to machine translation in this paper we present a statistical approach to machine translation we describe the application of our approach to translation from french to english and give preliminary results. in this paper we present a statistical approach to machine translation. we describe the application of our approach to translation from french to english and give preliminary results. the field of machine translation is almost as old as the modern digital computer. in 1949 warren weaver suggested that the problem be attacked with statistical methods and ideas from information theory, an area which he, claude shannon, and others were developing at the time. although researchers quickly abandoned this approach, advancing numerous theoretical objections, we believe that the true obstacles lay in the relative impotence of the available computers and the dearth of machine-readable text from which to gather the statistics vital to such an attack. today, computers are five orders of magnitude faster than they were in 1950 and have hundreds of millions of bytes of storage. large machine-readable corpora are readily available. statistical methods have proven their value in automatic speech recognition and have recently been applied to lexicography and to natural language processing. we feel that it is time to give them a chance in machine translation. the job of a translator is to render in one language the meaning expressed by a passage of text in another language. this task is not always straightforward. for example, the translation of a word may depend on words quite far from it. some english translators of proust's seven-volume work a la recherche du temps perdu have striven to make the first word of the first volume the same as the last word of the last volume, because the french original begins and ends with the same word. thus, in its most highly developed form, translation involves a careful study of the original text and may even encompass a detailed analysis of the author's life and circumstances. we, of course, do not hope to reach these pinnacles of the translator's art. in this paper we consider only the translation of individual sentences. usually there are many acceptable translations of a particular sentence, the choice among them being largely a matter of taste. we take the view that every sentence in one language is a possible translation of any sentence in the other. we assign to every pair of sentences (s, t) a probability pr(t|s), to be interpreted as the probability that a translator will produce t in the target language when presented with s in the source language. we expect pr(t|s) to be very small for pairs of sentences that are not translations of one another and relatively large for pairs that are. we view the problem of machine translation, then, as follows. given a sentence t in the target language, we seek the sentence s from which the translator produced t. we know that our chance of error is minimized by choosing that sentence s that is most probable given t. thus, we wish to choose s so as to maximize pr(s|t). using bayes' theorem, we can write pr(s|t) = pr(s) pr(t|s) / pr(t). the denominator on the right of this equation does not depend on s, and so it suffices to choose the s that maximizes the product pr(s) pr(t|s). call the first factor in this product the language model probability of s and the second factor the translation probability of t given s. although the interaction of these two factors can be quite profound, it may help the reader to think of the translation probability as suggesting words from the source language that might have produced the words that we observe in the target sentence, and to think of the language model probability as suggesting an order in which to place these source words.
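as a concrete, deliberately naive illustration of this decision rule, the sketch below scores each member of a caller-supplied candidate set by the product pr(s) pr(t|s). the functions lm and tm are assumptions standing in for the language and translation models described below; a real system searches rather than enumerating, and the probabilities are assumed to be nonzero:

```python
import math

def decode(target, candidate_sources, lm, tm):
    """pick the source sentence s maximizing pr(s) * pr(t|s).
    lm(s) and tm(t, s) are caller-supplied probability functions;
    log-probabilities are summed to avoid underflow."""
    best, best_score = None, float("-inf")
    for s in candidate_sources:
        score = math.log(lm(s)) + math.log(tm(target, s))
        if score > best_score:
            best, best_score = s, score
    return best
```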
thus, as illustrated in figure 1, a statistical translation system requires a method for computing language model probabilities, a method for computing translation probabilities, and, finally, a method for searching among possible source sentences s for the one that gives the greatest value for pr(s) pr(t|s). in the remainder of this paper we describe a simple version of such a system that we have implemented. (figure 1, a statistical translation system: a source language model and a translation model furnish a probability distribution over source-target sentence pairs. the joint probability pr(s, t) of the pair is the product of the probability pr(s) computed by the language model and the conditional probability pr(t|s) computed by the translation model. the parameters of these models are estimated automatically from a large database of source-target sentence pairs using a statistical algorithm which optimizes, in an appropriate sense, the fit between the models and the data. a decoder performs the actual translation: given a sentence t in the target language, the decoder chooses a viable translation by selecting that sentence s in the source language for which pr(s|t), and hence the product pr(s) pr(t|s), is maximum.) in the next section we describe our language model for pr(s), and in section 3 we describe our translation model for pr(t|s). in section 4 we describe our search procedure. in section 5 we explain how we estimate the parameters of our models from a large database of translated text. in section 6 we describe the results of two experiments we performed using these models. finally, in section 7 we conclude with a discussion of some improvements that we intend to implement. given a word string s1 s2 ... sn, we can, without loss of generality, write pr(s1 s2 ... sn) = pr(s1) pr(s2 | s1) ... pr(sn | s1 s2 ... sn-1). thus, we can recast the language modeling problem as one of computing the probability of a single word given all of the words that precede it in a sentence. at any point in the sentence, we must know the probability of an object word sj given a history s1 s2 ... sj-1. because there are so many histories, we cannot simply treat each of these probabilities as a separate parameter. one way to reduce the number of parameters is to place each of the histories into an equivalence class in some way and then to allow the probability of an object word to depend on the history only through the equivalence class into which that history falls. in an n-gram model, two histories are equivalent if they agree in their final n-1 words. thus, in a bigram model, two histories are equivalent if they end in the same word, and in a trigram model, two histories are equivalent if they end in the same two words. while n-gram models are linguistically simple-minded, they have proven quite valuable in speech recognition and have the redeeming feature that they are easy to make and to use.
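a minimal sketch of such an equivalence-class model, using unsmoothed maximum-likelihood counts; the paper's own estimates are smoothed (for example by the method of jelinek and mercer cited later), so this is only an illustration of the trigram equivalence classes, not their estimator:

```python
from collections import defaultdict

def train_trigram(sentences):
    """maximum-likelihood trigram and bigram-history counts over tokenized sentences."""
    tri, bi = defaultdict(int), defaultdict(int)
    for words in sentences:
        padded = ["<s>", "<s>"] + words + ["</s>"]
        for i in range(2, len(padded)):
            bi[tuple(padded[i-2:i])] += 1
            tri[tuple(padded[i-2:i+1])] += 1
    return tri, bi

def trigram_prob(tri, bi, w, history):
    """pr(w | history), where only the last two words of the history matter."""
    h = tuple((["<s>", "<s>"] + history)[-2:])
    return tri[h + (w,)] / bi[h] if bi[h] else 0.0
```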
been constructed for a speech recognition system. we were able to recover 24 of the sentences exactly. sometimes the sentence that we found to be most probable was not an exact reproduction of the original but conveyed the same meaning. in other cases, of course, the most probable sentence according to our model was just garbage. if we count as correct all of the sentences that retained the meaning of the original, then 32 of the 38 were correct. some examples of the original sentences and the sentences recovered from the bags are shown in figure 2. (figure 2, examples of bag model reconstructions: please give me your response as soon as possible → please give me your response as soon as possible; reconstruction preserving meaning: now let me mention some of the disadvantages → let me mention some of the disadvantages now; garbage reconstruction: in our organization research has two missions → in our missions research organization has two.) we have no doubt that if we had been able to handle longer sentences, the results would have been worse, and that the probability of error grows rapidly with sentence length. for simple sentences, it is reasonable to think of the french translation of an english sentence as being generated from the english sentence word by word. thus, in the sentence pair (jean aime marie | john loves mary), we feel that john produces jean, loves produces aime, and mary produces marie. we say that a word is aligned with the word that it produces. thus, john is aligned with jean in the pair that we just discussed. of course, not all pairs of sentences are as simple as this example. in the pair (jean n'aime personne | john loves nobody), we can again align john with jean and loves with aime, but now nobody aligns with both n' and personne. sometimes words in the english sentence of the pair align with nothing in the french sentence, and, similarly, occasionally words in the french member of the pair do not appear to go with any of the words in the english sentence. we refer to a picture such as that shown in figure 3 as an alignment. (figure 3 shows an alignment of the pair les propositions ne seront pas mises en application maintenant | the proposal will not now be implemented.) an alignment indicates the origin in the english sentence of each of the words in the french sentence. we call the number of french words that an english word produces in a given alignment its fertility in that alignment. if we look at a number of pairs, we find that words near the beginning of the english sentence tend to align with words near the beginning of the french sentence and that words near the end of the english sentence tend to align with words near the end of the french sentence. but this is not always the case. sometimes a french word will appear quite far from the english word that produced it. we call this effect distortion. distortions will, for example, allow adjectives to precede the nouns that they modify in english but to follow them in french. it is convenient to introduce the following notation for alignments. we write the french sentence followed by the english sentence and enclose the pair in parentheses. we separate the two by a vertical bar. following each of the english words, we give a parenthesized list of the positions of the words in the french sentence with which it is aligned. if an english word is aligned with no french words, then we omit the list. thus, (jean aime marie | john(1) loves(2) mary(3)) is the simple alignment with which we began this discussion. in the alignment (le chien est battu par jean | john(6) does beat(3,4) the(1) dog(2)), john produces jean, does produces nothing, beat produces est battu, the produces le, dog produces chien, and par is not produced by any of the english words.
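the alignment notation can be mirrored directly in a small data structure. the encoding below is only illustrative (it is not from the paper); it uses the (le chien est battu par jean | john(6) does beat(3,4) the(1) dog(2)) example, with fertility falling out as the length of each english word's position list:

```python
# hypothetical encoding of (le chien est battu par jean | john(6) does beat(3,4) the(1) dog(2))
french = ["le", "chien", "est", "battu", "par", "jean"]
alignment = {"john": [6], "does": [], "beat": [3, 4], "the": [1], "dog": [2]}

def fertility(english_word):
    """number of french words the english word produces in this alignment."""
    return len(alignment[english_word])

def unaligned_french_positions():
    """french words produced by no english word (here 'par', position 5)."""
    produced = {p for positions in alignment.values() for p in positions}
    return [i + 1 for i in range(len(french)) if i + 1 not in produced]
```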
rather than describe our translation model formally, we present it by working an example. to compute the probability of the alignment (le chien est battu par jean | john(6) does beat(3,4) the(1) dog(2)), begin by multiplying the probability that john has fertility 1 by pr(jean|john). then multiply by the probability that does has fertility 0. next, multiply by the probability that beat has fertility 2 times pr(est|beat) pr(battu|beat), and so on. the word par is produced from a special english word, the null word. the result is the product of all of these fertility and translation probabilities. finally, factor in the distortion probabilities. our model for distortions is at present very simple. we assume that the position of the target word depends only on the length of the target sentence and the position of the source word. therefore, a distortion probability has the form pr(i|j, l), where i is a target position, j a source position, and l the target length. in summary, the parameters of our translation model are a set of fertility probabilities pr(n|e) for each english word e and for each fertility n from 0 to some moderate limit, in our case 25; a set of translation probabilities pr(f|e), one for each element f of the french vocabulary and each member e of the english vocabulary; and a set of distortion probabilities pr(i|j, l) for each target position i, source position j, and target length l. we limit i, j, and l to the range 1 to 25.
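a sketch of how these three parameter sets combine to score a single alignment. the function below is an illustrative simplification, not the paper's model: the probability tables are assumed to be plain dictionaries, and the special null word and the combinatorial factors of the full model are ignored:

```python
def alignment_probability(alignment, french, fertility_p, translation_p, distortion_p):
    """score one alignment as a product of fertility, translation, and distortion
    probabilities.  `alignment` maps each (english_word, english_position) pair to
    the list of french positions it produces, e.g.
    {("john", 1): [6], ("does", 2): [], ("beat", 3): [3, 4], ...}."""
    l = len(french)                          # target (french) length
    p = 1.0
    for (e_word, j), positions in alignment.items():
        p *= fertility_p[e_word].get(len(positions), 0.0)
        for i in positions:                  # i is a french (target) position
            p *= translation_p[e_word].get(french[i - 1], 0.0)
            p *= distortion_p.get((i, j, l), 0.0)
    return p
```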
in searching for the sentence s that maximizes pr(s) pr(t|s), we face the difficulty that there are simply too many sentences to try. instead, we must carry out a suboptimal search. we do so using a variant of the stack search that has worked so well in speech recognition. in a stack search, we maintain a list of partial alignment hypotheses. initially, this list contains only one entry, corresponding to the hypothesis that the target sentence arose in some way from a sequence of source words that we do not know. in the alignment notation introduced earlier, this entry might be written with an asterisk as a place holder for the unknown sequence of source words. the search proceeds by iterations, each of which extends some of the most promising entries on the list. an entry is extended by adding one or more additional words to its hypothesis; for example, we might extend the initial entry above by hypothesizing one or more of the source words. the search ends when there is a complete alignment on the list that is significantly more promising than any of the incomplete alignments. sometimes the sentence found in this way, call it s', is not the same as the sentence s that a translator might have been working on. when s' itself is not an acceptable translation, then there is clearly a problem. if pr(s') pr(t|s') is greater than pr(s) pr(t|s), then the problem lies in our modeling of the language or of the translation process. if, however, pr(s') pr(t|s') is less than pr(s) pr(t|s), then our search has failed to find the most likely sentence. we call this latter type of failure a search error. in the case of a search error, we can be sure that our search procedure has failed to find the most probable source sentence, but we cannot be sure that were we to correct the search we would also correct the error. we might simply find an even more probable sentence that nonetheless is incorrect. thus, while a search error is a clear indictment of the search procedure, it is not an acquittal of either the language model or the translation model. both the language model and the translation model have many parameters that must be specified. to estimate these parameters accurately, we need a large quantity of data. for the parameters of the language model, we need only english text, which is available in computer-readable form from many sources; but for the parameters of the translation model, we need pairs of sentences that are translations of one another. by law, the proceedings of the canadian parliament are kept in both french and english. as members rise to address a question before the house or otherwise express themselves, their remarks are jotted down in whichever of the two languages is used. after the meeting adjourns, a collection of translators begins working to produce a complete set of the proceedings in both french and english. these proceedings are called hansards, in remembrance of the publisher of the proceedings of the british parliament in the early 1800s. all of these proceedings are available in computer-readable form, and we have been able to obtain about 100 million words of english text and the corresponding french text from the canadian government. although the translations are not made sentence by sentence, we have been able to extract about three million pairs of sentences by using a statistical algorithm based on sentence length. approximately 99 percent of these pairs are made up of sentences that are actually translations of one another. it is this collection of sentence pairs, or more properly various subsets of this collection, from which we have estimated the parameters of the language and translation models. in the experiments we describe later, we use a bigram language model. thus, we have one parameter for every pair of words in the source language. we estimate these parameters from the counts of word pairs in a large sample of text from the english part of our hansard data using a method described by jelinek and mercer. in section 3 we discussed alignments of sentence pairs. if we had a collection of aligned pairs of sentences, then we could estimate the parameters of the translation model by counting, just as we do for the language model. however, we do not have alignments but only the unaligned pairs of sentences. this is exactly analogous to the situation in speech recognition, where one has the script of a sentence and the time waveform corresponding to an utterance of it, but no indication of just what in the time waveform corresponds to what in the script. in speech recognition, this problem is attacked with the em algorithm. we have adapted this algorithm to our problem in translation. in brief, it works like this: given some initial estimate of the parameters, we can compute the probability of any particular alignment. we can then reestimate the parameters by weighing each possible alignment according to its probability as determined by the initial guess of the parameters. repeated iterations of this process lead to parameters that assign ever greater probability to the set of sentence pairs that we actually observe. this algorithm leads to a local maximum of the probability of the observed pairs as a function of the parameters of the model. there may be many such local maxima. the particular one at which we arrive will, in general, depend on the initial choice of parameters. in our first experiment, we test our ability to estimate parameters for the translation model. we chose as our english vocabulary the 9000 most common words in the english part of the hansard data, and as our french vocabulary the 9000 most common french words. for the purposes of this experiment, we replaced all other words with either the unknown english word or the unknown french word, as appropriate. we applied the iterative algorithm discussed above in order to estimate some 81 million parameters from 40000 pairs of sentences comprising a total of about 800000 words in each language. the algorithm requires an initial guess of the parameters. we
assumed that each of the 9000 french words was equally probable as a translation of any of the 9000 english words we assumed that each of the fertilities from 0 to 25 was equally probable for each of the 9000 english words and finally we assumed that each target position was equally probable given each source position and target lengththus our initial choices contained very little information about either french or englishfigure 4 shows the translation and fertility probabilities we estimated for the english word thewe see that according to the model the translates most frequently into the french articles le and lathis is not surprising of course but we emphasize that it is determined completely automatically by the estimation processin some sense this correspondence is inherent in the sentence pairs themselvesfigure 5 shows these probabilities for the english word notas expected the french word pas appears as a highly probable translationalso the fertility probabilities indicate that not translates most often into two french words a situation consistent with the fact that negative french sentences contain the auxiliary word ne in addition to a primary negative word such as pas or rienfor both of these words we could easily have discovered the same information from a dictionaryin figure 6 we see the trained parameters for the english word hearas we would expect various forms of the french word entendre appear as possible translations but the most probable translation is the french word bravowhen we look at the fertilities here we see that the probability is about equally divided between fertility 0 and fertility 1the reason for this is that the english speaking members of parliament express their approval by shouting hear hear while the french speaking ones say bravothe translation model has learned that usually two hears produce one bravo by having one of them produce the bravo and the other produce nothinga given pair of sentences has many possible alignments since each target word can be aligned with any source worda translation model will assign significant probability only to some of the possible alignments and we can gain further insight about the model by examining the alignments that it considers most probablewe show one such alignment in figure 3observe that quite reasonably not is aligned with ne and pas while implemented is aligned with the phrase mises en applicationwe can also see here a deficiency of the model since intuitively we feel that will and be act in concert to produce seront while the model aligns will with seront but aligns be with nothingin our second experiment we used the statistical approach to translate from french to englishto have a manageable task we limited the english vocabulary to the 1000 most frequently used words in the english part of the hansard corpuswe chose the french vocabulary to be the 1700 most frequently used french words in translations of sentences that were completely covered by the 1000word english vocabularywe estimated the 17 million parameters of the translation model from 117000 pairs of sentences that were completely covered by both our french and english vocabularieswe estimated the parameters of the bigram language model from 570000 sentences from the english part of the hansard datathese sentences contain about 12 million words altogether and are not restricted to sentences completely covered by our vocabularywe used our search procedure to decode 73 new french sentences from elsewhere in the hansard datawe assigned each of the 
resulting sentences a category according to the following criteriaif the decoded sentence was exactly the same as the actual hansard translation we assigned the sentence to the exact categoryif it conveyed the same meaning as the hansard translation but in slightly different words we assigned it to the alternate categoryif the decoded sentence was a legitimate translation of the french sentence but did not convey the same meaning as the hansard translation we assigned it to the different categoryif it made sense as an english sentence but could not be interpreted as a translation of the french sentence we assigned it to the wrong categoryfinally if the decoded sentence was grammatically deficient we assigned it to the ungrammatical categoryan example from each category is shown in figure 7 and our decoding results are summarized in figure 8only 5 of the sentences fell into the exact categoryhowever we feel that a decoded sentence that is in any of the first three categories represents a reasonable translationby this criterion the system performed successfully 48 of the timeas an alternate measure of the system performance one of us corrected each of the sentences in the last three categories to either the exact or the alternate categorycounting one stroke for each letter that must be deleted and one stroke for each letter that must be inserted 776 strokes were needed to repair all of the decoded sentencesthis compares with the 1916 strokes required to generate all of the hansard translations from scratchthus to the extent that translation time can be equated with key strokes the system reduces the work by about 60there are many ways in which the simple models described in this paper can be improvedwe expect some improvement from estimating the parameters on more datafor the experiments described above we estimated the parameters of the models from only a small fraction of the data we have available for the translation model we used only about one percent of our data and for the language model only about ten percentwe have serious problems in sentences in which the translation of certain source words depends on the translation of other source wordsfor example the translation model produces aller from to go by producing aller from go and nothing from tointuitively we feel that to go functions as a unit to produce allerwhile our model allows many target words to come from the same source word it does not allow several source words to work together to produce a single target wordin the future we hope to address the problem of identifying groups of words in the source language that function as a unit in translationthis may take the form of a probabilistic division of the source sentence into groups of wordsat present we assume in our translation model that words are placed into the target sentence independently of one anotherclearly a more realistic assumption must account for the fact that words form phrases in the target sentence that are translations of phrases in the source sentence and that the target words in these phrases will tend to stay together even if the phrase itself is moved aroundwe are working on a model in which the positions of the target words produced by a particular source word depend on the identity of the source word and on the positions of the target words produced by the previous source wordwe are preparing a trigram language model that we hope will substantially improve the performance of the systema useful informationtheoretic measure of the complexity of a language 
with respect to a model is the perplexity, as defined by bahl et al. with the bigram model that we are currently using, the source text for our 1000-word translation task has a perplexity of about 78. with the trigram model that we are preparing, the perplexity of the source text is about 9. in addition to showing the strength of a trigram model relative to a bigram model, this also indicates that the 1000-word task is very simple. we treat words as unanalyzed wholes, recognizing no connection, for example, between va, vais, and vont, or between tall, taller, and tallest. as a result, we cannot improve our statistical characterization of va, say, by observation of sentences involving vont. we are working on morphologies for french and english so that we can profit from statistical regularities that our current word-based approach must overlook. finally, we treat the sentence as a structureless sequence of words. sharman et al. discuss a method for deriving a probabilistic phrase structure grammar automatically from a sample of parsed sentences. we hope to apply their method to construct grammars for both french and english and to base future translation models on the grammatical constructs thus defined.
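for reference, perplexity figures like the 78 and 9 quoted above can be computed from any conditional word model as two raised to the average negative log (base 2) probability per word. the helper below assumes that convention and a caller-supplied prob function that never returns zero; details of the bahl et al. definition (for example, treatment of sentence boundaries) may differ:

```python
import math

def perplexity(sentences, prob):
    """perplexity of a model over tokenized sentences, where prob(w, history)
    returns the model's probability of word w given the preceding words."""
    log_sum, n_words = 0.0, 0
    for words in sentences:
        for i, w in enumerate(words):
            log_sum += math.log2(prob(w, words[:i]))  # assumes prob > 0
            n_words += 1
    return 2.0 ** (-log_sum / n_words)
```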
J90-2002
a statistical approach to machine translationin this paper we present a statistical approach to machine translationwe describe the application of our approach to translation from french to english and give preliminary resultswe estimate parameters for a model of wordtoword correspondences and word reorderings directly from large corpora of parallel bilingual text
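the iterative estimation procedure described in section 5 of the paper above can be illustrated, in drastically simplified form, by the sketch below: it reestimates only word-translation probabilities pr(f|e) from unaligned sentence pairs, weighting each possible word-level connection by its probability under the current parameters. fertility and distortion parameters, the null word, and the efficiency concerns of the real system are all omitted, so this is an illustration of the idea rather than the authors' algorithm:

```python
from collections import defaultdict

def estimate_translation_probs(sentence_pairs, iterations=5):
    """iteratively reestimate pr(f|e) from unaligned (english, french) sentence
    pairs, where each sentence is a list of words."""
    t = defaultdict(lambda: 1.0)                      # uniform initial guess
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for english, french in sentence_pairs:
            for f in french:
                z = sum(t[(f, e)] for e in english)   # normalizer for this f
                for e in english:
                    w = t[(f, e)] / z                 # expected count of (f, e)
                    count[(f, e)] += w
                    total[e] += w
        t = defaultdict(float, {(f, e): count[(f, e)] / total[e] for (f, e) in count})
    return t

# usage: estimate_translation_probs([(["the", "dog"], ["le", "chien"]), ...])
```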
lexical cohesion computed by thesaural relations as an indicator of the structure of text (appendix a, lexical chains for the sample text, continued. chain 5: virgin 31, pine 31, bush 31, trees 32, trunks 32, trees 33. chain 6: handinhand 34, matching 34, whispering 35, laughing 35, warm 38. chain 7: first 1, initial 1, final 2. chain 8: night 2, dusk 3, darkness 3. chain 9: environment 7, setting 7, surrounding 8. the number following each word is the sentence in which it occurs.) in text, lexical cohesion is the result of chains of related words that contribute to the continuity of lexical meaning. these lexical chains are a direct result of units of text being "about the same thing," and finding text structure involves finding units of text that are about the same thing. hence computing the chains is useful, since they will have a correspondence to the structure of the text. determining the structure of text is an essential step in determining the deep meaning of the text. in this paper, a thesaurus is used as the major knowledge base for computing lexical chains. correspondences between lexical chains and structural elements are shown to exist. since the lexical chains are computable, and exist in nondomainspecific text, they provide a valuable indicator of text structure. the lexical chains also provide a semantic context for interpreting words, concepts, and sentences. a text or discourse is not just a set of sentences, each on some random topic. rather, the sentences and phrases of any sensible text will each tend to be about the same things; that is, the text will have a quality of unity. this is the property of cohesion: the sentences "stick together" to function as a whole. cohesion is achieved through backreference, conjunction, and semantic word relations. cohesion is not a guarantee of unity in text, but rather a device for creating it. as aptly stated by halliday and hasan, it is a way of getting text to "hang together as a whole." their work on cohesion has underscored its importance as an indicator of text unity. lexical cohesion is the cohesion that arises from semantic relationships between words. all that is required is that there be some recognizable relation between the words. halliday and hasan have provided a classification of lexical cohesion based on the type of dependency relationship that exists between words. there are five basic classes. examples 1, 2, and 3 fall into the class of reiteration. note that reiteration includes not only identity of reference, or repetition of the same word, but also the use of superordinates, subordinates, and synonyms. examples 4 and 5 fall into the class of collocation, that is, semantic relationships between words that often cooccur. they can be further divided into two categories of relationship: systematic semantic and nonsystematic semantic. systematic semantic relationships can be classified in a fairly straightforward way. this type of relation includes antonyms, members of an ordered set such as one, two, three, members of an unordered set such as white, black, red, and part-to-whole relationships like eyes, mouth, face. example 5 is an illustration of collocation where the word relationship garden, digging is nonsystematic. this type of relationship is the most problematic, especially from a knowledge representation point of view. such collocation relationships exist between words that tend to occur in similar lexical environments. words tend to occur in similar lexical environments because they describe things that tend to occur in
similar situations or contexts in the worldhence contextspecific examples such as post office service stamps pay leave are included in the class who analyzed the patterns of lexical cohesion specific to the context of service encountersanother example of this type is car lights turning taken from example 14 in section 42these words are related in the situation of driving a car but taken out of that situation they are not related in a systematic wayalso contained in the class of collocation are word associationsexamples from postman and keppel are priest church citizen yousa and whistle stopagain the exact relationship between these words can be hard to classify but there does exist a recognizable relationshipoften lexical cohesion occurs not simply between pairs of words but over a succession of a number of nearby related words spanning a topical unit of the textthese sequences of related words will be called lexical chainsthere is a distance relation between each word in the chain and the words cooccur within a given spanlexical chains do not stop at sentence boundariesthey can connect a pair of adjacent words or range over an entire textlexical chains tend to delineate portions of text that have a strong unity of meaningconsider this example example 6 in front of me lay a virgin crescent cut out of pine busha dozen houses were going up in various stages of construction surrounded by hummocks of dry earth and stands of precariously tall trees nude halfway up their trunksthey were the kind of trees you might see in the mountainsa lexical chain spanning these three sentences is virgin pine bush trees trunks treessection 3 will explain how such chains are formedsection 4 is an analysis of the correspondence between lexical chains and the structure of the textthere are two major reasons why lexical cohesion is important for computational text understanding systems 121 word interpretation in contextword meanings do not exist in isolationeach word must be interpreted in its contextfor example in the context gin alcohol sober drinks the meaning of the noun drinks is narrowed down to alcoholic drinksin the context hair curl comb wave wave means a hair wave not a water wave a physics wave or a friendly hand wavein these examples lexical chains can be used as a contextual aid to interpreting word meaningsin earlier work hirst used a system called quotpolaroid wordsquot to provide for intrasentential lexical disambiguationpolaroid words relied on a variety of cues including syntax selectional restrictions case frames and most relevant here a notion of semantic distance or relatedness to other words in the sentences a sense that had such a relationship was preferred over one that did notrelationships were determined by marker passing along the arcs in a knowledge basethe intuition was that semantically related concepts will be physically close in the knowledge base and can thus be found by traversing the arcs for a limited distancebut polaroid words looked only for possible relatedness between words in the same sentence trying to find connections with all the words in preceding sentences was too complicated and too likely to be led astraythe idea of lexical chains however can address this weakness in polaroid words lexical chains provide a constrained easytodetermine representation of context for consideration of semantic distance122 cohesion and discourse structurethe second major importance of lexical chains is that they provide a clue for the determination of coherence and discourse structurewhen 
a chunk of text forms a unit within a discourse there is a tendency for related words to be usedit follows that if lexical chains can be determined they will tend to indicate the structure of the textwe will describe the application of lexical cohesion to the determination of the discourse structure that was proposed by grosz and sidner grosz and sidner propose a structure common to all discourse which could be used along with a structurally dependent focus of attention to delineate and constrain referring expressionsin this theory there are three interacting components linguistic structure intentional structure and attentional statelinguistic structure is the segmentation of discourse into groups of sentences each fulfilling a distinct role in the discourseboundaries of segments can be fuzzy but some factors aiding in their determination are clue words changes in intonation and changes in aspect and tensewhen found these segments indicate changes in the topics or ideas being discussed and hence will have an effect on potential referentsthe second major component of the theory is the intentional structureit is based on the idea that people have definite purposes for engaging in discoursethere is an overall discourse purpose and also a discourse segment purpose for each of the segments in the linguistic structure described aboveeach segment purpose specifies how the segment contributes to the overall discourse purposethere are two structural relationships between these segmentsthe first is called a dominance relation which occurs when the satisfaction of one segment intention contributes to the satisfaction of another segment intentionthe second relation is called satisfaction precedence which occurs when the satisfaction of one discourse segment purpose must occur before the satisfaction of another discourse segment purpose can occurthe third component of this theory is the attentional statethis is a stackbased model of the set of things that attention is focused on at any given point in the discourseit is quotparasiticquot on the intentional and linguistic structures since for each discourse segment there exists a separate focus spacethe dominance relations and satisfaction precedence relations determine the pushes and pops of this stack spacewhen a discourse segment purpose contributes to a discourse segment purpose of the immediately preceding discourse segment the new focus space is pushed onto the stackif the new discourse segment purpose contributes to a discourse segment purpose earlier in the discourse focus spaces are popped off the stack until the discourse segment that the new one contributes to is on the top of the stackit is crucial to this theory that the linguistic segments be identified and as stated by grosz and sidner this is a problem areathis paper will show that lexical chains are a good indication of the linguistic segmentationwhen a lexical chain ends there is a tendency for a linguistic segment to end as the lexical chains tend to indicate the topicality of segmentsif a new lexical chain begins this is an indication or clue that a new segment has begunif an old chain is referred to again it is a strong indication that a previous segment is being returned towe will demonstrate this in section 4the theory of coherence relations will now be considered in relation to cohesionthere has been some confusion as to the differences between the phenomena of cohesion and coherence eg reichman there is a danger of lumping the two together and losing the distinct contributions of 
each to the understanding of the unity of textultimately the difference between cohesion and coherence is this cohesion is a term for sticking together it means that the text all hangs togethercoherence is a term for making sense it means that there is sense in the texthence the term coherence relations refers to the relations between sentences that contribute to their making sensecohesion and coherence relations may be distinguished in the following waya coherence relation is a relation among clauses or sentences such as elaboration support because or exemplificationthere have been various attempts to classify all possible coherence relations but there is as yet no widespread agreementthere does not exist a general computationally feasible mechanism for identifying coherence relationsin contrast cohesion relations are relations among elements in a text reference ellipsis substitution conjunction and lexical cohesionsince cohesion is well defined one might expect that it would be computationally easier to identify because the identification of ellipsis reference substitution conjunction and lexical cohesion is a straightforward task for peoplewe will show below that lexical cohesion is computationally feasible to identifyin contrast the identification of a specific coherence relation from a given set is not a straightforward task even for peopleconsider this example from hobbs hobbs identifies the coherence relation as elaborationbut it could just as easily be explanationthis distinction depends on context knowledge and beliefsfor example if you questioned john ability to open bill safe you would probably identify the relation as explanationotherwise you could identify it as elaborationhere is another example the coherence relation here could be elaboration or explanation or because the point is that the identity of coherence relations is quotinterpretativequot whereas the identity of cohesion relations is notat a general level even if the precise coherence relation is not known the relation quotis about the same thingquot exists if coherence existsin the example from hobbs above safe and combination are lexically related which in a general sense means they quotare about the same thing in some wayquot in example 8 bought and shopping are lexically related as are raincoat and rainedthis shows how cohesion can be useful in identifying sentences that are coherently relatedcohesion and coherence are independent in that cohesion can exist in sentences that are not related coherently wash and core six applesuse them to cut out the material for your new suitthey tend to add a lot to the color and texture of clothingactually maybe you should use five of them instead of six since they are quite largei came home from work at 600 pm dinner consisted of two chicken breasts and a bowl of riceof course most sentences that relate coherently do exhibit cohesion as well halliday and hasan give two examples of lexical cohesion involving identity of reference example 11 reichman writes quotit is not the use of a pronoun that gives cohesion to the washandcoreapples textthese utterances form a coherent piece of text not because the pronoun them is used but because they jointly describe a set of cooking instructionsquot this is an example of lumping cohesion and coherence together as one phenomenonpronominal reference is defined as a type of cohesion therefore the them in example 11 is an instance of itthe important point is that both cohesion and coherence are distinct phenomena creating unity in textreichman 
also writes: "that similar words appear in a given stretch of discourse is an artifact of the content of discussion." it follows that if content is related in a stretch of discourse, there will be coherence. lexical cohesion is a computationally feasible clue to identifying a coherent stretch of text. in example 12 it is computationally trivial to get the word relationship between apples and apples, and this relation fits the definition of lexical cohesion. surely this simple indicator of coherence is useful, since, as stated above, there does not exist a computationally feasible method of identifying coherence in nondomainspecific text. cohesion is a useful indicator of coherence regardless of whether it is used intentionally by writers to create coherence or is a result of the coherence of text. hobbs sees the resolution of coreference as being subsumed by the identification of coherence. he uses a formal definition of coherence relations, an extensive knowledge base of assertions and properties of objects and actions, and a mechanism that searches this knowledge source and makes simple inferences. also, certain elements must be assumed to be coreferential. he shows how, in the example, an assumption of coherence allows the combination to be identified as the combination of bill's safe, and john and he to be found to be coreferential. but lexical cohesion would also indicate that safe and combination can be assumed to be coreferential. and, more importantly, one should not be misled by chicken-and-egg questions when dealing with cohesion and coherence; rather, one should use each where applicable. since the lexical cohesion between combination and safe is easy to compute, we argue that it makes sense to use this information as an indicator of coherence. the thesaurus was conceived by peter mark roget, who described it as being the "converse" of a dictionary: a dictionary explains the meaning of words, whereas a thesaurus aids in finding the words that best express an idea or meaning. in section 3 we will show how a thesaurus can be used to find lexical chains in text. roget's international thesaurus, 4th edition, is composed of 1042 sequentially numbered basic categories. there is a hierarchical structure both above and below this level. three structure levels are above the category level. the topmost level consists of eight major classes developed by roget in 1852: abstract relations, space, physics, matter, sensation, intellect, volition, and affection. each class is divided into subclasses, and under each subclass there is a subsubclass. these in turn are divided into the basic categories. where applicable, categories are organized into antonym pairs; for example, category 407 is life and category 408 is death. each category contains a series of numbered paragraphs to group closely related words. within each paragraph, still finer groups are marked by semicolons. in addition, a semicolon group may have crossreferences, or pointers, to other related categories or paragraphs. a paragraph contains words of only one syntactic category. the noun paragraphs are grouped at the start of a category, followed by the paragraphs for verbs, adjectives, and so on. (figure 1 shows the structure of roget's thesaurus; figure 2 shows the index entry for the word lid.) the thesaurus has an index which allows for retrieval of words related to a given one. for each entry, a list of words suggesting its various distinct subsenses is given, and a category or paragraph number for each of these. figure 2 shows the index entry for lid. to find words related to lid in its sense of cover, one would turn to paragraph 5 of category 228. an index entry may
be a pointer to a category or paragraph if there are no subsenses to be distinguishedin the structure of traditional artificial intelligence knowledge bases such as frames or semantic networks words or ideas that are related are actually quotphysically closequot in the representationin a thesaurus this need not be truephysical closeness has some importance as can be seen clearly from the hierarchy but words in the index of the thesaurus often have widely scattered categories and each category often points to a widely scattered selection of categoriesthe thesaurus simply groups words by ideait does not have to name or classify the idea or relationshipin traditional knowledge bases the relationships must be namedfor example in a semantic net a relationship might be isa or colorof and in a frame database there might be a slot for color or locationin section 1 different types of word relationships were discussed systematic semantic nonsystematic semantic word association and words related by a common situationa factor common to all but situational relationships is that there is a strong tendency for the word relationships to be captured in the thesaurusthis holds even for the nonsystematic semantic relations which are the most problematic by definitiona thesaurus simply groups related words without attempting to explicitly name each relationshipin a traditional computer database a systematic semantic relationship can be represented by a slot value for a frame or by a named link in a semantic networkif it is hard to classify a relationship in a systematic semantic way it will be hard to represent the relationship in a traditional frame or semantic network formalismof the 16 nonsystematic semantic lexical chains given as examples in halliday and hasan 14 were found in roget thesaurus using the relations given in section 322this represents an 87 hit rate word associations show a strong tendency to be findable in a thesaurusof the 16 word association pairs given in hirst 14 were found in roget thesaurus since two of the word senses were not contained in the thesaurus at all this represents a 100 hit rate among those that weresituational word relationships are not as likely to be found in a general thesaurusan example of a situational relationship is between car and lights where the two words are clearly related in the situation involving a car lights but the relationship will not be found between them in a general thesauruswe now describe a method of building lexical chains for use as an aid in determining the structure of textthis section details how these lexical chains are formed using a thesaurus as the main knowledge basethe method is intended to be useful for text in any general domainunlike methods that depend on a full understanding of text our method is the basis of a computationally feasible approach to determining discourse structurewe developed our method in the following wayfirst we took five texts totaling 183 sentences from generalinterest magazines using our intuition we identified the lexical chains in each textwe then formalized our intuitions into an algorithm using our experience with the texts to set values for the following parameters the aim was to find efficient plausible methods that will cover enough cases to ensure the production of meaningful results the text are candidates for inclusion in chainsas pointed out by halliday and hasan repetitive occurrences of closedclass words such as pronouns prepositions and verbal auxiliaries are obviously not consideredalso 
highfrequency words like good do and taking do not normally enter into lexical chains for example in only the italicized words should be considered as lexical chain candidates my maternal grandfather lived to be 111zayde was lucid to the end but a few years before he died the family assigned me the task of talking to him about his problem with alcoholit should be noted that morphological analysis on candidate words was done intuitively and would actually have to be formally implemented in an automated system322 building chainsonce the candidate words are chosen the lexical chains can be formedfor this work an abridged version of roget thesaurus was usedthe chains were built by handautomation was not possible for lack of a machinereadable copy of the thesaurusgiven a copy implementation would clearly be straightforwardit is expected that research with an automated system and a large sample space of text would give valuable information on the finetuning of the parameter settings used in the general algorithmfive types of thesaural relations between words were found to be necessary in forming chains but two are by far the most prevalent constituting over 90 of the lexical relationshipsthe relationships are the following has a pointer to category 830terrified has category 860 that likewise has a pointer to category 830 one must consider how much transitivity to use when computing lexical chainsspecifically if word a is related to word b word b is related to word c and word c is related to word d then is word a related to words c and dconsider this chain cow sheep wool scarf boots hat snow if unlimited transitivity were allowed then cow and snow would be considered related which is definitely counter intuitiveour intuition was to allow one transitive link word a is related to word c but not to word d it seemed that two or more transitive links would so severely weaken the word relationship as to cause it to be nonintuitiveour analysis of our sample texts supported thisto summarize a transitivity of one link is sufficient to successfully compute the intuitive chainsan automated system could be used to test this out extensively varying the number of transitive links and calculating the consequencesit is likely that it varies slightly with respect to style author or type of textthere are two ways in which a transitive relation involving one link can cause two words to be relatedin the first way if word a is related to word b and word b is related to word c then word a is related to word c in the second way if word a is related to word b and word a is related to word c then word b is related to word c but lexical chains are calculated only with respect to the text read so farfor example if word c is related to word a and to word b then word a and word b are not related since at the time of processing they were not relatablesymmetry was not found to be necessary for computing the lexical chainswe now consider how many sentences can separate two words in a lexical chain before the words should be considered unrelatednow sometimes several sentences after a chain has clearly stopped it is returned tosuch chain returns link together larger expanses of text than are contained in single chains or chain segmentsreturns to existing chains often correspond to intentional boundaries as they occur after digressions or subintentions thereby signalling a resumption of some structural text entityintuitively the distance between words in a chain is a factor in chain formationthe distance will not be quotlargequot 
because words in a chain correlate due to recognizable relations, and large distances would interfere with the recognition of relations. the five texts were analyzed with respect to distance between clearly related words. the analysis showed that there can be up to two or three intermediary sentences between a word and the preceding element of a chain segment with which it can be linked. at distances of four or more intermediary sentences, the word is only able to signal a return to an existing chain. returns happened after between 4 and 19 intermediary sentences in the sample texts. one significant fact emerged from this analysis: returns consisting of one word only were always made with a repetition of one of the words in the returned-to chain. returns consisting of more than one word did not necessarily use repetition; in fact, in most cases the first word in the return was not a repetition. the question of chain returns and when they can occur requires further research. when distances between relatable words are not tightly bound, the chances of incorrect chain linkages increase. it is anticipated that chain return analysis would become integrated with other text processing tools in order to prevent this. also, we believe that chain strength analysis will be required for this purpose. intuitively, some lexical chains are "stronger" than others, and possibly only strong chains can be returned to. there are three factors contributing to chain strength. ideally, some combination of values reflecting these three factors should result in a chain strength value that can be useful in determining whether a chain is strong enough to be returned to. also, a strong chain should be more likely to have a structural correspondence than a weak one. it seems likely that chains could contain particularly strong portions with special implications for structure. these issues will not be addressed here. 323 notation and data structures. in the computation of lexical chains, the following information is kept for each word in a chain (in the notation, t stands for transitively related, and q is the word number through which the transitive relation is formed). a full example of this notation is shown in figure 4. figure 5 shows the generalized algorithm for computing lexical chains. the parameter values that we used are shown for the following; the only parameter not addressed in this work is which chains should be eliminated from the chain-finding process.
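the chain-building procedure just described (candidate selection, thesaural relatedness, a window of at most three intermediary sentences) can be sketched as follows. related(w1, w2) is a caller-supplied stand-in for the thesaural tests of section 322, the window value follows the setting reported above, and chain returns, chain strength, and the explicit one-transitive-link test are left out, so this is an approximation of the generalized algorithm of figure 5, not a reimplementation of it:

```python
def build_chains(candidate_words, related, max_gap=3):
    """candidate_words: (word, sentence_number) pairs in text order.
    related(w1, w2): caller-supplied thesaural relation test.
    a word is appended to the most recently active chain containing a
    directly related word no more than max_gap intermediary sentences back;
    otherwise it starts a new chain."""
    chains = []
    for word, sent in candidate_words:
        home = None
        for chain in reversed(chains):                       # prefer recent chains
            close = [w for (w, s) in chain if sent - s <= max_gap + 1]
            if any(related(word, w) for w in close):
                home = chain
                break
        if home is not None:
            home.append((word, sent))
        else:
            chains.append([(word, sent)])
    return chains
```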
this section is a discussion of problems encountered during the computation of the lexical chains contained in our corpus of texts. the text example used in this paper is in section 42, and the chains found in the example are in appendix a. the thesaurus captured well over 90 percent of the intuitive lexical relations in the five examples we studied. the following is an analysis of when the thesaurus failed to find a relationship, and why. one problem was when the relationship between words was due more to their "feel" than their meaning. for example, in chain 6 the intuitive chain handinhand, matching, whispering, laughing, warm was not entirely computable; only the italicized words were relatable. the words in chain 6 are cohesive by virtue of being general but strong "good" words, related by their goodness rather than by their specific meanings. chain 10, environment, setting, surrounding, was not thesaurally relatable: setting was not in the thesaurus, and while it seems as though environment and surrounding should be thesaurally connected, they were not. place names, street names, and people's names are generally not to be found in roget's thesaurus; however, they are certainly contained in one's "mental thesaurus." chain 1, which contains several major toronto street names, is a good example of this: these names were certainly related to the rest of chain 1 in the authors' mental thesaurus, since we are residents of toronto. in chain 5, the thesaurus did not connect the words pine and trunk with the rest of the chain virgin, bush, trees, trees; in a general thesaurus, specific information on and classification of plants, animals, minerals, etc. is not available. to summarize, there were few cases in which the thesaurus failed to confirm an intuitive lexical chain. for those cases in which the thesaurus did fail, three missing knowledge sources became apparent. a further problem arose when the algorithm caused two chains to merge together, whereas intuition would lead one to keep them separate. we found the following intuitively separate chain beginning in sentence 38: people, metropolitan toronto, people, urban population, people, population, population, people. however, the algorithm linked this chain with chain 1, which runs through the entire example and consists of these words and others: city, suburbs, traffic, community. fortunately, this was a rare occurrence. but note that there will be cases in which lexical chains should be merged, as a result of the intentional merging of ideas or concepts in the text. conversely, there were a few cases of unfortunate chain returns occurring where they were definitely counterintuitive. in chain 3, word 4, wife, was taken as a one-word return to the chain married, wife, wife; however, there is no intuitive reason for this. this section describes how lexical chains formed by the algorithm given in section 323 can be used as a tool. any structural theory of text must be concerned with identifying units of text that are about the same thing. when a unit of text is about the same thing, there is a strong tendency for semantically related words to be used within that unit. by definition, lexical chains are chains of semantically related words; therefore, it makes sense to use them as clues to the structure of the text. this section will concentrate on analyzing correspondences between lexical chains and structural units of text. the text structure theory chosen for this analysis was that of grosz and sidner; it was chosen because it is an attempt at a general domain-independent theory of text structure that has gained a significant acceptance in the field as a good standard approach. the methodology we used in our analyses was as follows: 1. we formed the lexical chains of each text; 2. we determined the intentional structure of each text; 3. we compared the lexical structure formed in step 1 with the intentional structure formed in step 2 and looked for correspondences between them. example 14 shows one of the five texts that we analyzed. it is the first section of an article in toronto magazine, december 1987, by jay teitel, entitled "outland." the tables in appendix a show the lexical chains for the text. 42: in the same span of time, the three outlying regions stretching across the top of metro (peel, durham, and york) increased in population by 55 percent, from 814000 to some 1262000. 43: half a million people had poured into the crescent north of toronto in the space of a decade, during which time the population of the city of toronto actually declined, as did the populations of the "old" suburbs, with the exception of etobicoke and scarborough. 44: if the sprawling agglomeration of people known as toronto has boomed in the past 10 years, it has boomed outside the traditional city confines, in a totally new city, a new suburbia containing one and a quarter million people. in figure 6 we show the intentional structure of the text of section 42 and in
figure 7 we show the correspondences between the lexical chains and intentions of the examplethere is a clear correspondence between chain 1 driving car and intention 1 the continuity of the subject matter is reflected by the continuous lexical chainfrom sentence 40 to sentence 44 two words population and people are used repetitively in the chainpopulation is repeated three times and people is repeated five timesif chain strength were used to delineate quotstrongquot portions of a chain this strength information could also be used to indicate structural attributes of the textspecifically sentences 40 to 44 form intention 13 and hence a strong portion of the chain would correspond exactly to a structural unitin addition drive was repeated eight times between sentence 2 and sentence 26 corresponding to intention 11 suburb was repeated eleven times throughout the entire example indicating the continuity in structure between sentences 144chain 21 afflicted darkness from sentence 2 to sentence 12 corresponds to intentions 111 and 112 more textual information is needed to separate intentions 111 and 112there is a oneword return to chain 2 at sentences 16 and 24 strongly indicating that chain 2 corresponds to intention 11 which runs from sentence 1 to sentence 25also segment 22 coincides with the end of intention 1131 and segment 23 coincides with the end of intention 1133 this situation illustrates how chain returns help indicate the structure of the textif chain returns were not considered chain 2 would end at sentence 12 and the structural implications of the two singleword returns would be lostit is intuitive that the two words perverse and cruel indicate links back to the rest of intention 11the link provided by the last return cruel is especially strong since it occurs after the diversion describing the attempt to find a nice house in the suburbscruel is the third reiteration of the word in chain 2chain 3 married wife corresponds to intention 1131 and chain 4 conceded tolerance corresponds to intention 1132 the boundaries of chain 4 are two sentences inside the boundaries of the intentionthe existence of a lexical chain is a clue to the existence of a separate intention and boundaries within one or two sentences of the intention boundaries are considered to be close matcheschain 5 virgin pine corresponds closely to intention 122 chain 6 handinhand matching corresponds closely to intention 123 chains 7 first initial final and 8 night dusk darkness are a couple of short chains that overlapthey collectively correspond to intention 111 the fact that they are short and overlapping suggests that they could be taken together as a wholechain 9 environment setting surrounding corresponds to intention 112 even though the chain is a lot shorter in length than the intention its presence is a clue to the existence of a separate intention in its textual vicinitysince the lexical chain boundary is more than two sentences away from the intention boundary other textual information would be required to confirm the structureoverall the lexical chains found in this example provide a good clue for the determination of the intentional structurein some cases the chains correspond exactly to an intentionit should also be stressed however that the lexical structures cannot be used on their own to predict an exact structural partitioning of the textthis of course was never expectedas a good example of the limitations of the tool intention 12 starts in sentence 26 but there are no new lexical chains starting therethe 
only clue to the start of the new intention would be the ending of chain 2 afflicted darkness this example also provides a good illustration of the importance of chain returns being used to indicate a highlevel intention spanning the length of the entire chain also the returns coincided with intentional boundariesthe motivation behind this work was that lexical cohesion in text should correspond in some way to the structure of the textsince lexical cohesion is a result of a unit of text being in some recognizable semantic way about a single topic and text structure analysis involves finding the units of text that are about the same topic one should have something to say about the otherthis was found to be truethe lexical chains computed by the algorithm given in section 323 correspond closely to the intentional structure produced from the structural analysis method of grosz and sidner this is important since grosz and sidner give no method for computing the intentions or linguistic segments that make up the structure that they proposehence the concept of lexical cohesion defined originally by halliday and hasan and expanded in this work has a definite use in an automated text understanding systemlexical chains are shown to be almost entirely computable with the relations defined in section 322the computer implementation of this type of thesaurus access would be a straightforward task involving traditional database techniquesthe program to implement the algorithm given in section 323 would also be straightforwardhowever automated testing could help finetune the parameters and would help to indicate any unfortunate chain linkagesalthough straightforward from an engineering point of view the automation would require a significant efforta machinereadable thesaurus with automated index searching and lookup is requiredthe texts we have analyzed here and elsewhere are generalinterest articles taken from magazinesthey were chosen specifically to illustrate that lexical cohesion and hence this tool is not domainspecificthe methods used in this work improve on those from halliday and hasan halliday and hasan related words back to the first word to which they are tied rather than forming explicit lexical chains that include the relationships to intermediate words in the chainthey had no notions of transitivity distance between words in a chain or chain returnstheir intent was not a computational means of finding lexical chains and they did not suggest a thesaurus for this purposeventola analyzed lexical cohesion and text structure within the framework of systemic linguistics and the specific domain of service encounters such as the exchange of words between a client at a post office and a postal workerventola chainbuilding rule was that each lexical item is quottaken back once to the nearest preceding lexically cohesive item regardless of distancequot in our work the related words in a chain are seen as indicating structural units of text and hence distance between words is relevantventola did not have the concept of chain returns and transitivity was allowed up to any levelher research was specific to the domain usedshe does not discuss a computational method of determining the lexical chainshahn developed a text parsing system that considers lexical cohesionnouns in the text are mapped directly to the underlying model of the domain which was implemented as a framestructured knowledge basehahn viewed lexical cohesion as a local phenomenon between words in a sentence and the preceding onethere was 
also an extended recognizer that worked for cohesion contained within paragraph boundariesrecognizing lexical cohesion was a matter of searching for ways of relating frames and slots in the database that are activated by words in the textheavy reliance is put on the quotformally clear cut model of the underlying domainquot however generalinterest articles such as we analyzed do not have domains that can be a priori formally represented as frames with slot values in such a manner that lexical cohesion will correspond directly to themour work uses lexical cohesion as it naturally occurs in domainindependent text as an indicator of unity rather than fitting a domain model to the lexical cohesionhahn does not use the concept of chain returns or transitivitysedelow and sedelow have done a significant amount of research on the thesaurus as a knowledge source for use in a natural language understanding systemthey have been interested in the application of clustering patterns in the thesaurustheir student bryan proposed a graphtheoretic model of the thesaurusa boolean matrix is created with words on one axis and categories on the othera cell is marked as true if a word associated with a cell intersects with the category associated with a cellpaths or chains in this model are formed by traveling along rows or columns to other true cellssemantic quotneighborhoodsquot are grown consisting of the set of chains emanating from an entryit was found that without some concept of chain strength the semantic relatedness of these neighborhoods decays partly due to homographsstrong links are defined in terms of the degree of overlap between categories and wordsa strong link exists where at least two categories contain more than one word in common or at least two words contain more than one category in commonthe use of strong links was found to enable the growth of strong semantic chains with homograph disambiguationthis concept is different from that used in our workhere by virtue of words cooccurring in a text and then also containing at least one category in common or being in the same category they are considered lexically related and no further strength is neededwe use the thesaurus as a validator of lexical relations that are possible due to the semantic relations among words in a textit has already been mentioned that the concept of chain strength needs much further workthe intuition is that the stronger a chain the more likely it is to have a corresponding structural componentthe integration of this tool with other text understanding tools is an area that will require a lot of worklexical chains do not always correspond exactly to intentional structure and when they do not other textual information is needed to obtain the correct correspondencesin the example given there were cases where a lexical chain did correspond to an intention but the sentences spanned by the lexical chain and the intention differed by more than twoin these cases verification of the possible correspondence must be accomplished through the use of other textual information such as semantics or pragmaticscue words would be interesting to address since such information seems to be more computationally accessible than underlying intentionsit would be useful to automate this tool and run a large corpus of text through itwe suspect that the chainforming parameter settings will be shown to vary slightly according to author style and the type of textas it is impossible to do a complete and errorfree lexical analysis of large text examples 
in a limited timeframe, automation is desirable. it could help shed some light on possible unfortunate chain linkages: do they become problematic, and if so, when does this tend to happen? research into limiting unfortunate linkages and detecting when the method is likely to produce incorrect results should be done. analysis using different theories of text structure was not done, but could prove insightful. the independence of different people's intuitive chains and structure assignments was also not addressed by this paper. a practical limitation of this work is that it depends on a thesaurus as its knowledge base. a thesaurus is as good as the work that went into creating it, and also depends on the perceptions, experience, and knowledge of its creators. since language is not static, a thesaurus would have to be continually updated to remain current. furthermore, no one thesaurus exists that meets all needs; roget's thesaurus, for example, is a general thesaurus that does not contain lexical relations specific to the geography of africa or quantum mechanics. therefore, further work needs to be done on identifying other sources of word knowledge, such as domain-specific thesauri, dictionaries, and statistical word usage information, that should be integrated with this work. as an anonymous referee pointed out to us, volks and volkswagen were not included in the chain containing driving and car; these words were not in a general thesaurus and were also missed by the authors. section 1 mentioned that lexical chains would also be useful in providing a context for word sense disambiguation and in narrowing to specific word meanings. as an example of a chain providing useful information for word sense disambiguation, consider words 1 to 15 of chain 21 of the example: afflicted, darkness, panicky, mournful, exciting, deadly, hating, aversion, cruel, relentless, weird, eerie, cold, barren, sterile. in the context of all of these words, it is clear that barren and sterile do not refer to an inability to reproduce but to a cruel coldness. the use of lexical chains for ambiguity resolution is a promising area for further research. thanks to robin cohen, jerry hobbs, eduard hovy, ian lancashire, and anonymous referees for valuable discussions of the ideas in this paper. thanks to chrysanne dimarco, mark ryan, and john morris for commenting on earlier drafts. this work was financially assisted by the government of ontario, the department of computer science of the university of toronto, and the natural sciences and engineering research council of canada. we are grateful to jay teitel for allowing us to reprint text from his article "outland".
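the distance and return rules reported above are concrete enough to sketch in code. the fragment below is a minimal illustration only, not the authors' implementation: the thesaural test is a stub, a chain is just a list of word/sentence records, and the thresholds (ordinary links within three intervening sentences; at four or more, only a return via repetition of a chain word) are read directly from the analysis of the five sample texts.

```python
# a sketch of the chain-linking decision described in the analysis above.
# assumptions: thesaurally_related() is a stub for the real thesaural test;
# a chain is a list of {'word': ..., 'sentence': ...} records, newest last.

LINK_WINDOW = 3   # up to three intervening sentences for an ordinary link


def thesaurally_related(w1, w2):
    """placeholder for the thesaural lookup; repetition always relates."""
    return w1 == w2


def classify_attachment(word, sentence_no, chain):
    """return 'link', 'return', or None for a candidate word and a chain."""
    gap = sentence_no - chain[-1]['sentence'] - 1   # intervening sentences
    if gap <= LINK_WINDOW:
        return 'link' if thesaurally_related(word, chain[-1]['word']) else None
    # at four or more intervening sentences the word can only signal a return,
    # and one-word returns were always repetitions of a word already in the chain
    if any(word == entry['word'] for entry in chain):
        return 'return'
    return None
```

a fuller version would also record transitive links and the word through which each transitive relation is formed, as in the notation of section 323.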
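chain strength is left open above, so the following is only a guess at how "some combination of values" might look. the three factors used here (amount of repetition, density of the chain over the sentences it spans, and overall length) and the unweighted way they are combined are assumptions made for the sketch; they may not be the three factors the authors have in mind.

```python
# an illustrative chain-strength score over the same chain records as in the
# previous sketch; the factors and their combination are assumptions, meant
# only to show where such a score would plug into return decisions.

def chain_strength(chain):
    words = [entry['word'] for entry in chain]
    sentences = [entry['sentence'] for entry in chain]
    repetition = len(words) - len(set(words))    # repeated word tokens
    span = max(sentences) - min(sentences) + 1
    density = len(words) / span                  # chain words per sentence spanned
    return repetition + density + len(words)


STRONG_ENOUGH_TO_RETURN_TO = 5.0   # hypothetical threshold for allowing a return
```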
J91-1002
lexical cohesion computed by thesaural relations as an indicator of the structure of text. in text, lexical cohesion is the result of chains of related words that contribute to the continuity of lexical meaning. these lexical chains are a direct result of units of text being about the same thing, and finding text structure involves finding units of text that are about the same thing. hence, computing the chains is useful, since they will have a correspondence to the structure of the text. determining the structure of text is an essential step in determining the deep meaning of the text. in this paper, a thesaurus is used as the major knowledge base for computing lexical chains. correspondences between lexical chains and structural elements are shown to exist. since the lexical chains are computable and exist in non-domain-specific text, they provide a valuable indicator of text structure. the lexical chains also provide a semantic context for interpreting words, concepts, and sentences. we propose the idea of using lexical chains as indicators of lexical cohesion, and the concept of lexical chains as a way to explore the discourse structure of a text.
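as a companion to the summary above, here is a schematic rendering of the thesaural test that makes the chains computable: two words count as related if the index sends them to a common category, mirroring the category-in-common criterion discussed earlier. the toy index and the category names are invented for illustration and stand in for a machine-readable roget's-style index.

```python
# schematic thesaural-relation test over an invented index; words are
# related if they are repetitions or share at least one index category.

TOY_INDEX = {
    'car':     {'vehicle', 'transport'},
    'driving': {'transport', 'motion'},
    'wolf':    {'animal'},
}


def categories(word):
    return TOY_INDEX.get(word, set())


def thesaurally_related(w1, w2):
    return w1 == w2 or bool(categories(w1) & categories(w2))


assert thesaurally_related('car', 'driving')    # share the 'transport' category
assert not thesaurally_related('car', 'wolf')   # no category in common
```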
met a method for discriminating metonymy and metaphor by computer the met method distinguishes selected examples of metonymy from metaphor and from literalness and anomaly in short english sentences in the met method literalness is distinguished because it satisfies contextual constraints that the nonliteral others all violate metonymy is discriminated from metaphor and anomaly in a way that 1 supports lakoff and johnson view that in metonymy one entity stands for another whereas in metaphor one entity is viewed as another 2 permits chains of metonymies and 3 allows metonymies to cooccur with instances of either literalness metaphor or anomaly metaphor is distinguished from anomaly because the former contains a relevant analogy unlike the latter the met method is part of collative semantics a semantics for natural language processing and has been implemented in a computer program called meta5 some examples of meta5 analysis of metaphor and metonymy are given the met method is compared with approaches from artificial intelligence linguistics philosophy and psychology the met method distinguishes selected examples of metonymy from metaphor and from literalness and anomaly in short english sentencesin the met method literalness is distinguished because it satisfies contextual constraints that the nonliteral others all violatemetonymy is discriminated from metaphor and anomaly in a way that 1 supports lakoff and johnson view that in metonymy one entity stands for another whereas in metaphor one entity is viewed as another 2 permits chains of metonymies and 3 allows metonymies to cooccur with instances of either literalness metaphor or anomalymetaphor is distinguished from anomaly because the former contains a relevant analogy unlike the latterthe met method is part of collative semantics a semantics for natural language processing and has been implemented in a computer program called meta5some examples of meta5 analysis of metaphor and metonymy are giventhe met method is compared with approaches from artificial intelligence linguistics philosophy and psychologymetaphor and metonymy are kinds of figurative language or tropesother tropes include simile irony understatement and overstatement quotmy car drinks gasolinequot quotthe ham sandwich is waiting for his checkquot sentences and contain examples of metaphor and metonymy respectivelyneither sentence is literally true cars do not literally drink nor do ham sandwiches literally waitnotice though that the two sentences are interpreted differentlyquotmy carquot in is commonly understood as resembling an animate drinker while in quotthe ham sandwichquot is generally interpreted as referring to the person who ordered the ham sandwichmost of the considerable literature on metaphor and the smaller one on metonymy is from philosophy linguistics and psychologyon the whole the two phenomena remain vague poorly defined notions in that literaturein artificial intelligence detailed treatments of either metaphor or metonymy are relatively scarcemoreover most of those treatments are paper implementations that have not been coded up and run on a computerthe met method provides a means for recognizing selected examples of metonymy and metaphor and also anomaly and literalness in short english sentencesthe method is part of collative semantics which is a semantics for natural language processingcs and hence the met method has been implemented in a program called meta5 the meta5 program is as far as i know the first system to recognize examples of metaphor and 
metonymyto my knowledge there is only one other working program that might be said to recognize instances of metaphor and two systems that appear to recognize cases of metonymy team and tacitus the rest of the paper is organized as followssection 2 surveys general issues and approaches in metaphor and metonymy notably the distinctive characteristics of metaphor and metonymy the relationship between metaphor and metonymy and the relationship between literalness and nonliteralnesssection 3 presents the met method concentrating on the basic topology of the met method algorithmsection 4 shows details of representations and processes used in cssection 5 gives examples of the meta5 program analyzing simple metaphors and metonymiesdescriptions get progressively more detailed from section 2 through to section 5sections 6 and 7 describe some extensions to metaphor interpretation in cs and compare the met method against other approaches to metaphor and metonymy especially computational onesa glossary of key terms is provided at the very end of the papermetonymy and metaphor are so poorly understood that widely divergent views exist about them and their relationship to each otherthis section reviews research on metaphor metonymy the relationship between them and the more general relationship between literalness and nonliteralness four views of metaphor are critically discussed the comparison view the interactive view the selection restriction violation view and the conventional metaphor viewcomputational examples of each kind are included by gentner indurkhya hobbs wilks and martinspace does not permit discussion of other at work on metaphor by eg russell and weiner a metaphor is a comparison in which one term is asserted to bear a partial resemblance to something else the resemblance being insufficient to sustain a literal comparisonas with any comparison there is always some residual dissimilarity between the terms involved in the comparison but comparison theorists tend not to emphasize this dissimilarity what is crucial in the comparison approach then is finding the correct ground in a metaphoraccording to tourangeau and sternberg aristotle proposed the first comparison theory and suggested several principles for finding the ground of a metaphortourangeau and sternberg reduce these principles to two basic ones finding a category to which the tenor and vehicle belong and constructing an analogy involving themgentner structuremapping theory which has been implemented in the structuremapping engine closely resembles a comparison view of metaphorthe theory addresses literal similarity analogy abstraction and anomaly which gentner refers to as four quotkinds of comparisonquot an algorithm compares the semantic information from two concepts represented as sets of propertiesproperties are either quotattributesquot oneplace predicates like large or quotrelationsquot twoplace predicates such as collidethe four kinds of comparison are distinguished by the relative proportions of attributes and relations that are matched and the forms of mappings established between themmappings between relations are sought before those between attributespairs of relations are compared using the quotsystematicity principlequot that regular structural correspondences should exist between terms occupying the same positions in those relationsmappings are purely structural and independent of the content of the relations tourangeau and sternberg list some problems with the comparison view including the following that everything 
has some feature or category that it shares with everything else but we cannot combine just any two things in metaphor that the most obvious shared features are often irrelevant to a reading of the metaphor that even when the feature is relevant it is often shared only metaphorically and that metaphors are novel and surprising is hard to reconcile with the idea that they rely completely on extant similarities johnson also notes problem with comparison theories pointing out that as a result they cannot account for the semantic tension between the two terms of a metaphor the comparison theory tries to circumvent the experienced semantic strain by interpreting metaphor as nothing but a way of comparing two things to see in what respects they are alikeand since any two things are similar in some respects this kind of theory can never explain what is interesting and important about metaphor novelty that metaphors createaccording to tourangeau and sternberg proponents of the interaction view include black hesse miles richards and wheelwright interaction theorists argue that the vehicle of a metaphor is a template for seeing the tenor in a new waythis reorganization of the tenor is necessary because the characteristics or features of the vehicle cannot be applied directly to the tenor the features they hare are often only shared metaphoricallyas black observes the ground of a metaphor may itself be nonliteralmen are wolves in black example in part because both are predators but they are predators in sharply different senses that may only strike us as similar when we interpret the metaphorin black reading of this metaphor we see competition in social relations as corresponding to predacity in beasts a problem with the interaction view is that theorists have not provided much detail about the processes involved though black does make some suggestionsaccording to black tenor and vehicle each have a ystem of commonplaces associated with themthese commonplaces are stereotypes not necessarily definitional not even necessarily true just widely agreed uponin interpreting man is a wolf we evoke the wolfsystem of related commonplaces and are led by them to construct a corresponding system of implications about the principal subject in black view then interpretation involves not so much comparing tenor and vehicle for existing similarities as construing them in a new way so as to create similarity between them one might distinguish then two main differences between the interaction and comparison viewsfirst similarities are quotcreatedquot in the interaction view whereas only preexisting similarities are found in the comparison viewsecond a whole system of similarities are evoked between tenor and vehicle in the interactions view whereas the comparisons view is based upon finding a single similarityone version of the interaction view is the domainsinteraction view set forth by tourangeau and sternberg who take the view that features hared by tenor and vehicle are often at best only analogous features each limited in its application to one domain or anotherof course some features or dimensions are quite general applying across the board to a number of domains among comparison and interaction theorists much attention had been paid to selecting the comparisons or interactions in a metaphorthe importance of analogy or correspondence in metaphor has been stressed by gentner ortony tourangeau and sternberg and wilks among othersvarious mechanisms have been advanced for highlighting certain comparisons or 
interactions including relevance and salience among computational approaches indurkhya constrained semantic transference theory of metaphor can be viewed as a formalization of black interaction theory source and target domains are viewed as quotsystems of relationshipsquot in metaphorical interpretation an quotimplicative complexquot of the source domain is imposed on the target domain thereby shaping the features of the target domain which in turn produces changes in the features of the source domain hence the quotinteractionquot it is assumed that a structural analogy underlies every metaphor a metaphor is identified with the formal notion of a tmap which is a pair where f is a function that maps vocabulary of the source domain onto vocabulary of the target domain and s is a set of sentences from the source domain which are expected to transfer to the target domaina metaphor is quotcoherentquot if the transferred sentences s are logically consistent with the axioms of the target domain and quotstrongly coherentquot if they already lie in the deductive closure of those axioms s is thus the quotimplicative complexquot of the source domain imposed on the target domainevery metaphorical interpretation of a given set of sentences is associated with a tmapthere may be several possible tmaps for a set of sentencesi would argue that hobbs has also taken an interaction view of metaphorhobbs goal has been to develop a unified process of discourse interpretation based on the drawing of appropriate inferences from a large knowledge base which hobbs sometimes calls quotselective inferencingquot selective inferencing is concerned with drawing or refraining from drawing certain inferences in a controlled fashion he argues that many problems have the same or almost the same inferencing solutionsthese solutions are found via four separate semantic operations that all draw inferences from text 213 the selection restrictions violations viewthe selection restriction violation view has also been called quotthe semantic deviance viewquot and quotthe anomaly viewquot johnson describes this view as a common one among linguists tourangeau and sternberg list the following people as holders of this view beardsley bickerton campbell guenther percy van dijk and wheelwright to this list one might add levin johnson describes this view as where metaphor constitutes a violation of selection restriction rules within a given context where the fact of this violation is supposed to explain the semantic tension one experiences in comprehending any live metaphorthe theory of metaphor in preference semantics consists of a selection restrictions view and a comparison viewin the theory information about word senses is contained in knowledge structures called quotsemantic formulasquot an algorithm matches pairs of semantic formulas seeking satisfied or violated preferences between thema satisfied preference indicates a literal semantic relation a violated preference indicates either a metaphorical or anomalous onethis part of the theory is implemented in a machine translation system to distinguish metaphor from anomaly a different knowledge structure and a second algorithm are usedthe algorithm called projection operates on a knowledge structure called a pseudotext that contains lists of templates linked by case tiesa brief example of projection is given for example 3 quotmy car drinks gasolinequot projection operates only on preference violationsthe best representation of contains a preference violation so projection is usedthe 
algorithm compares the template representation for the sentence mycar drink gasoline against templates from the pseudotext of car seeking quotthe closest matchquot and selects ficengine liquidl is projected onto drink in the sentence representation which becomes nycar use gasoline example 4 quotidi amin is an animalquot example 5 quotpeople are not cattlequot example 6 quotno man is an islandquot the main problem with the selection restrictions view is that perfectly wellformed sentences exist that have a metaphorical interpretation and yet contain no selection restriction violations for example in there is a literal interpretation when uttered about a stone and a metaphorical one when said about a decrepit professor emeritussentences and also have twin interpretationsthe existence of such sentences suggests that a condition that occasionally holds has been elevated into a necessary condition of metaphor moreover viewing metaphor only in terms of selection restriction violations ignores the influence of context we seem to interpret an utterance metaphorically when to do so makes sense of more aspects of the total context than if the sentence is read literallyconsider the simple case of the sentence all men are animals as uttered by professor x to an introductory biology class and as uttered later by one of his female students to her roommate upon returning from a datein the latter instance the roommate understands the utterance as metaphorical in a similar way ortony suggests that metaphor should be thought of as contextually anomalousthis means that a literal interpretation of the expression be it a word phrase sentence or an even larger unit of text fails to fit the context so whether or not a sentence is a metaphor depends upon the context in which it is used if something is a metaphor then it will be contextually anomalous if interpreted literally insofar as the violation of selection restrictions can be interpreted in terms of semantic incompatibilities at the lexical level such violations may sometimes be the basis of the contextual anomaly 214 the conventional metaphor viewlakoff and johnson have popularized the idea of conventional metaphors also known as conceptual metaphorsthey distinguish three main kinds orientational ontological and structuralorientational metaphors are mainly to do with kinds of spatial orientation like updown inout and deepshallowexample metaphors include more is up and happy is upthey arise from human experience of spatial orientation and thus develop from the sort of bodies we have and the way they function in our physical environmentontological metaphors arise from our basic human experiences with substances and physical objects some examples are time is a substance the mind is an entity and the visual field is a containerstructural metaphors are elaborated orientational and ontological metaphors in which concepts that correspond to natural kinds of experience eg physical orientations substances war journeys and buildings are used to define other concepts also natural kinds of experience eg love time ideas understanding and argumentssome examples of structural metaphors are argument is war and time is moneythe argument is war metaphor forms a systematic way of talking about the battling aspects of arguing because the metaphorical concept is systematic the language we use to talk about the concept is systematic what lakoff and johnson fail to discuss is how metaphors in general let alone individual metaphorical concepts are recognizedmartin work has 
addressed this issuehe has pursued a conventional metaphor view using kodiak a variant of brachman klone knowledge representation languagewithin kodiak metaphorical relationships are represented using a primitive link type called a quotviewquot a view quotis used to assert that one concept may in certain circumstances be considered as another quotin martin work quotmetaphormapsquot a kind of view are used to represent conventional metaphors and the conceptual information they containmetonymy involves quotusing one entity to refer to another that is related to itquot quotthe ham sandwich is waiting for his checkquot for example in the metonymy is that the concept for ham sandwich is related to an aspect of another concept for quotthe person who ordered the ham sandwichquot several attempts have been made to organize instances of metonymy into categories or quotmetonymic conceptsquot as lakoff and johnson call thema common metonymic concept is part for whole otherwise known as synechdochequotdave drank the glassesquot quotthe kettle is boilingquot container for contents another metonymic concept occurs in between drink and the sense of glasses meaning quotcontainersquot and also in in drink has an object preference for a potable liquid but there is a preference violation because glasses are not potable liquidsit is not glasses that are drunk but the potable liquids in themthere is a relationship here between a container and its typical contents this relationship is the metonymic concept container for quotyou will find better ideas than that in the libraryquot reddy has observed that metonymies can occur in chainshe suggests that contains a chain of part for whole metonymies between ideas and library the ideas are expressed in words words are printed on pages pages are in books and books are found in a libraryquoti found an old car on the roadthe steering wheel was brokenquot quotwe had a party in a mysterious roomthe walls were painted in psychedelic colorquot a quoti bought an interesting bookquot b quotwho is the authorquot quothe happened to die of some disease though i do not know what the because wasquot yamanashi points out that basic metonymic relationships like partwhole and becauseresult often also link sentencesaccording to him the links in and are partwhole relations the one in is productproducer and the one in is a becauseresult relationthere has been some computational work on metonymy the team project handles metonymy though metonymy is not mentioned by name but referred to instead as quotcoercionquot which quotoccurs whenever some property of an object is used to refer indirectly to the objectquot coercion is handled by quotcoercionrelationsquot for example a coercion relation could be used to understand that fords means quotcars whose carmanufacturer is fordquot grosz et al note a similarity between coercion and modification in nounnoun compounds and use quotmodification relationsquot to decide whether eg quotyous shipsquot means quotships of yous registryquot or quotships whose destination is the yousquot hobbs and martin and stallard also discuss the relationship between metonymy and nominal compoundshobbs and martin treat the two phenomena as twin problems of reference resolution in their tacitus systemthey argue that resolving reference requires finding a knowledge base entity for an entity mentioned in discourse and suggest that the resolution of metonymy and nominal compounds both require discovering an implicit relation between two entities referred to in discoursethe 
example of metonymy they show is quotafter the alarmquot which really means after the sounding of the alarmhobbs and martin seem to assume a selection restrictions approach to metonymy because metonymy is sought after a selection restrictions violation in their approach solving metonymy involves finding 1 the referents for after and alarm in the domain model which are after and alarm 2 an implicit entity z to which after really refers which is after and 3 the implicit relation between the implicit entity z and the referent of alarm qlike hobbs and martin stallard translates language into logical formstallard argues that with nominal compounds and metonymies quotthe problem is determining the binary relation which has been elided from the utterancequot and suggests shifting the argument place of a predicate quotby interposing an arbitrary sortally compatible relation between an argument place of the predicate and the actual argumentquot stallard notes that quotin any usage of the metonomy operation there is a choice about which of two clashing elements to extendquot stallard work has not yet been implemented stallard also briefly discusses anaphora resolutionbrown is beginning research on metonymy and reference resolution particularly pronounsthis should prove a promising line of investigation because metonymy and anaphora share the function of allowing one entity to refer to another entityquotthe ham sandwich is waiting for his checkquot quothe is waiting for his checkquot this similarity of function can be seen in comparing which is metonymic with which is anaphoricboth metonymy and metaphor have been identified as central to the development of new word senses and hence to language change some of the best examples of the differences between the two phenomena come from data used in studies of metonymic and metaphorical effects on language changenevertheless there are widely differing views on which phenomenon is the more importantsome argue that metaphor is a kind of metonymy and others propose that metonymy is a kind of metaphor while still others suggest that they are quite different among the third group two differences between metonymy and metaphor are commonly mentionedone difference is that metonymy is founded on contiguity whereas metaphor is based on similarity contiguity and similarity are two kinds of associationcontiguity refers to a state of being connected or touching whereas similarity refers to a state of being alike in essentials or having characteristics in common a second difference advanced by lakoff and johnson for example is that metaphor is quotprincipally a way of conceiving of one thing in terms of another and its primary function is understandingquot whereas metonymy quothas primarily a referential function that is it allows us to use one entity to stand for anotherquot though it has a role in understanding because it focuses on certain aspects of what is being referred tothere is little computational work about the relationship between metonymy and metaphorstallard distinguishes separate roles for metonymy and metaphor in word sense extensionaccording to him metonymy shifts the argument place of a predicate whereas metaphor shifts the whole predicatehobbs writes about metaphor and he and martin develop a theory of quotlocal pragmaticsquot that includes metonymy but hobbs does not seem to have written about the relationship between metaphor and metonymyin knowledge representation metonymic and metaphorical relations are both represented in the knowledge 
representation language cycl much of the preceding material assumes what gibbs calls the quotliteral meanings hypothesisquot which is that sentences have well defined literal meanings and that computation of the literal meaning is a necessary step on the path to understanding speakers utterances there are a number of points here which gibbs expands upon in his paperone point concerns the traditional notion of literal meaning that all sentences have literal meanings that are entirely determined by the meanings of their component words and that the literal meaning of a sentence is its meaning independent of contexta second point concerns the traditional view of metaphor interpretation though gibbs criticism applies to metonymy interpretation alsousing searle views on metaphor as an example he characterizes the typical model for detecting nonliteral meaning as a threestage process 11 compute the literal meaning of a sentence 21 decide if the literal meaning is defective and if so 3 seek an alternative meaning ie a metaphorical one gibbs concludes that the distinction between literal and metaphoric meanings has quotlittle psychological validityquot among at researchers martin shares many of gibbs views in criticizing the quotliteral meaning first approachquot martin suggests a twostage process for interpreting sentences containing metaphors 1 parse the sentence to produce a syntactic parse tree plus primal representation and 21 apply inference processes of quotconcretionquot and quotmetaphoric viewingquot to produce the most detailed semantic representation possiblethe primal representation represents a level of semantic interpretation that is explicitly in need of further processingalthough it is obviously related to what has traditionally been called a literal meaning it should not be thought of as a meaning at allthe primal representation should be simply considered as an intermediate stage in the interpretation process where only syntactic and lexical information has been utilized however martin believes that at least some sentence meaning is independent of context because the primal representation contains part of the primal content of an utterance and the primal content represents the meaning of an utterance that is derivable from knowledge of the conventions of a language independent of context the metaphor literature contains many differing views including the comparison interaction selection restrictions and conventional metaphors viewsat research on metaphor includes all of these viewsof the at research only martin work has been implemented to my knowledgeamong the points raised are that metaphorical sentences exist that do not contain selection restriction violations and that metaphor requires interpretation in contextthe much smaller metonymy literature stresses the selection restrictions view toothe team and tacitus systems both seem to process metonymicsthe two main differences commonly noted between metonymy and metaphor are in their function and the kind of relationship established no one to my knowledge has a working system that discriminates examples of metaphor and metonymyin this section the basic met algorithm is outlinedthe met method is based on the selection restriction also known as the preferencemetonymy metaphor literalness and anomaly are recognized by evaluating preferences which produces four kinds of basic quotpreferencebasedquot relationship or semantic relation literal metonymic metaphorical and anomalouswithin the method the main difference between metonymy 
and metaphor is that a metonymy is viewed as consisting of one or more semantic relationships like container for contents and part for whole whereas a metaphor is viewed as containing a relevant analogyi agree with ortony remark that metaphor be viewed as contextual anomaly but would suggest two modificationsfirst not just metaphor but all of the preferencebased relations should be understood in terms of the presence or absence of contextual constraint violationsecond i prefer the term contextual constraint violation because 1 one of the phenomena detected by contextual violation is anomaly and 2 the selection restrictionpreference is a kind of lexical contextual constraintthe section starts with an explanation of some of the linguistic background behind the met methodi have argued elsewhere that understanding natural language be viewed as the integration of constraints from language and from contextsome language constraints are syntactic while others are semanticsome language constraints are lexical constraints that is constraints possessed by lexical items lexical syntactic constraints include those on word order number and tensethis sec tion describes three lexical semantic constraints preferences assertions and a lexical notion of relevancepreferences selection restrictions and expectations are the same all are restrictions possessed by senses of lexical items of certain parts of speech about the semantic classes of lexical items with which they cooccurthus an adjective sense has a preference for the semantic class of nouns with which it cooccurs and a verb sense has preferences for the semantic classes of nouns that fill its case rolesfor example the main sense of the verb drink prefers an animal to fill its agent case role ie it is animals that drinkthe assertion of semantic information was noted by lees in the formation of noun phrases and later developed by katz as the process of quotattributionquot assertions contain information that is possessed by senses of lexical items of certain parts of speech and that is imposed onto senses of lexical items of other parts of speech eg the adjective female contains information that any noun to which it applies is of the female sexlexical syntactic and semantic constraints are enforced at certain places in sentences which i call dependencieswithin a dependency the lexical item whose constraints are enforced is called the source and the other lexical item is called the target syntactic dependencies consist of pairs of lexical items of certain parts of speech in which the source an item from one part of speech applies one or more syntactic constraints to the target another lexical itemexamples of sourcetarget pairs include a determiner and a noun an adjective and a noun a noun and a verb and an adverb and a verbquotthe ship ploughed the wavesquot semantic dependencies occur in the same places as syntactic dependenciesthe sentence contains four semantic dependencies between the determiner the and the noun hip between hip and the verb stem plough between the and the noun waves and between waves and ploughin each semantic dependency one lexical item acts as the source and applies constraints upon the other lexical item which acts as the targetin the and plough both apply constraints upon hip and the and plough apply constraints on wavessemantic dependencies exist between not just pairs of lexical items but also pairs of senses of lexical itemsfor example the metaphorical reading of is because waves is understood as being the sense meaning 
quotmovement of waterquot not for example the sense meaning quotmovement of the handquot semantic relations result from evaluating lexical semantic constraints in sentencesevery semantic relation has a source and a target other terms used to refer to the source and target in a semantic relation include vehicle and tenor subsidiary subject and principal subject figurative term and literal term referent and subject secondary subject and primary subject source and destination old domain and new domain and base and target in cs seven kinds of semantic relation are distinguished literal metonymic metaphorical anomalous redundant inconsistent and novel relations combinations of these seven semantic relations are the basis of literalness metonymy metaphor anomaly redundancy contradiction contrariness and noveltysemantic relations belong to two classes the preferencebased and assertionbased classes of relations depending on the kind of lexical semantic constraint enforcedthe preferencebased class of semantic relations which are the focus of this paper contains literal metonymic metaphorical and anomalous semantic relationsthe assertionbased class of relations are described in greater length in pass quotthe man drank beerquot there is a literal relation between man and drink in because drink prefers an animal as its agent and a man is a type of animal so the preference is satisfiedquotdave drank the glassesquot quotdenise drank the bottlequot metonymy is viewed as a kind of domaindependent inferencethe process of finding metonymies is called metonymic inferencingthe metonymic concepts presently used are adapted from the metonymic concepts of lakoff and johnson two of the metonymic concepts used are container for contents and artist for art formin for example ted does not literally play the composer bach he plays music composed by himas figure 1 shows a metonymy is recognized in the met method if a metonymic inference is foundconversely if no successful inference is found then no metonymy is discovered and a metaphorical or anomalous semantic relation is then soughta successful inference establishes a relationship between the original source or the target and a term that refers to one of themlike stallard who noted that quotin any usage of the metonomy operation there is a choice about which of two clashing elements to extendquot the met method allows for metonymies that develop in different quotdirectionsquot a successful inference is sometimes directed quotforwardquot from the preference or quotbackwardquot from the target depending on the metonymic concept it is this direction of inferencing that determines whether the source or target is substituted in a successful metonymythe substitute source or target is used to discover another semantic relation that can be literal metonymic again metaphorical or anomalousin figure 1 the presence of a relevant analogy discriminates metaphorical relations from anomalous onesno one else has emphasized the role of relevance in the discovery of an analogy central to a metaphor though as noted in section 22 the importance of relevance in recognizing metaphors and the centrality of some analogy have both been discussedquotthe car drank gasolinequot the form of relevance used is a lexical notion ie the third kind of lexical semantic constraint that what is relevant in a sentence is given by the sense of the main sentence verb being currently analyzedthus it is claimed that the semantic relation between car and drink in is metaphorical because there is a preference 
violation and an underlying relevant analogy between car and animal the preferred agent of drinka car is not a type of animal hence the preference violationhowever what is relevant in is drinking and there is a relevant analogy that animals and cars both use up a liquid of some kind animals drink potable liquids while cars use gasolinehence the metaphorical relation between car and drinkmetaphor recognition in the met method is related to all four views of metaphor described in section 2recognition is viewed as a twopart process consisting of 1 a contextual constraint violation and 2 a set of quotcorrespondencesquot including a key correspondence a relevant analogythe contextual constraint violation may be a preference violation as in the selection restrictions view of metaphorthe set of quotcorrespondencesquot is rather like the system of commonplaces between tenor and vehicle in the interaction viewthe relevant analogy is related to the comparison and interaction views which emphasize a special comparison or an analogy as central to metaphormoreover the relevant analogies seem to form groupings not unlike the conceptual metaphors found in the conventional viewexample 21 quotthe idea drank the heartquot anomalous relations have neither the semantic relationships of a metonymic relation nor the relevant analogy of a metaphorical relationhence the semantic relation between idea and drink is anomalous in because idea is not a preferred agent of drink and no metonymic link or relevant analogy can be found between animals and ideas that is idea in does not use up a liquid like car does in this is not to say that an anomalous relation is uninterpretable or that no analogy can possibly be found in onein special circumstances search for analogies might be expanded to permit weaker analogies thereby allowing quotideas drinkingquot to be interpreted metaphoricallythe topology of the flow chart in figure 1 results from needing to satisfy a number of observations about the preferencebased phenomena particularly metonymy hence a preferencebased semantic relation can be either a single relation or a multirelationa single relation consists of one literal metaphorical or anomalous relationa multirelation contains one literal metaphorical or anomalous relation plus either a single metonymy or a chain of metonymiesall these combinations but only these are derivable from figure 1note that in the met method as presented in figure 1 semantic relations are tried in a certain order literal metonymic metaphorical and finally anomalousthis ordering implies that a literal interpretation is sought before a nonliteral one the ordering results from thinking about discriminating the semantic relations in serial processing terms rather than parallel processing terms particularly the serial order in which selection restrictions are evaluated and metonymic inference rules are tried satisfied selection restrictions then metonymic inference then violated selection restrictions gibbs criticizes the idea that literal and nonliteral meaning can be discriminated in ordered processing stagesmy response is that if the met method is viewed in parallel processing terms then literal metonymic metaphorical and anomalous interpretations are all sought at the same time and there is no ordering such that the literal meaning of a sentence is computed first and then an alternative meaning sought if the literal meaning is defectivegibbs other main criticism concerning the traditional analysis of sentence meaning as composed from word 
meanings and independent of context will be discussed in section 7cs is a semantics for natural language processing that extends many of the main ideas behind preference semantics cs has four components senseframes collation semantic vectors and screeningthe met method is part of the process of collationfuller and more general descriptions of the four components appear in fass senseframes are dictionary entries for individual word sensessenseframes are composed of other word senses that have their own senseframes much like quillian planeseach senseframe consists of two parts an arcs section and a node section that correspond to the genus and differentia commonly found in dictionary definitions the arcs part of a senseframe contains a labeled arc to its genus term together the arcs of all the senseframes comprise a densely structured semantic network of word senses called the sensenetworkthe node part of a senseframe contains the differentia of the word sense defined by that senseframe ie information distinguishing that word sense from other word senses sharing the same genusthe two lexical semantic constraints mentioned earlier preferences and assertions play a prominent part in senseframe nodessenseframe nodes for nouns resemble wilks pseudotextsthe nodes contain lists of twoelement and threeelement lists called cellscells contain word senses and have a syntax modeled on englisheach cell expresses a piece of functional or structural information and can be thought of as a complex semantic feature or property of a nounfigure 2 shows senseframes for two senses of the noun crookcrookl is the sense meaning quotthiefquot and crook2 is the shepherd toolall the terms in senseframes are word senses with their own senseframes or words used in a particular sense that could be replaced by word sensesit1 refers to the word sense being defined by the senseframe so for example crookl can be substituted for it1 in iit1 steall valuables11common dictionary practice is followed in that word senses are listed separately for each part of speech and numbered by frequency of occurrencehence in crook2 the cell shepherdl usel it11 contains the noun sense shepherd1 while the cell itl shepherdl sheep11 contains the verb sense shepherdl senseframe nodes for adjectives adverbs and other modifiers contain preferences and assertions but space does not permit a description of them heresenseframe nodes for verbs and prepositions are case frames containing case subparts filled by case roles such as agent object and instrumentcase subparts contain preferences and assertions if the verb describes a state change sfshepherdl usel ti 1itl shepherdl sheepinsenseframes for crook1 and crook2 sfpreference drinkl1m1the met method figure 3 shows the senseframes for the verb senses eat1 and drink1in both the agent preference is for an animal but the object preferences differ the preference of eatl is for foodl ie an edible solid while the preference of drinkl is for drink1 ie a potable liquidthe second component of cs is the process of collationit is collation that contains the met method in cscollation matches the senseframes of two word senses and finds a system of multiple mappings between those senseframes thereby discriminating the semantic relations between the word sensesfigure 4 shows the use of the met method in csfigure 4 is similar to the one in figure 1 except that the diamonds contain the processes used in cs to check for satisfied preferences metonymic inferences and relevant analogies the basic mappings in collation 
are paths found by a graph search algorithm that operates over the sensenetworkfive types of network path are distinguishedtwo types of path called ancestor and same denote kinds of quotinclusionquot eg that the class of vehicles includes the class of cars satisfied substitu e metonym for source or target 2 applicable metonymic inference rule preferences are indicated by network paths denoting inclusion also known as quotinclusivequot paths the other three types of network path called sister descendant and estranged denote quotexclusionquot eg that the class of cars does not include the class of vehicles violated preferences are network paths denoting exclusion also known as quotexclusivequot pathsthese paths are used to build more complex mappings found by a framematching algorithmthe framematching algorithm matches the sets of cells from two senseframesthe sets of cells which need not be ordered are inherited down the sensenetworka series of structural constraints isolate pairs of cells that are matched using the graph search algorithmnetwork paths are then sought between terms occupying identical positions in those cellsseven kinds of cell match are distinguished based on the structural constraints and types of network path foundancestor and same are quotinclusivequot cell matches egcompositionl metal includes composition1 steell because the class of metals includes the class of steels sister descendant and estranged are types of quotexclusivequot cell matches egcomposition1 stee11 and compositionl aluminium1 are exclusive because the class of steels does not include the class of aluminiums since both belong to the class of metals the remaining cell matches distinctive source and distinctive target account for cells that fail the previous five kinds of cell matchfor more detail on cell matches see fass a kind of lexical relevance is found dynamically from the sentence contextthis notion of relevance is used in finding the relevant analogies that distinguish metaphorical from anomalous relations it is also used when finding coagent for activity metonymiesrelevance divides the set of cells from the source senseframe into two subsetsone cell is selected as relevant given the context the remaining cells are termed nonrelevantcollation matches both the source relevant and nonrelevant cells against the cells from the target senseframea relevant analogy is indicated by a sister match of the source relevant cell five types of metonymic concepts are currently distinguishedexamples of two of the metonymic concepts container for contents and artist for art form have already been giventhe remaining three are part for whole property for whole and coagent for activityquotarthur ashe is blackquot quotjohn mcenroe is whitequot in and the skins of arthur ashe and john mcenroe parts of their bodies are colored black quotjohn mcenroe is yellowquot quotnatalia zvereva is greenquot in for example john mcenroe is limited with respect to his bravery a property possessed by humans and other animalsquotashe played mcenroequot these concepts are encoded in metonymic inference rules in cs the rules are ordered from most common to leastthe order used is part for whole property for whole container for contents coagent for activity and artist for art formthe first two concepts part for whole and property for whole are sourcedriven the others are targetdriventhe difference in direction seems to be dependent on the epistemological structure of the knowledge being related by the different inferencespart for whole 
metonymies are sourcedriven perhaps because the epistemological nature of parts and wholes is that a part generally belongs to fewer wholes than wholes have parts hence it makes sense to drive inferencing from a part toward the whole than vice versain container for contents on the other hand the epistemological nature of containers and contents is that the containers generally mentioned in container for contents metonymies are artifacts designed for the function of containing hence one can usually find quite specific information about the typical contents of a certain container for example some glasses as in whereas the contents do not generally have the function of being the contents of somethinghence it makes sense to drive inferencing from the container and the function it performs toward the contents than vice versathe same reasoning applies to artist for art form an artist has the vocation of creating art that is hisher purposea further step in collation distinguishes metaphorical from anomalous semantic relationsrecall that a metaphorical relation contains a relevant analogy as in and while an anomalous relation does not as in a relevant analogy is found by matching the relevant cell from the source senseframe with one of the cells from the target senseframeif the match of cells is composed of a set of sister network paths between corresponding word senses in those cells then this is interpreted as analogical and hence indicative of a metaphorical relationany other match of cells is interpreted as not analogical and thus an anomalous semantic relation is recognized the third component of cs is the semantic vector which is a form of representation like the senseframe but senseframes represent lexical knowledge whereas semantic vectors represent coherencesemantic vectors are therefore described as a kind of coherence representationa semantic vector is a data structure that contains nested labels and ordered arrays structured by a simple dependency syntaxthe labels form into setsthe outer sets of labels indicate the application of the three kinds of lexical semantic constraintsthe outermost set of labels is preference and assertionthe middle set is relevant and nonrelevantthe innermost set is the kind of mapping used network path and cell matchesthe nesting of labels shows the order in which each source of knowledge was introducedthe ordered arrays represent the subkinds of each kind of mappingfivecolumn arrays are for the five network paths sevencolumn arrays are for the seven types of cell matcheach column contains a positive number that shows the number of occurrences of a particular network path or cell matchthe fourth component of cs is the process of screeningduring analysis of a sentence constituent a semantic vector is created for every pairwise combination of word sensesthese word sense combinations are called semantic readings or simply quotreadingsquot each reading has an associated semantic vectorscreening chooses between two semantic vectors and hence their attached semantic readingsrank orderings among semantic relations are appliedin the event of a tie a measure of conceptual similarity is usedthe ranking of semantic relations aims to achieve the most coherent possible interpretation of a readingthe class of preferencebased semantic relations takes precedence over the class of assertionbased semantic relations for lexical disambiguationthe rank order among preferencebased semantic relations is literal metaphorical anomalousif the semantic vectors are still tied then the 
measure of conceptual similarity is employedthis measure was initially developed to test a claim by tourangeau and sternberg about the aptness of a metaphorthey contend that aptness is a function of the distance between the conceptual domains of the source and target involved the claim is that the more distant the domains the better the metaphorthis is discussed further in section 5the conceptual similarity measure is also used for lexical ambiguity resolution cs has been implemented in the meta5 natural language programthe meta5 program is written in quintus prolog and consists of a lexicon holding the senseframes of just over 500 word senses a small grammar and semantic routines that embody collation and screening the two processes of csthe program is syntaxdriven a form of control carried over from the structure of earlier programs by boguraev and huang on which meta5 is basedmeta5 analyzes sentences discriminates the seven kinds of semantic relation between pairs of word senses in those sentences and resolves any lexical ambiguity in those sentencesmeta5 analyzes all the sentences given in sections 3 and 4 plus a couple more metaphorical sentences discussed in section 7below are simplified versions of some of the metonymic inference rules used in meta5the metonymic concepts used in cs contain three key elements the conceptual relationship involved the direction of inference and a replacement of the source or targetthe metonymic inference rules in meta5 contain all three key elementsthe rules though written in a prologlike format assume no knowledge of prolog on the part of the reader and fit with the role of metonymy shown in figures 1 and 4each metonymic inference rule has a lefthand side and a righthand sidethe lefthand side is the topmost statement and is of the form metonymic_inference_rulethe righthand side consists of the remaining statementsthese statements represent the conceptual relationship and the direction of inference except for the bottom most one which controls the substitution of the discovered metonym for either the source or target this statement is always a call to find a new sensenetwork paththis rule represents property for whole which is sourcedrivenstatement 1 represents the conceptual relationship and direction of inferencethe conceptual relationship is that the source is a property possessed by the whole in a propertywhole relationthe inference is driven from the source find_cell searches through the source list of cells for one referring to a quotwholequot of which the source is a quotpartquot statement 2 controls the substitution of the discovered metonym the quotwholequot is the substitute metonym that replaces the source and the next sensenetwork path is sought between the whole and the targetagain the inference in artist for art form is from the targetthe target is a person who is an quotartistquot in an artistart form relationthe occupation of the person is found by searching up the sensenetwork the list of cells associated with the occupation are searched for a cell describing the main activity involved in the occupation eg a cook cooks food and an artist makes art formschecks are done to confirm that any activity found is indeed making an art form ie that the quotmakingquot involved is a type of creating and that the quotart formquot is a type of art forml the quotart formquot is the substitute metonym that replaces the targeta new sensenetwork path is computed between the source and the art form i will now describe how meta5 recognizes some metonymies 
and metaphorsin between bach and the twelfth sense of play in meta5 lexicon there is a chain of metonymies plus a literal relationthe chain consists of artist for art form and container for contents metonymiesboth metonymic concepts are targetdrivenin artist for art form the inference is from the artist to the art form so the substitute metonym replaces the target if the inference is successfulthe senseframes of the verb sense play12 and the noun senses musicl and johann_sebastian_bach are shown in figure 5the semantic relation results from matching the object preference of play12 which is for music against the surface object which is bach short for johann sebastian bachthe preference is the source and the surface object is the targetwe will follow what happens using the flow chart of figure 4the sensenetwork path between the source and the target computational linguistics volume 17 number 1 senseframes for play12 musicl and johann_sebastian_bach is soughtthe path is not inclusive because johann_sebastian_ bach is not a type of music1metonymic inference rules are appliedthe rules for part for whole property for whole container for contents coagent for activity are tried in turn but all failthe rule for artist for art form however succeedsthe discovered metonymic inference is that johann_ sebastian_bach composes musical pieces the metonymic inference is driven from the target which is johann_sebastian_bachthe successful metonymic inference using the artist for art form inference rule above is as follows 1 johann_sebastian_bach is a composer1 2 composers compose1 musical pieces additional tests confirm 2 which are that 3 composing is a type of creating and 4 a musical_piece1 is a type of art_formlthe original target is replaced by the substitute metonym the sensenetwork path between the source and the new target is soughtthe path is not inclusivemetonymic inference rules are appliedthe rules for part for whole and property for whole fail but the rule for container for contents succeedsthe successful inference using the description of the containercontents inference rule given previously is that 1 a musical_piecel contains music1 the direction of inference in the container for contents metonymic concept is from the target towards the source so 2 the target is replaced by the substitute metonym when an inference is successfulhence in our example the target is again replaced by a substitute metonym the source which is music1 the object preference of play12 remains unchangedthe sensenetwork path between the source and the latest target is soughtthe path is inclusive that music1 is a type of musicl so a literal relation is foundthe processing of the preferencebased semantic relation between play12 and its preference for music1 and johann_sebastian_bach is completedafter an initial preference violation the semantic relation found was an artist for art form metonymic relation followed by a container for contents metonymic relation followed by a literal relation there is a metaphorical relation between carl and the verb sense drinkl in the source is drinkl whose agent preference is animall and the target is carl a metaphorical relation is sought after failing to find an inclusive network path or a metonymic inference between animall and carl hence the network path between animall and carl must be exclusivethe network path found is an estranged onethe second stage is the match between the relevant cell of animall and the cells of carlin the present example drinkl is relevantthe list of cells for 
animall is searched for one referring to drinkingthe relevant cell in the list is animall drinkl drink11 which is matched against the inherited cells of carl a sister match is found between animall drinkl drinkl and carl use2 gasoline1 from carlthe sister match is composed of two sister paths found in the sensenetworkthe first sister path is between the verb senses drinkl and use2 which are both types of expending the second path is between the noun senses drinkl and gasolinel which are both types of liquid the effect of the network paths is to establish correspondences between the two cells such that an analogy is quotdiscoveredquot that animals drink potable liquids as cars use gasolinenote that like gentner systematicity principle the correspondences found are structural and independent of the content of the word senses they connectnote also that the two cells have an underlying similarity or quotgroundquot in that both refer to the expenditure of liquidsthis second stage of finding a relevant analogy seems the crucial one in metaphor recognitionfigure 10 shows the match of the nonrelevant cells from animall and carlthe cell use2 gasolinel i has been removedthere are three inclusive cell matches as animals and cars share physical objectlike properties of boundedness three dimensions semantic vector for a metaphorical semantic relation and soliditytwo cell matches are exclusiveanimals are composed of flesh whereas cars are composed of steelanimals are living whereas cars are nonlivingthere are two distinctive cells of animall and five distinctive cells of carl tourangeau and sternberg hypothesis predicts that the greater the distance between the conceptual domains of the terms involved in a metaphor the more apt the metaphorthe proportion of similarities to differences is 3 to 2 which is a middling distance suggesting tentatively an unimposing metaphorall of these matches made by collation are recorded in the semantic vector shown in figure 11the crucial elements of the metaphorical relation in are the preference violation and the relevant analogyin figure 11 the preference violation has been recorded as the 1 in the first array and the relevant analogy is the 1 in the second arrayinformation about the distance between conceptual domains is recorded in the third arraythe preference label indicates that a preference has been matched the five columns of the first array record the presence of ancestor same sister descendant and estranged network paths respectivelywhen a preference is evaluated only one network path is found hence the single 1 in the fifth column which indicates that an estranged network path was found between animall and car1cell matches are recorded in the second and third arrays which each contain seven columnsthose columns record the presence of ancestor same sister descendant estranged distinctive source and distinctive target cell matches respectivelythe 1 in the third column of the second array is the relevant analogy a sister match of the relevant cell animall drinkl drink1 and the cell carl use2 gasoline1i the 10 is the ten distinctive cells of carl that did not match animall drink1 drink1this is the match of 12 cells 1 from the source and 11 from the target the sum of array columns is the 3 similarities 2 differences 2 distinctive cells of animall and 5 distinctive cells of carl are the nonzero numbers of the final arraythe 3 similarities are all same cell matches the 2 differences are both sister cell matchesa total of 17 cells are matched 7 from the source and 10 
from the target the total of array columns is quotthe ship ploughed the wavesquot in there is a metaphorical relation between a sense of the noun hip and the second sense of the verb plough in meta5 lexiconnote that plough like drink belongs to several parts of speechfigure 12 shows the senseframes for the verb sense plough2 the noun sense plough1 which is the instrument preference of plough2 and the noun sense ship1in meta5 matches senses of hip against senses of ploughwhen meta5 pairs ship1 with plough2 it calls upon collation to match shipl against the noun sense plough1 the instrument preference of plough2first the graph search algorithm searches the sensenetwork for a path between plough1 and ship1 and finds an estranged network path between them ie a ship is not a kind of plough so plough2 instrument preference is violatednext collation inherits down lists of cells for ploughl and shipl from their superordinates in the sensenetworkwhat is relevant in the present context is the action of ploughing because is about a ship ploughing wavescollation then runs through the list of inherited cells for the noun sense plough1 searching for a cell that refers to the action of ploughing in the sense currently under examination by meta5 plough2senseframes for plough2 ploughl and ship1 relevant cell of olough1 cells of shipi collation finds a relevant cell plough1 plough2 soill and uses its framematching algorithm to seek a match for the cell against the list of inherited cells for shipl shown in figure 13 the algorithm finds a match with shipl sail2 water2 and hence collation quotdiscoversquot a relevant analogy that both ships and ploughs move through a medium ie that ploughs plough through soil as ships sail through waterfinally collation employs the frame matching algorithm a second time to match together the remaining nonrelevant cells of plough1 and ship1 the cell shipl sail2 water is removed to prevent it from being used a second timefigure 15 shows the semantic vector producedas with figure 11 it shows a metaphorical relationthere is a preference violation an estranged network path indicated by the 1 in the fifth column of the first arraythere is also a relevant analogy shown by the 1 in the third column of the second array the analogical match of the cells plough1 plough2 soil and shipl sail2 water2the second array shows that 11 cells are matched 1 from the source and 10 from the target the sum of the array columns is semantic vector for another metaphorical semantic relation in the third array the match of nonrelevant cells there is 1 ancestor match 4 same matches 1 sister match and 3 distinctive cells of ship1fifteen cells are matched 6 from the source and 9 from the target the totals are semantic vectors can represent all the semantic relations except metonymic onesthe reason is that metonymic relations unlike the others are not discriminated by cs in terms of only five kinds of network path and seven kinds of cell matchesinstead they consist of combinations of network paths and specialized matches of cells that have not fallen into a regular enough pattern to be represented systematicallyeven for those semantic dependencies investigated the interpretation of semantic relations seems to require more complexity than has been described so far in this paperconsider the differences between the following sentences intuitively sentence is metaphorical while is metaphoricalanomalousin the semantic relation between car and drink is thought to be metaphorical and the isolated semantic relation 
between just drink and gasoline is anomalous but the sentence as a whole is metaphorical because it is metaphorical that cars should use up gasolinein the semantic relation between car and drink is metaphorical the semantic relation between just drink and coffee is literal yet the effect of as a whole is metaphoricalanomalousthe object preference of drink is for a drink ie a potable liquidit seems that it is metaphorical for cars to quotdrinkquot a liquid commonly used up by cars eg gasoline but anomalous if the liquid has nothing to do with cars eg coffee as in the problem of understanding the differences between sentences and requires some further observations about the nature of semantic relations principally that the differences are caused by the combinations of semantic relations found in the sentences and the relationships between those relationsbelow is a suggestion as to how deeper semantic processing might discriminate the differences between the two sentencesbefore getting to the deeper processing we need a better semantic vector notationthe better semantic vector notation which developed from a discussion with afzal ballim is a modification of the notation shown in section 5the key differences are reformulation by rewriting the five and seven column arrays in terms of the predicateargument notation used in the rest of semantic vectors and extension by adding the domain knowledge connected by every network path and cell matchfigure 16 shows the semantic vector in figure 11 reformulated and extendedthe advantage of vectors like the one in figure 16 is that they record both how the senseframes of two word senses are matched and what information in the senseframes is matched for example the part of figure 16 that begins quotrelevant quot contains all the information found in figure 7 the match of the relevant cell from animall against the cells of car1 both the types of cell matches and the cells matchedthe equivalent part of figure 11 only records the types of cell matchesrecording the contents of the matched cells is useful because it enables a deepened analysis of semantic relationssuch an analysis is needed to detect the differences between and in the description of cs in section 4 collation discriminates the one or more semantic relations in each semantic dependency but treats the semantic relations in one dependency as isolated from and unaffected by the semantic relations in another dependencywhat is needed is extra processing that interprets the semantic relation in a later dependency with respect to the semantic relation established in an earlier
targets are the nounssemantic relations are found for each dependencyone way to detect the difference between metaphorical sentences such as and metaphoricalanomalous ones such as is in each sentence to consult the semantic vectors produced in its two main semantic dependencies and compare the matches of the relevant cells that are found by collationlet us go through such an analysis using cs starting with the first semantic dependency between subject noun and verbin this semantic dependency in both and a relevant analogy is discovered as part of a metaphorical relation between the target car1 and animall the agent preference of the source drinklthe semantic vector in figure 16 records the two cells that figure in that relevant analogyfigure 17 shows the same information from the semantic vector but written as a statementwhen the second semantic dependency is analyzed in the target is gasolinel and is matched against the noun sense drink1 the object preference of the source drink1 a semantic vector is producedthe relevant cell found in the noun sense drinkl is animall drinkl drink1its match against vehicle1 use2 gasolinel a cell from gasolinel is shown in the vector statement in figure 18the match is a sister match indicating a relevant analogynow this is peculiar because quotdrinking gasolinequot is anomalous yet a relevant analogy has been found and this paper has argued that relevant analogies are special to metaphorical relationsone possible explanation is that differences exist between the recognition of metaphorical relations that concern agents and metaphorical relations that concern objects and other case rolesit may be that metaphorical relations are indicated by a relevant analogy but only in selected circumstancesthis needs further investigationvector statement of match of relevant cell from drinkl against cells from coffeel to return to the analysis of what appears to be important in determining that is a metaphorical sentence is the comparison of the two pairs of matched relevant cells animall drinkl drinkl carl use2 gasoline11 animall drink1 drink1 vehicle1 use2 gasoline11 the two source cells are the same and the two target cells carl use2 gasoline and vehicle1 use2 gasolinel are almost identical indicating that the same basic analogy runs through the whole of hence the sentence as a whole is metaphoricalnow let us analyze the second semantic dependency in the target is coffeel and is again matched against drinkl the object preference of the verb sense drinkl the sourcethe relevant cell from the noun sense drinkl is again animall drinkl drink1 which matches against human_being1 drink1 coffeel i from the target coffeelthis time the match is an ancestor match and hence not a relevant analogyfigure 19 shows this match of the relevant cell as a vector statementlet us compare the two pairs of matched relevant cells for animall drink1 drinkl carl use2 gasolinel animall drinkl drinkl human_beingl drinkl coffeel the two source cells are the same but the two target cells carl use2 gasoline and human_being1 drinkl coffeel are very differentthe reason that the sentence as a whole is metaphoricalanomalous is because of the clash between these target cellsthe basic analogy of a car ingesting a liquid does not carry over from the first semantic dependency into the secondthe anomalous flavor of could not be detected by looking at the semantic relations in the dependencies in isolation because one semantic relation is metaphorical and the other is literalneither relation is anomalous the 
anomaly comes from the interaction between the two relationsfigure 20 is a proposed representation for sentence the left side of figure 20 shows the knowledge representation part of the sentence representation a simple caseframe based representation of the right side of figure 20 within the grey partition is the coherence representation component of the sentence representation abridged semantic vectors for the two main semantic dependencies in the upper semantic vector is the match of the target carl against the source animallthe lower semantic vector is the match of the target gasoline1 against the source drinkl the noun sensethe upper abridged semantic vector indicates a metaphorical relationthe lower semantic vector also indicates a metaphorical relation though as was noted earlier quotdrinking gasolinequot when interpreted in isolation is surely anomalousthe underlines in figure 20 denote pointers linking the semantic vectors to the case framethe grey vertical arrows show that the two semantic vectors are also linked sentence representation for quotthe car drank coffeequot together via the matches of their relevant cellsin those matches the arrows are sensenetwork paths found between the elements of the two target cellsthe network paths indicated in grey that connect the two abridged semantic vectors show processing of coherence representationsthe particular network paths found a descendant path and two same quotpathsquot show that the same relevant analogy is used in both semantic relations that both semantic relations involve a match between animals drinking potable liquids and vehicles using gasoline hence sentence as a whole is metaphoricalfigure 20 is therefore unlike any of the coherence representations shown previously because it shows a representation of a metaphorical sentence not just two isolated metaphorical relationscompare figure 20 with figure 21 a sentence representation for the upper semantic vector again indicates a metaphorical relation between carl and drink1the lower semantic vector indicates a literal relation between drinkl and coffee1what is important here is the match of relevant information discovered in the two semantic relations as indicated by the three network pathsthe paths found are two estranged paths and a sister path indicating that the relevant information found during the two semantic relations is different in one semantic relation information about animals drinking potable liquids is matched against cars using gasoline in the other the same information is matched against human beings drinking coffee but cars using gasoline and human beings drinking coffee are quite different hence sentence is anomalous overallnote that in figures 20 and 21 the coherence representation part of the sentence representation is much larger than the knowledge representation partthe detailed quotworld knowledgequot about car1 the verb sense drinkl gasolinel and coffeel are all on the right sideit is interesting to contrast the figures with early conceptual dependency diagrams such as those in schank because rather than the large and seemingly unlimited amounts of world knowledge that appear in cd diagrams the two figures present only the world knowledge needed to discriminate the semantic relations in and this section reviews the material on metonymy and metaphor in section 2 in light of the explanation of the met method given in sections 36when compared with the al work described in section 2 the met method has three main advantagesfirst it contains a detailed treatment 
of metonymysecond it shows the interrelationship between metonymy metaphor literalness and anomalythird it has been programmedpreference semantics addresses the recognition of literal metaphorical and anomalous relations but does not have a treatment of metonymyin the case of preference semantics the theory described in wilks has not been implemented though the projection algorithm was implemented using some parts of cs to supply detail missing from wilks original specificationgentner structuremapping theory has no treatment of metonymythe theory has been implemented in the structuremapping engine and some examples analyzed by it but not to my knowledge examples of metaphor or anomalyindurkhya constrained semantic transference theory of metaphor has no treatment of metonymy anomaly or literalnessit has also not been implemented see indurkhya for reasons whyhobbs and martin offer a relatively shallow treatment of metonymy without for instance acknowledgement that metonymies can be driven from either the source or the targethobbs quotselective inferencingquot approach to text interpretation has been applied to problems including lexical ambiguity metaphor and the quotlocal pragmaticsquot phenomena of metonymy but not anomalyto my knowledge hobbs has yet to produce a unified description of selective inferencing that shows in detail how lexical ambiguity is resolved or how the differences between metaphor metonymy and so on can be recognizedhobbs earlier papers include a series of programs sate diana and diana2 but the papers are not clear about what the programs can doit is not clear for example whether any of the programs actually analyze any metaphorsmartin work is the only other computational approach to metaphor that has been implementedhowever the work does not have a treatment of metonymymartin metaphormaps which are used to represent conventional metaphors and the conceptual information they contain seem to complement semantic vectors of the extended kind described in section 6in section 6 i argued that vectors need to record the conceptual information involved when finding mappings between a source and targetwhat metaphormaps do is freeze the conceptual information involved in particular metaphorical relationsthere is some theoretical convergence here between our approaches it would be interesting to explore this furthermoreover the metaphors studied so far in cs seem linked to certain conventional metaphors because certain types of ground have recurred types which resemble lakoff and johnson structural metaphorstwo types of ground have cropped up so farexample 28 quottime fliesquot the first is a useuparesource metaphor which occurs in and in when viewed as nounverb sentenceboth sentences are analyzed by meta5useuparesource resembles structural metaphors like time is a resource and labor is a resource which according to lakoff and johnson both employ the simple ontological metaphors of time is a substance and an activity is a substance these two substance metaphors permit labor and time to be quantified that is measured conceived of as being progressively quotused upquot and assigned monetary values they allow us to view time and labor as things that can be quotusedquot for various endsquotthe horse flewquot the second type of ground is motionthroughamedium a type of ground discussed by russell this appears in and again both analyzed by meta5incidentally it is worth noting that structural metaphors have proven more amenable to the met method than other kinds triedi assumed initially 
that orientational and ontological metaphors would be easier to analyze than structural metaphors because they were less complexhowever structural metaphors have proved easier to analyze probably because structural metaphors contain more specific concepts such as quotdrinkquot and quotploughquot which are more simple to represent in a network structure so that analogies can be found between those conceptswe return here to gibbs point concerning the traditional notion of literal meaning that 1 all sentences have literal meanings that are entirely determined by the meanings of their component words and that 2 the literal meaning of a sentence is its meaning independent of contextalthough 1 and 2 are both presently true of cs there are means by which context can be introduced more actively into sentence interpretationat present the meaning of a sentence in cs whether literal or nonliteral is not derived entirely independently of context however the only context used is a limited notion of relevance which is generated by collation from within the sentence being analyzed what is relevant is given by the sense of the main sentence verbnevertheless because of this notion of relevance contextual influence is present in semantic interpretation in csmoreover the notion of relevance is recorded in semantic vectors and the extended coherence representations discussed in section 6hence the processes and representations of cs possess basic equipment for handling further kinds of contextthe met method is consistent with the view that metaphor is based on similarity whereas metonymy is based on contiguity contiguity readers may recall refers to being connected or touching whereas similarity refers to being alike in essentials or having characteristics in commonthe difference comes from what and how the conceptual information is relatedquotmy car drinks gasolinequot let us consider what is related firstin metaphor an aspect of one concept is similar to an aspect of another concept eg in an aspect of the concept for animal that animals drink potable liquids is similar to an aspect of another concept that cars use gasolinequotthe ham sandwich is waiting for his checkquot however in metonymy a whole concept is related to an aspect of another conceptfor example in the metonymy is that the concept for ham sandwich is related to an aspect of another concept for quotthe man who ate a ham sandwichquot regarding how that conceptual information is related in the case of metaphor the met method assigns a central role to finding an analogy and an analogy between two terms is due to some underlying similarity between them eg in the analogy that animals drinking potable liquids is like cars using gasoline the underlying similarity is that both animals and cars ingest liquidsin an analogy the relationship between aspects of two concepts is purely structuralin metonymies however the relationships are quotknowledgeladenquot connections eg partwhole and containercontentsso in summary quotsimilarityquot in metaphor is understood to be based on structural relationships between aspects of concepts whereas quotcontiguityquot in metonymy is based on knowledgespecific relationships between a concept and an aspect of another conceptthese observations i would argue support the view that metonymy has primarily a referential function allowing something to stand for something else a connection between a concept and an aspect of another conceptthe observations also support the view that metaphor primary function is understanding 
allowing something to be conceived of in terms of something else the role of analogy is especially crucial to this functionthe treatment of metonymy permits chains of metonymies and allows metonymies to cooccur with instances of either literalness metaphor or anomalythe kinds of inferences sought resemble the kinds of inferences that yamanashi notes link sentencesan obvious direction in which to extend the present work is toward acrosssentence inferencesexample 30 quotjohn drank from the faucetquot example 31 quotjohn filled his canteen at the springquot metonymy seems closely related to the work on nonlogical inferencing done by schank and the yale group for example lehnert observes that just one inference is required for understanding both and the inference that water comes from the faucet in and the spring in is an instance of producer for product in which the faucet and spring are producers and water is the producthowever the inference is not a metonymy because it is from unused cases of the verbs drink and fill whereas metonymy only occurs in the presence of a violated selection restriction that neither nor containmetaphor recognition in the met method is related to all four views of metaphor described in section 2 consisting of in cs the presence of metaphor has been investigated in violations of preferences a kind of lexical contextual constraintthough clearly this is a small part of the picture it seems worth establishing an extensive picture of preference violation and metaphor before moving on to other contextual constraintscollation and the met method have certain similarities with the comparison view of metaphor especially in the cell matching processthe relevant analogies discovered in cs are indeed to quote tourangeau and sternberg quota comparison in which one term is asserted to bear a partial resemblance to something elsequot the collation process gives quite a clear picture of the ground and tension in a metaphorthe ground is the most specific statement that subsumes both statements that figure in the analogy eg it1 ingest1 liquidl is the ground for the analogy involving anima11 drinkl drinkl and car1 use2 gasoline11 moreover the details of the process match well aristotle two basic principles for finding the ground of a metaphor in that both terms in a metaphorical relation belong to a common category and an analogy is found between themthe collation process also takes care of many of the problems tourangeau and sternberg note with the comparison viewregarding the problem that quoteverything shares some feature or category with everything elsequot cs is in agreement the only significant combination of features in a metaphor are those involved in a relevant analogythe problem that quotthe most obvious shared features are often irrelevantquot ie that the most obvious shared features are irrelevant to a metaphor is borne out by experience with cs for example animals and cars share some basic physical objectlike properties but these have a minor role in understanding cars drinkingthe met method bears out another problem that quoteven when a feature is relevant it is often shared only metaphoricallyquot finally with the problem that novel metaphors cannot be based on quotextant similaritiesquot the relevant analogies found in the met method are not quotextantquot but have to be actively discoveredin section 2 two main differences were noted between the interaction and comparison views first that similarities are quotcreatedquot in the interaction view whereas only preexisting 
similarities are found in the comparison view and second that a whole system of similarities are evoked in the interactions view unlike the comparisons view which focuses upon finding a single similarityregarding the first difference i would argue that the difference is a mistaken one and that interaction theorists are simply using a sophisticated form of comparisonthis is quite evident when one examines for example the methods tourangeau and sternberg propose for relating features across domains in their theorythe second of aristotle basic principles is finding an analogy yet tourangeau and sternberg themselves say that quotin a sense we are proposing that metaphors are analogies that include both tenor and vehicle and their different domains as termsquot and of course finding an analogy is central to the met method on csregarding the second difference i would agree that finding a system of commonplaces is distinctivehowever the extensions to cs described in section 6 move toward the direction of finding a system of commonplaces in that the deeper semantic vectors and sentence representations shown in figures 20 and 21 contain the information crucial to finding a system of commonplaceshaving identified the crucial analogy in the deeper semantic vector contains the two pairs of matched relevant cells that provide the core analogy on which the metaphorical interpretation of is built ffanimall drink1 drink1j car1 use2 gasoline1 ranima11 drink1 drinkl vehiclel use2 gasolinel with this information at hand the senseframes for word senses in analogical correspondence the verb senses drink1 and use2 the noun senses animall and car1 animal1 and vehicle1 and drinkl and gasolinel can be systematically expanded to uncover deeper commonplaces between animals and carsin conclusion the view of metonymy and metaphor in the met method is consistent with much of the literature on these phenomenathe met method is consistent with the view that the primary function of metaphor is understanding while that of metonymy is referential like anaphoranevertheless metonymy and metaphor do have much in common both might be described as forms of quotconceptual ellipsisquot a shorthand way of expressing ideasthe met method in its present serial form recognizes literalness metonymy metaphor and anomaly in the following order and by the following characteristicsthe above analysis also illustrates i hope why metonymy and metaphor are easily confused both are nonliteral and are found through the discovery of some aspect shared by the source a preference and the target in the above case a surface nounthe differences are how that aspect is selected the operations that follow the effect those operations produce and subsequent processingin the case of metonymy the selected aspect forms a regular semantic relationship with a property from the target there is substitution ie replacement of one concept with another hence the apparent referential function of metonymy and is unclear at presentin the case of metaphor the selected aspect is relevant forms an analogy with another aspect from the target and the effect is of surprise discovery of similarity between the two concepts and the discovered analogy is used to unearth further similarities between the two concepts and to guide subsequent sentence interpretationmoreover the view of metaphor in cs contains elements of the selection restrictions view the comparisons view and the interactions view of metaphorit should be emphasized that the met method has only been applied to a small 
set of english sentencesmetonymy interpretation has been investigated only for adjectivenoun and subjectverbobject constructions metaphor interpretation only for the latterthe best avenue for progress with the met method appears to be the extensions to metaphor interpretation described in section 6in the meantime i am looking for sentences that contain semantic relations consisting of a metonymy followed by a metaphorexample 32 quotamerica believes in democracyquot on a related point some sentences are interesting in this respect because they have either a metaphorical or metonymic interpretationin for example quotare we viewing america metaphorically as something which can believe or are we using it metonymically to refer to the typical inhabitant or the majority of inhabitants of americaquot example 33 quotprussia invaded france in 1870quot sentence which was discussed in a group working on beliefs at the crl also has separate metonymic and metaphorical interpretationsthe key semantic relation is between prussia and invadethe relation is nonliteral because army is the expected agent of invade and prussia is a country not an armywhat then is the semantic relation between prussia and armyone possibility is that a chain of metonymies is involved that the army is controlled by the government which also controls prussiaa second possibility is that prussia is understood metaphorically as being an animate thing that extends itself into francei would like to thank the many people at the cognitive studies centre university of essex the computing research laboratory new mexico state university and the centre for systems science simon fraser university with whom i have had fruitful discussions over the years especially those in the beliefs group at the crl others at the crl and colleagues in the css who made helpful comments on earlier drafts of this paper a special word of thanks for the help given by yorick wilks the director of the crl and nick cercone the director of the cssi also gratefully acknowledge the financial support provided by serc project grc68828 while at essex by the new mexico state legislature while at nmsu and by the advanced systems institute and the centre for systems science while at sfu
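To make the serial control structure of the met method easier to follow, here is a minimal procedural sketch of the discrimination loop described above and summarized in figures 1 and 4: satisfied preferences yield a literal relation; violated preferences trigger the ordered metonymic inference rules, with substitution of the discovered metonym and a renewed search for a sense-network path, so that chains of metonymies can precede a final relation; and when no metonymy applies, the presence or absence of a relevant analogy separates metaphor from anomaly. The sketch is an illustration only, not the meta5 program (which is written in Quintus Prolog); the function names, the network object, and the chain limit are hypothetical stand-ins.

# Minimal sketch (not meta5) of the met method's serial discrimination of
# preference-based semantic relations; all helper names are hypothetical.

# Metonymic inference rules, ordered from most to least common (section 4).
METONYMIC_RULES = [
    "part_for_whole",          # source-driven
    "property_for_whole",      # source-driven
    "container_for_contents",  # target-driven
    "coagent_for_activity",    # target-driven
    "artist_for_art_form",     # target-driven
]

def discriminate(source, target, network, max_chain=5):
    """Return a list: zero or more metonymies followed by exactly one of
    'literal', 'metaphorical', or 'anomalous'."""
    relation = []
    for _ in range(max_chain):                      # bound on metonymic chains (an assumption)
        # 1. Satisfied preference: an inclusive path (ancestor or same)
        #    between source and target gives a literal relation.
        if network.inclusive_path(source, target):
            return relation + ["literal"]
        # 2. Violated preference: try the metonymic inference rules in order;
        #    a successful rule yields a substitute metonym that replaces the
        #    source (source-driven rules) or the target (target-driven rules),
        #    and a new sense-network path is sought on the next iteration.
        substitute = None
        for rule in METONYMIC_RULES:
            substitute = network.apply_rule(rule, source, target)
            if substitute is not None:
                relation.append(rule)
                if network.source_driven(rule):
                    source = substitute
                else:
                    target = substitute
                break
        # 3. No applicable metonymic inference: a relevant analogy between
        #    source and target makes the relation metaphorical; otherwise
        #    the relation is anomalous.
        if substitute is None:
            if network.relevant_analogy(source, target):
                return relation + ["metaphorical"]
            return relation + ["anomalous"]
    return relation + ["anomalous"]                 # give up on very long chains

On the "play Bach" example of section 5, a loop of this shape would record an artist for art form metonymy, then a container for contents metonymy, and then terminate with a literal relation between music1 and the object preference of play12, matching the analysis given above.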
J91-1003
met a method for discriminating metonymy and metaphor by computerthe met method distinguishes selected examples of metonymy from metaphor and from literalness and anomaly in short english sentencesin the met method literalness is distinguished because it satisfies contextual constraints that the nonliteral others all violatemetonymy is discriminated from metaphor and anomaly in a way that 1 supports lakoff and johnson view that in metonymy one entity stands for another whereas in metaphor one entity is viewed as another 2 permits chains of metonymies and 3 allows metonymies to cooccur with instances of either literalness metaphor or anomalymetaphor is distinguished from anomaly because the former contains a relevant analogy unlike the latterthe met method is part of collative semantics a semantics for natural language processing and has been implemented in a computer program called meta5some examples of meta5 analysis of metaphor and metonymy are giventhe met method is compared with approaches from artificial intelligence linguistics philosophy and psychologywe use selectional preference violation technique to detect metaphorswe developed a system called met capable of discriminating between literalness metonymy metaphor and anomalywe build a system met which is designed to distinguish both metaphor and metonymy from literal text providing special techniques for processing these instances of figurative language
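The summary above isolates the relevant analogy as the test that separates metaphorical from anomalous relations. The fragment below is a rough, hedged sketch of that cell-matching step as described in sections 4 and 5: the relevant cell of the source's preference (for "my car drinks gasoline", the cell [animal1, drink1, drink1] from animal1) is matched against the target's inherited cells, and an analogy is recognized only when sister network paths hold between the word senses in corresponding positions (drink1 and use2 as kinds of expending, drink1 and gasoline1 as kinds of liquid). The parent map, the _v/_n suffixes distinguishing verb and noun senses, and the extra cell for car1 are simplified illustrations, not meta5's actual sense-network or sense-frames.

# Rough sketch (not meta5) of recognizing a relevant analogy by cell matching.
# Cells are lists of word senses; the sense-network is reduced to a parent map.
# The _v/_n suffixes distinguish verb and noun senses here, since the paper
# numbers senses separately for each part of speech.

PARENT = {
    "drink1_v": "expend1", "use2": "expend1",       # both verbs are kinds of expending
    "drink1_n": "liquid1", "gasoline1": "liquid1",  # both nouns are kinds of liquid
    "animal1": "organism1", "car1": "vehicle1",
    "carry1": "move1", "passenger1": "human_being1",
}

def sister(a, b):
    # Sister path: two distinct senses sharing the same immediate superordinate.
    return a != b and a in PARENT and b in PARENT and PARENT[a] == PARENT[b]

def relevant_analogy(relevant_cell, target_cells):
    """A relevant analogy is a sister match of the source's relevant cell:
    sister network paths between the senses in corresponding positions
    (the first, 'owner' position is ignored, as in the car/animal example)."""
    for cell in target_cells:
        if len(cell) == len(relevant_cell) and all(
            sister(a, b) for a, b in zip(relevant_cell[1:], cell[1:])
        ):
            return cell
    return None

# "My car drinks gasoline": the relevant cell of animal1 (the agent preference
# of drink1) matches [car1, use2, gasoline1], so the relation is metaphorical.
print(relevant_analogy(["animal1", "drink1_v", "drink1_n"],
                       [["car1", "carry1", "passenger1"],
                        ["car1", "use2", "gasoline1"]]))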
the generative lexicon in this paper i will discuss four major topics relating to current research in lexical semantics methodology descriptive coverage adequacy of the representation and the computational usefulness of representations in addressing these issues i will discuss what i think are some of the central problems facing the lexical semantics community and suggest ways of best approaching these issues then i will provide a method for the decomposition of lexical categories and outline a theory of lexical semantics embodying a notion of cocompositionality and type coercion as well as several levels of semantic description where the semantic load is spread more evenly throughout the lexicon i argue that lexical decomposition is possible if it is performed generatively rather than assuming a fixed set of primitives i will assume a fixed number of generative devices that can be seen as constructing semantic expressions i develop a theory of qualia structure a representation language for lexical items which renders much lexical ambiguity in the lexicon unnecessary while still explaining the systematic polysemy that words carry finally i discuss how individual lexical structures can be integrated into the larger lexical knowledge base through a theory of lexical inheritance this provides us with the necessary principles of global organization for the lexicon enabling us to fully integrate our natural language lexicon into a conceptual whole in this paper i will discuss four major topics relating to current research in lexical semantics methodology descriptive coverage adequacy of the representation and the computational usefulness of representations in addressing these issues i will discuss what i think are some of the central problems facing the lexical semantics community and suggest ways of best approaching these issuesthen i will provide a method for the decomposition of lexical categories and outline a theory of lexical semantics embodying a notion of cocompositionality and type coercion as well as several levels of semantic description where the semantic load is spread more evenly throughout the lexiconi argue that lexical decomposition is possible if it is performed generativelyrather than assuming a fixed set of primitives i will assume a fixed number of generative devices that can be seen as constructing semantic expressionsi develop a theory of qualia structure a representation language for lexical items which renders much lexical ambiguity in the lexicon unnecessary while still explaining the systematic polysemy that words carryfinally i discuss how individual lexical structures can be integrated into the larger lexical knowledge base through a theory of lexical inheritancethis provides us with the necessary principles of global organization for the lexicon enabling us to fully integrate our natural language lexicon into a conceptual wholei believe we have reached an interesting turning point in research where linguistic studies can be informed by computational tools for lexicology as well as an appreciation of the computational complexity of large lexical databaseslikewise computational research can profit from an awareness of the grammatical and syntactic distinctions of lexical items natural language processing systems must account for these differences in their lexicons and grammarsthe wedding of these disciplines is so important in fact that i believe it will soon be difficult to carry out serious computational research in the fields of linguistics and nlp without the help of electronic dictionaries and computational lexicographic resources positioned at the center of this
synthesis is the study of word meaning lexical semantics which is currently witnessing a revivalin order to achieve a synthesis of lexical semantics and nlp i believe that the lexical semantics community should address the following questions before addressing these questions i would like to establish two basic assumptions that will figure prominently in my suggestions for a lexical semantics frameworkthe first is that without an appreciation of the syntactic structure of a language the study of lexical semantics is bound to failthere is no way in which meaning can be completely divorced from the structure that carries itthis is an important methodological point since grammatical distinctions are a useful metric in evaluating competing semantic theoriesthe second point is that the meanings of words should somehow reflect the deeper conceptual structures in the system and the domain it operates inthis is tantamount to stating that the semantics of natural language should be the image of nonlinguistic conceptual organizing principles computational lexical semantics should be guided by the following principlesfirst a clear notion of semantic wellformedness will be necessary to characterize a theory of possible word meaningthis may entail abstracting the notion of lexical meaning away from other semantic influencesfor instance this might suggest that discourse and pragmatic factors should be handled differently or separately from the semantic contributions of lexical items in compositionalthough this is not a necessary assumption and may in fact be wrong it may help narrow our focus on what is important for lexical semantic descriptionssecondly lexical semantics must look for representations that are richer than thematic role descriptions as argued in levin and rappaport named roles are useful at best for establishing fairly general mapping strategies to the syntactic structures in languagethe distinctions possible with thetaroles are much too coarsegrained to provide a useful semantic interpretation of a sentencewhat is needed i will argue is a principled method of lexical decompositionthis presupposes if it is to work at all a rich recursive theory of semantic composition the notion of semantic wellformedness mentioned above and an appeal to several levels of interpretation in the semantics thirdly and related to the point above the lexicon is not just verbsrecent work has done much to clarify the nature of verb classes and the syntactic constructions that each allows yet it is not clear whether we are any closer to understanding the underlying nature of verb meaning why the classes develop as they do and what consequences these distinctions have for the rest of the lexicon and grammarthe curious thing is that there has been little attention paid to the other lexical categories that is we have little insight into the semantic nature of adjectival predication and even less into the semantics of nominalsnot until all major categories have been studied can we hope to arrive at a balanced understanding of the lexicon and the methods of compositionstepping back from the lexicon for a moment let me say briefly what i think the position of lexical research should be within the larger semantic pictureever since the earliest attempts at real text understanding a major problem has been that of controlling the inferences associated with the interpretation processin other words how deep or shallow is the understanding of a textwhat is the unit of wellformedness when doing natural language understanding 
the sentence utterance paragraph or discoursethere is no easy answer to this question because except for the sentence these terms are not even formalizable in a way that most researchers would agree onit is my opinion that the representation of the context of an utterance should be viewed as involving many different generative factors that account for the way that language users create and manipulate the context under constraints in order to be understoodwithin such a theory where many separate semantic levels have independent interpretations the global interpretation of a quotdiscoursequot is a highly flexible and malleable structure that has no single interpretationthe individual sources of semantic knowledge compute local inferences with a high degree of certainty when integrated together these inferences must be globally coherent a state that is accomplished by processes of cooperation among separate semantic modulesthe basic result of such a view is that semantic interpretation proceeds in a principled fashion always aware of what the source of a particular inference is and what the certainty of its value issuch an approach allows the reasoning process to be both tractable and computationally efficientthe representation of lexical semantics therefore should be seen as just one of many levels in a richer characterization of contextual structuregiven what i have said let us examine the questions presented above in more detailfirst let us turn to the issue of methodologyhow can we determine the soundness of our methodare new techniques available now that have not been adequately exploredvery briefly one can summarize the most essential techniques assumed by the field in some way as follows such alternations reveal subtle distinctions in the semantic and syntactic behavior of such verbsthe lexical semantic representations of these verbs are distinguishable on the basis of such tests is not dependent on the syntactic context this is illustrated in example 3 where a killing always entails a dyingexample 3 when the same lexical item may carry different entailments in different contexts we say that the entailments are sensitive to the syntactic contexts for example forget in example 4 example 4 a john forgot that he locked the door b john forgot to lock the doorexample 4a has a factive interpretation of forget that 4b does not carry in fact 4b is counterfactiveother cases of contextual specification involve aspectual verbs such as begin and finish as shown in example 5example 5 the exact meaning of the verb finish varies depending on the object it selects assuming for these examples the meanings finish smoking or finish drinkingwhile female behaves as a simple intersective modifier in 8b certain modifiers such as alleged in 8a cannot be treated as simple attributes rather they create an intensional context for the head they modifyan even more difficult problem for compositionality arises from phrases containing frequency adjectives as shown in 8c and 8dexample 8 the challenge here is that the adjective does not modify the nominal head but the entire proposition containing it a similar difficulty arises with the interpretation of scalar predicates such as fast in example 9both the scale and the relative interpretation being selected for depends on the noun that the predicate is modifying a a fast typist one who types quickly b a fast car one which can move quickly c a fast waltz one with a fast tempo such data raise serious questions about the principles of compositionality and how ambiguity 
should be accounted for by a theory of semanticsthis just briefly characterizes some of the techniques that have been useful for arriving at pretheoretic notions of word meaningwhat has changed over the years are not so much the methods themselves as the descriptive details provided by each testone thing that has changed however and this is significant is the way computational lexicography has provided stronger techniques and even new tools for lexical semantics research see atkins for sense discrimination tasks amsler atkins et al for constructing concept taxonomies wilks et al for establishing semantic relatedness among word senses and boguraev and pustejovsky for testing new ideas about semantic representationsturning now to the question of how current theories compare with the coverage of lexical semantic data there are two generalizations that should be madefirst the taxonomic descriptions that have recently been made of verb classes are far superior to the classifications available twenty years ago using mainly the descriptive vocabulary of talmy and jackendoff fine and subtle distinctions are drawn that were not captured in the earlier primitivesbased approach of schank or the frame semantics of fillmore as an example of the verb classifications developed by various researchers consider the grammatical alternations in the example sentences below these three pairs show how the semantics of transitive motion verbs is similar in some respects to reciprocal verbs such as meetthe important difference however is that the reciprocal interpretation requires that both subject and object be animate or moving hence 12b is illformedanother example of how diathesis reveals the underlying semantic differences between verbs is illustrated in examples 13 and 14 belowa construction called the conative involves adding the preposition at to the verb changing the verb meaning to an action directed toward an object a mary cut the bread b mary cut at the bread a mary broke the bread bmary broke at the breadwhat these data indicate is that the conative is possible only with verbs of a particular semantic class namely verbs that specify the manner of an action that results in a change of state of an objectas useful and informative as the research on verb classification is there is a major shortcoming with this approachunlike the theories of katz and fodor wilks and quillian there is no general coherent view on what the entire lexicon will look like when semantic structures for other major categories are studiedthis can be essential for establishing a globally coherent theory of semantic representationon the other hand the semantic distinctions captured by these older theories were often too coarsegrainedit is clear therefore that the classifications made by levin and her colleagues are an important starting point for a serious theory of knowledge representationi claim that lexical semantics must build upon this research toward constructing a theory of word meaning that is integrated into a linguistic theory as well as interpreted in a real knowledge representation systemin this section i turn to the question of whether current theories have changed the way we look at representation and lexicon designthe question here is whether the representations assumed by current theories are adequate to account for the richness of natural language semanticsit should be pointed out here that a theory of lexical meaning will affect the general design of our semantic theory in several waysif we view the goal of a semantic 
theory as being able to recursively assign meanings to expressions accounting for phenomena such as synonymy antonymy polysemy metonymy etc then our view of compositionality depends ultimately on what the basic lexical categories of the language denoteconventional wisdom on this point paints a picture of words behaving as either active functors or passive arguments but we will see that if we change the way in which categories can denote then the form of compositionality itself changestherefore if done correctly lexical semantics can be a means to reevaluate the very nature of semantic composition in languagein what ways could lexical semantics affect the larger methods of composition in semanticsi mentioned above that most of the careful representation work has been done on verb classesin fact the semantic weight in both lexical and compositional terms usually falls on the verbthis has obvious consequences for how to treat lexical ambiguityfor example consider the verb bake in the two sentences belowatkins kegl and levin demonstrate that verbs such as bake are systematically ambiguous with both a changeofstate sense and a create sense a similar ambiguity exists with verbs that allow the resulative construction shown in examples 16 and 17 and discussed in dowty jackendoff and levin and rapoport these two verbs differ in their semantic representations where run in 18a means gotobymeansofrunning while in 18b it means simply movebyrunning the methodology described above for distinguishing word senses is also assumed by those working in more formal frameworksfor example dowty proposes multiple entries for control and raising verbs and establishes their semantic equivalence with the use of meaning postulatesthat is the verbs in examples 19 and 20 are lexically distinct but semantically related by rules3 given the conventional notions of function application and composition there is little choice but to treat all of the above cases as polysemous verbsyet something about the systematicity of such ambiguity suggests that a more general and simpler explanation should be possibleby relaxing the conditions on how the meaning of a complex expression is derived from its parts i will in fact propose a very straightforward explanation for these cases of logical polysemyin this section i will outline what i think are the basic requirements for a theory of computational semanticsi will present a conservative approach to decomposition where lexical items are minimally decomposed into structured forms rather than sets of featuresthis will provide us with a generative framework for the composition of lexical meanings thereby defining the wellformedness conditions for semantic expressions in a languagewe can distinguish between two distinct approaches to the study of word meaning primitivebased theories and relationbased theoriesthose advocating primitives assume that word meaning can be exhaustively defined in terms of a fixed set of primitive elements inferences are made through the primitives into which a word is decomposedin contrast to this view a relationbased theory of word meaning claims that there is no need for decomposition into primitives if words are associated through a network of explicitly defined links sometimes referred to as meaning postulates these links establish any inference between words as an explicit part of a network of word concepts4 what i would like to do is to propose a new way of viewing primitives looking more at the generative or compositional aspects of lexical semantics 
rather than the decomposition into a specified number of primitivesmost approaches to lexical semantics making use of primitives can be characterized as using some form of featurebased semantics since the meaning of a word is essentially decomposable into a set of features even those theories that rely on some internal structure for word meaning do not provide a complete characterization for all of the wellformed expressions in the languagejackendoff comes closest but falls short of a comprehensive semantics for all categories in languageno existing framework in my view provides a method for the decomposition of lexical categorieswhat exactly would a method for lexical decomposition give usinstead of a taxonomy of the concepts in a language categorized by sets of features such a method would tell us the minimal semantic configuration of a lexical itemfurthermore it should tell us the compositional properties of a word just as a grammar informs us of the specific syntactic behavior of a certain categorywhat we are led to therefore is a generative theory of word meaning but one very different from the generative semantics of the 1970sto explain why i am suggesting that lexical decomposition proceed in a generative fashion rather than the traditional exhaustive approach let me take as a classic example the word closed as used in example 21 athe door is closed bthe door closed c john closed the doorlakoff jackendoff and others have suggested that the sense in 21c must incorporate something like becausetobecomenotopen for its meaningsimilarly a verb such as give specifies a transfer from one person to another eg becausetohavemost decomposition theories assume a set of primitives and then operate within this set to capture the meanings of all the words in the languagethese approaches can be called exhaustive since they assume that with a fixed number of primitives complete definitions of lexical meaning can be givenin the sentences in 21 for example close is defined in terms of the negation of a primitive openany method assuming a fixed number of primitives however runs into some wellknown problems with being able to capture the full expressiveness of natural languagethese problems are not however endemic to all decomposition approachesi would like to suggest that lexical decomposition is possible if it is performed generativelyrather than assuming a fixed set of primitives let us assume a fixed number of generative devices that can be seen as constructing semantic expressionsjust as a formal language is described more in terms of the productions in the grammar than its accompanying vocabulary a semantic language is definable by the rules generating the structures for expressions rather than the vocabulary of primitives itself6 how might this be doneconsider the sentences in example 21 againa minimal decomposition on the word closed is that it introduces an opposition of terms closed and notclosedfor the verbal forms in 21b and 21c both terms in this opposition are predicated of different subevents denoted by the sentencesin 21a this opposition is left implicit since the sentence refers to a single stateany minimal analysis of the semantics of a lexical item can be termed a generative operation since it operates on the predicate already literally provided by the wordthis type of analysis is essentially aristotle principle of opposition and it will form the basis of one level of representation for a lexical itemthe essential opposition denoted by a predicate forms part of what i will call the 
qualia structure of that lexical itembriefly the qualia structure of a word specifies four aspects of its meaning i will call these aspects of a word meaning its constitutive role formal role telic role and its agentive role respectivelythis minimal semantic distinction is given expressive force when combined with a theory of event typesfor example the predicate in 21a denotes the state of the door being closedno opposition is expressed by this predicatein 21b and 21c however the opposition is explicitly part of the meaning of the predicateboth these predicates denote what i will call transitionsthe intransitive use of close in 21b makes no mention of the causer yet the transition from notclosed to closed is still entailedin 2k the event that brings about the closed state of the door is made more explicit by specifying the actor involvedthese differences constitute what i call the event structure of a lexical itemboth the opposition of predicates and the specification of causation are part of a verb semantics and are structurally associated with slots in the event template for the wordas we will see in the next section there are different inferences associated with each event type as well as different syntactic behaviors because the lexical semantic representation of a word is not an isolated expression but is in fact linked to the rest of the lexicon in section 7 i suggest how the global integration of the semantics for a lexical item is achieved by structured inheritance through the different qualia associated with a wordi call this the lexical inheritance structure for the wordfinally we must realize that part of the meaning of a word is how it translates the underlying semantic representations into expressions that are utilized by the syntaxthis is what many have called the argument structure for a lexical itemi will build on grimshaw recent proposals for how to define the mapping from the lexicon to syntax to a particular vocabulary of primitives a lexical semantics should provide a method for the decomposition and composition of lexical items7 some of these roles are reminiscent of descriptors used by various computational researchers such as wilks hayes and hobbs et al within the theory outlined here these roles determine a minimal semantic description of a word that has both semantic and grammatical consequencesthis provides us with an answer to the question of what levels of semantic representation are necessary for a computational lexical semanticsin sum i will argue that lexical meaning can best be captured by assuming the following levels of representationthese four structures essentially constitute the different levels of semantic expressiveness and representation that are needed for a computational theory of lexical semanticseach level contributes a different kind of information to the meaning of a wordthe important difference between this highly configurational approach to lexical semantics and featurebased approaches is that the recursive calculus defined for word meaning here also provides the foundation for a fully compositional semantics for natural language and its interpretation into a knowledge representation modela logical starting point for our investigations into the meaning of words is what has been called the functional structure or argument structure associated with verbswhat originally began as the simple listing of the parameters or arguments associated with a predicate has developed into a sophisticated view of the way arguments are mapped onto syntactic 
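To make the four levels named above concrete, here is a minimal sketch (not the paper's own notation) of how a single entry for the causative verb close might bundle argument structure, event structure, qualia structure, and an inheritance pointer; the schema, field names, and restriction labels are illustrative assumptions only.

```python
# Minimal sketch of the four representation levels discussed above.
# The schema and the sample entry for "close" are illustrative assumptions,
# not the paper's notation.

CLOSE_TRANSITIVE = {
    "lemma": "close",
    # argument structure: the participants and rough selectional restrictions
    "argstr": [{"arg": "arg1", "restriction": "animate"},
               {"arg": "arg2", "restriction": "physical-object"}],
    # event structure: a transition decomposed into two ordered subevents
    "eventstr": {"sort": "transition",
                 "subevents": ["e1:process(act)", "e2:state(closed)"]},
    # qualia structure: the closed/not-closed opposition lives in the formal role
    "qualia": {"formal": "closed(arg2)",
               "agentive": "act(arg1, arg2)",
               "telic": None,
               "constitutive": None},
    # lexical inheritance: where the entry sits in the global lexicon
    "inherits_from": "change-of-state-verb",
}

def event_sort(entry):
    """Return the event sort (state, process, or transition) of an entry."""
    return entry["eventstr"]["sort"]

if __name__ == "__main__":
    print(event_sort(CLOSE_TRANSITIVE))   # -> "transition"
```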
expressions one of the most important contributions has been the view that argument structure is highly structured independent of the syntaxwilliams distinction between external and internal arguments and grimshaw proposal for a hierarchically structured representation provide us with the basic syntax for one aspect of a word meaningthe argument structure for a word can be seen as a minimal specification of its lexical semanticsby itself it is certainly inadequate for capturing the semantic characterization of a lexical item but it is a necessary componentas mentioned above the theory of decomposition being outlined here is based on the central idea that word meaning is highly structured and not simply a set of semantic featureslet us assume this is the casethen the lexical items in a language will essentially be generated by the recursive principles of our semantic theoryone level of semantic description involves an eventbased interpretation of a word or phrasei will call this level the event structure of a word the event structure of a word is one level of the semantic specification for a lexical item along with its argument structure qualia structure and inheritance structurebecause it is recursively defined on the syntax it is also a property of phrases and sentencesi will assume a sortal distinction between three classes of events states processes and transitions unlike most previous sortal classifications for events i will adopt a subeventual analysis or predicates as argued in pustejovsky and independently proposed in croft in this view an event sort such as et may be decomposed into two sequentially structured subevents aspects of the proposal will be introduced as needed in the following discussionin section 5 i demonstrated how most of the lexical semantics research has concentrated on verbal semanticsthis bias influences our analyses of how to handle ambiguity and certain noncompositional structurestherefore the only way to relate the different senses for the verbs in the examples below was to posit separate entries8 this proposal is an extension of ideas explored by bach higginbotham and allen for a full discussion see pustejovsky see tenny for a proposal on how aspectual distinctions are mapped to the syntaxa similar philosophy has lead linguists to multiply word senses in constructions involving control and equiverbs where different syntactic contexts necessitate different semantic typesnormally compositionality in such structures simply refers to the application of the functional element the verb to its argumentsyet such examples indicate that in order to capture the systematicity of such ambiguity something else is at play where a richer notion of composition is operativewhat then accounts for the polysemy of the verbs in the examples abovethe basic idea i will pursue is the followingrather than treating the expressions that behave as arguments to a function as simple passive objects imagine that they are as active in the semantics as the verb itselfthe product of function application would be sensitive to both the function and its active argumentsomething like this is suggested in keenan and faltz as the meaningform correlation principlei will refer to such behavior as cocompositionality what i have in mind can best be illustrated by returning to the examples in 28 a john baked the potato b john baked the cakerather than having two separate word senses for a verb such as bake suppose there is simply one a changeofstate readingwithout going into the details of the analysis 
let us assume that bake can be lexically specified as denoting a process verb and is minimally represented as example 29quot lexical semantics for bake11 in order to explain the shift in meaning of the verb we need to specify more clearly what the lexical semantics of a noun isi have argued above that lexical semantic theory must make a logical distinction between the following qualia roles the constitutive formal telic and agentive rolesnow let us examine these roles in more detailone can distinguish between potato and cake in terms of how they come about the former 9 for example dowty proposes multiple entries for verbs taking different subcategorizationsgazdar et al adopting the analysis in klein and sag propose a set of lexical typeshifting operations to capture sense relatednesswe return to this topic below10 i will be assuming a davidsonianstyle representation for the discussion belowpredicates in the language are typed for a particular eventsort and thematic roles are treated as partial functions over the event 11 more precisely the process el should reflect that it is the substance contained in the object x that is affectedsee footnote 20 for explanation is a natural kind while the latter is an artifactknowledge of an object includes not just being able to identify or refer but more specifically being able to explain how an artifact comes into being as well as what it is used for the denotation of an object must identify these rolesthus any artifact can be identified with the state of being that object relative to certain predicatesas is well known from work on event semantics and aktionsarten it is a general property of processes that they can shift their event type to become a transition event this particular fact about event structures together with the semantic distinction made above between the two object types provides us with an explanation for what i will refer to as the logical polysemy of verbs such as bakeas illustrated in example 30a when the verb takes as its complement a natural kind such as potato the resulting semantic interpretation is unchanged ie a process reading of a statechangethis is because the noun does not quotprojectquot an event structure of its ownthat is relative to the process of baking potato does not denote an eventtype12 what is it then about the semantics of cake that shifts this core meaning of bake from a statechange predicate to its creation senseas just suggested this additional meaning is contributed by specific lexical knowledge we have about artifacts and cake in particular namely there is an event associated with that object quotcoming into beingquot in this case the process of bakingthus just as a verb can select for an argumenttype we can imagine that an argument is itself able to select the predicates that govern iti will refer to such constructions as cospecificationsinformally relative to the process bake the noun cake carries the selectional information that it is a process of quotbakingquot that brings it about13 we can illustrate this schematically in example 31 where the complement effectively acts like a quotstagelevelquot event predicate relative to the process eventtype of the verb 14 the change in meaning in 31 comes not from the semantics of bake but rather in composition with the complement of the verb at the level of the entire verb phrasethe quotcreationquot sense arises from the semantic role of cake that specifies it is an artifact thus we can derive both word senses of verbs like bake by putting some of the semantic 
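A rough sketch of the co-composition idea being developed here for bake: the verb contributes a single process sense, and the complement's qualia determine whether the phrase keeps the change-of-state reading or shifts to the creation reading. The toy lexicon and the shift test below are simplifying assumptions, not the paper's formalism.

```python
# Sketch of co-composition for "bake": one verb sense, two derived VP readings.
# The toy noun entries below are assumptions made for illustration.

BAKE = {"lemma": "bake", "event_sort": "process", "meaning": "change_of_state"}

NOUNS = {
    "potato": {"kind": "natural_kind", "qualia": {"agentive": None}},
    # the artifact's agentive role records the event that brings it about
    "cake":   {"kind": "artifact", "qualia": {"agentive": "bake"}},
}

def compose(verb, noun_lemma):
    """Return the reading of 'verb + NP', letting the complement co-specify."""
    noun = NOUNS[noun_lemma]
    agentive = noun["qualia"].get("agentive")
    if agentive == verb["lemma"]:
        # the complement selects its governing verb: process -> transition,
        # i.e. the "creation" reading emerges at the level of the whole VP
        return {"event_sort": "transition", "meaning": "creation"}
    return {"event_sort": verb["event_sort"], "meaning": verb["meaning"]}

if __name__ == "__main__":
    print(compose(BAKE, "potato"))  # change-of-state reading
    print(compose(BAKE, "cake"))    # creation reading
```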
weight on the npthis view suggests that in such cases the verb itself is not polysemousrather the sense of quotcreatequot is part of the meaning of cake by virtue of it being an artifactthe verb appears polysemous because certain complements add to the basic meaning by virtue of what they denotewe return to this topic below there are several interesting things about such collocationsfirst because the complement quotselectsquot the verb that governs it the semantics of the phrase is changedthe semantic quotconnectednessquot as it were is tighter when cospecification obtainsin such cases the verb is able to successfully drop the dative pp argument as shown below in when the complement does not select the verb governing it dativedrop is ungrammatical as seen in iaromeo gave the lecture bhamlet mailed a letterfor discussion see pustejovsky and provide a formal treatment for how the nominal semantics is expressed in these examplessimilar principles seem to be operating in the resultative constructions in examples 23 and 24 namely a systematic ambiguity is the result of principles of semantic composition rather than lexical ambiguity of the verbsfor example the resultative interpretations for the verbs hammer in 23 and wipe in 24 arise from a similar operation where both verbs are underlyingly specified with an event type of processthe adjectival phrases flat and clean although clearly stative in nature can also be interpreted as stagelevel event predicates notice then how the resultative construction requires no additional word sense for the verb nor any special semantic machinery for the resultative interpretation to be availableschematically this is shown in example 32in fact this analysis explains why it is that only process verbs participate in the resultative construction and why the resultant phrase must be a subset of the states namely stagelevel event predicatesbecause the meaning of the sentence in 32 is determined by both function application of hammer to its arguments and function application of flat to the eventtype of the verb this is an example of cocompositionality having discussed some of the behavior of logical polysemy in verbs let us continue our discussion of lexical ambiguity with the issue of metonymymetonymy where a subpart or related part of an object quotstands forquot the object itself also poses a problem for standard denotational theories of semanticsto see why imagine how our semantics could account for the quotreference shiftsquot of the complements shown in example 3316 example 33 the complements of enjoy in 33 and begin in 33 are not what these verbs normally select for semantically namely a property or actionsimilarly the verb veto normally selects for an object that is a legislative bill or a suggestionsyntactically these may simply be additional subcategorizations but how are these examples related semantically to the normal interpretationsi suggest that these are cases of semantic type coercion where the verb has coerced the meaning of a term phrase into a different semantic typebriefly type coercion can be defined as follows17as these examples illustrate the syntactic argument to a verb is not always the same logical argument in the semantic relationalthough superficially similar to cases of general metonymy there is an interesting systematicity to such shifts in meaning that we will try to characterize below as logical metonymythe sentences in 34 illustrate the various syntactic consequences of metonymy and coercion involving experiencer verbs while those 
in 35 show the different metonymic extensions possible from the causing event in a killingthe generalization here is that when a verb selects an event as one of its arguments type coercion to an event will permit a limited range of logical metonymiesfor example in sentences 34 the entire event is directly referred to while in 34 only a participant from the coerced event reading is directly expressedother examples of coercion include quotconcealed questionsquot 36 and quotconcealed exclamationsquot 37 that is although the italicized phrases syntactically appear as nps their semantics is the same as if the verbs had selected an overt question or exclamationin explaining the behavior of the systematic ambiguity above i made reference to properties of the noun phrase that are not typical semantic properties for nouns in linguistics eg artifact natural kindin pustejovsky and pustejovsky and anick i suggest that there is a system of relations that characterizes the semantics of nominals very much like the argument structure of a verbi called this the qualia structure inspired by aristotle theory of explanation and ideas from moravcsik essentially the qualia structure of a noun determines its meaning as much as the list of arguments determines a verb meaningthe elements that make up a qualia structure include notions such as container space surface figure artifact and so onquot as stated earlier there are four basic roles that constitute the qualia structure for a lexical itemhere i will elaborate on what these roles are and why they are usefulthey are given in example 38 where each role is defined along with the possible values that these roles may assumewhen we combine the qualia structure of a np with the argument structure of a verb we begin to see a richer notion of compositionality emerging one that looks very much like objectoriented approaches to programming to illustrate these structures at play let us consider a few examplesassume that the decompositional semantics of a nominal includes a specification of its qualia structure for example a minimal semantic description for the noun novel will include values for each of these roles as shown in example 40 where x can be seen as a distinguished variable representing the object itselfthis structures our basic knowledge about the object it is a narrative typically in the form of a book for the purpose of reading and is an artifact created by a transition event of writingobserve how this structure differs minimally but significantly from the qualia structure for the noun dictionary in example 41notice the differences in the values for the constitutive and telic rolesthe purpose of a dictionary is an activity of referencing which has an event structure of a processi will now demonstrate that such structured information is not only useful for nouns but necessary to account for their semantic behaviori suggested earlier that for cases such as 33 repeated below there was no need to posit a separate lexical entry for each verb where the syntactic and semantic types had to be represented explicitlyexample 42 rather the verb was analyzed as coercing its complement to the semantic type it expectedto illustrate this consider 42the type for begin within a standard typed intensional logic is and its lexical semantics is similar to that of other subject control verbs assuming an event structure such as that of krifka or pustejovsky we can convert this lexical entry into a representation consistent with a logic making use of eventtypes by means of the 
following meaning postulate2 vpvx1 el pa 2ea p il this allows us to type the verb begin as taking a transition event as its first argument represented in example 45because the verb requires that its first argument be of type transition the complement in 33 will not match without some sort of shiftit is just this kind of context where the complement is coerced to another typethe coercion dictates to the complement that it must conform to its type specification and the qualia roles may 20 it should be pointed out that the lexical structure for the verb bake given above in 30 and 31 can more properly be characterized as a process acting on various qualia of the arguments in fact have values matching the correct typefor purposes of illustration the qualia structure for novel from 41 can be represented as the logical expression in example 46the coercion operation on the complement in the above examples can be seen as a request to find any transition event associated with the nounas we saw above the qualia structure contains just this kind of informationwe can imagine the qualia roles as partial functions from a noun denotation into its subconstituent denotationsfor our present purposes we abbreviate these functions as qf qc qt qawhen applied they return the value of a particular qualia rolefor example the purpose of a novel is for reading it shown in 47 while the mode of creating a novel is by writing it represented in 47as the expressions in 47 suggest there are in fact two obvious interpretations for this sentence in 42 a john began to read a novel b john began to write a novelone of these is selected by the coercing verb resulting in a complement that has a eventpredicate interpretation without any syntactic transformations 21 the derivation in 49 and the structure in 49 show the effects of this coercion on the verb complement using the telic value of nove122 21 there are of course an indefinite number of interpretations depending on pragmatic factors and various contextual influencesbut 1 maintain that there are only a finite number of default interpretations available in such constructionsthese form part of the lexical semantics of the nounadditional evidence for this distinction is given in pustejovsky and anick and briscoe et al 22 partee and rooth suggest that all expressions in the language can be assigned a base type while also being associated with a type ladderpustejovsky extends this proposal and argues that each expression a may have available to it a set of shifting operators which we call ea which operate over an expression changing its type and denotationby making reference to these operators directly in the rule of function application we can treat the functor polymorphically as illustrated belowthe fact that this is not a unique interpretation of the elliptical event predicate is in some ways irrelevant to the notion of type coercionthat there is some event involving the complement is required by the lexical semantics of the governing verb and the rules of type wellformedness and although there are many ways to act on a novel i argue that certain relations are quotprivilegedquot in the lexical semantics of the nounit is not the role of a lexical semantic theory to say what readings are preferred but rather which are available23 assuming the semantic selection given above for begin is correct we would predict that because of the process eventtype associated with the telic role for dictionary there is only one default interpretation for the sentence in 50 namely the agentive 
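The coercion step traced in this passage can be mimicked by a small sketch in which an event-selecting verb such as begin queries the complement's telic and agentive qualia and keeps only the values of the required event sort; the toy qualia entries and sort labels below are assumptions for illustration rather than the lexicon assumed in the paper.

```python
# Sketch of type coercion: "begin NP" is interpreted by pulling an event
# of the right sort out of the NP's qualia structure.
# The toy qualia entries below are illustrative assumptions.

QUALIA = {
    "novel":      {"telic": ("read", "transition"),
                   "agentive": ("write", "transition")},
    "dictionary": {"telic": ("reference", "process"),   # a process, not a transition
                   "agentive": ("compile", "transition")},
    "rock":       {"telic": None, "agentive": None},     # no default event available
}

def coerce_to_event(noun, required_sort="transition"):
    """Return the default event predicates obtainable from the noun's qualia."""
    roles = QUALIA[noun]
    readings = []
    for role in ("telic", "agentive"):
        value = roles.get(role)
        if value and value[1] == required_sort:
            readings.append(f"{value[0]}({noun})")
    return readings

if __name__ == "__main__":
    print(coerce_to_event("novel"))       # ['read(novel)', 'write(novel)']
    print(coerce_to_event("dictionary"))  # ['compile(dictionary)'] - one default reading
    print(coerce_to_event("rock"))        # [] - hence the oddness of "began a rock"
```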
event of quotcompilingquot 23 there are interesting differences in complement types between finish and completethe former takes both np and a gerundive vp while the latter takes only an np iajohn finished the book b john finished writing the book2ajohn completed the book bjohn completed writing the bookthe difference would indicate that contrary to some views lexical items need to carry both syntactic and semantic selectional information to determine the range of complements they may takenotice here also that complete tends to select the agentive role value for its complement and not the telic rolethe scope of semantic selection is explored at length in pustejovsky not surprisingly when the noun in complement position has no default interpretation within an event predicate as given by its qualia structure the resulting sentence is extremely odd amary began a rock bjohn finished the flowerthe semantic distinctions that are possible once we give semantic weight to lexical items other than verbs are quite widerangingthe next example i will consider concerns scalar modifiers such as fast that modify different predicates depending on the head they modifyif we think of certain modifiers as modifying only a subset of the qualia for a noun then we can view fast as modifying only the telic role of an objectthis allows us to go beyond treating adjectives such as fast as intersective modifiers for example as axcari a fast let us assume that an adjective such as fast is a member of the general type but can be subtyped as applying to the telic role of the noun being modifiedthat is it has as its type this gives rise directly to the different interpretations in example 52these interpretations are all derived from a single word sense for fastbecause the lexical semantics for this adjective indicates that it modifies the telic role of the noun it effectively acts as an event predicate rather than an attribute over the entire noun denotation as illustrated in example 53 for fast motorway as our final example of how the qualia structure contributes to the semantic interpretation of a sentence observe how the nominals window and door in examples 54 and 55 carry two interpretations a john crawled through the window bthe window is closedeach noun appears to have two word senses a physical object denotation and an aperture denotationpustejovsky and anick characterize the meaning of such quotdouble figuregroundquot nominals as inherently relational where both parameters are logically part of the meaning of the nounin terms of the qualia structure for this class of nouns the formal role takes as its value the figure of a physical object while the constitutive role assumes the invertfigure value of an aperture24 the foregrounding or backgrounding of a nominal qualia is very similar to argument structurechanging operations for verbsthat is in 55 paint applies to the formal role of the door while in 55 through will apply to the constitutive interpretation of the same npthe ambiguity with such nouns is a logical one one that is intimately linked to the semantic representation of the object itselfthe qualia structure then is a way of capturing this logical polysemyin conclusion it should be pointed out that the entire lexicon is organized around such logical ambiguities which pustejovsky and anick call lexical conceptual paradigmspustejovsky distinguishes the following systems and the paradigms that lexical items fall into example 57 such paradigms provide a means for accounting for the systematic ambiguity that may 
exist for a lexical itemfor example a noun behaving according to paradigm 57 24 there are many such classes of nominals both twodimensional such as those mentioned in the text and threedimensional such as quotroomquot quotfireplacequot and quotpipequot they are interesting semantically because they are logically ambiguous referring to either the object or the aperture but not bothboguraev and pustejovsky show how these logical polysemies are in fact encoded in dictionary definitions for these words exhibits a logical polysemy involving packaging or grinding operators eg haddock or lamb in previous sections i discussed lexical ambiguity and showed how a richer view of lexical semantics allows us to view a word meaning as being flexible where word senses could arise generatively by composition with other wordsthe final aspect of this flexibility deals with the logical associations a word has in a given context that is how this semantic information is organized as a global knowledge basethis involves capturing both the inheritance relations between concepts and just as importantly how the concepts are integrated into a coherent expression in a given sentencei will assume that there are two inheritance mechanisms at work for representing the conceptual relations in the lexicon fixed inheritance and projective inheritancethe first includes the methods of inheritance traditionally assumed in al and lexical research that is a fixed network of relations which is traversed to discover existing related and associated concepts in order to arrive at a comprehensive theory of the lexicon we need to address the issue of global organization and this involves looking at the various modes of inheritance that exist in language and conceptualizationsome of the best work addressing the issue of how the lexical semantics of a word ties into its deeper conceptual structure includes that of hobbs et al and wilks while interesting work on shared information structures in nlp domains is that of flickinger et al and evans and gazdar in addition to this static representation i will introduce another mechanism for structuring lexical knowledge the projective inheritance which operates generatively from the qualia structure of a lexical item to create a relational structure for ad hoc categoriesboth are necessary for projecting the semantic representations of individual lexical items onto a sentence level interpretationthe discussion here however will be limited to a description of projective inheritance and the notion of quotdegrees of prototypicalityquot of predicationi will argue that such degrees of salience or coherence relations can be explained in structural terms by examining a network of related lexical itemsquot i will illustrate the distinction between these mechanisms by considering the two sentences in example 58 and their relative prototypicality athe prisoner escaped last night bthe prisoner ate dinner last nightboth of these sentences are obviously wellformed syntactically but there is a definite sense that the predication in 58 is quottighterquot or more prototypical than that in 58what would account for such a differenceintuitively we associate prisoner with an escaping event more strongly than an eating eventyet this is not information that comes from a fixed inheritance structure but is rather usually assumed to be cornmonsense knowledgein what follows however i will show that such distinctions can be captured within a theory of lexical semantics by means of generating ad hoc categoriesfirst we 
give a definition for the fixed inheritance structure of a lexical item let q and p be concepts in our model of lexical organizationthen definition a sequence ii temporal succession temporal equivalence and act an operator adding agency to an argumentintuitively the space of concepts traversed by the application of such operators will be related expressions in the neighborhood of the original lexical itemthis space can be characterized by the following two definitions a series of applications of transformations 7rn generates a sequence of predicates called the projective expansion of qi pthe projective conclusion space p is the set of projective expansions generated from all elements of the conclusion space on role r of predicate q as p p i 430from this resulting representation we can generate a relational structure that can be considered the set of ad hoc categories and relations associated with a lexical item using these definitions let us return to the sentences in example 58i will assume that the noun prisoner has a qualia structure such as that shown in 60using the representation in 60 above i now trace part of the derivation of the projective conclusion space for prisonerinheritance structures are defined for each qualia role of an elementin the case above values are specified for only two rolesfor each role r we apply a projective transformation 7 onto the predicate q that is the value of that rolefor example from the telic role of prisoner we can generalize to the concept of being confinedfrom this concept we can apply the negation operator generating the predicate opposition of notconfined and confinedto this we apply the two temporal operators generating two states free before capture and free after capturefinally to these concepts if we apply the operator act varying who is responsible for the resulting transition event we generate the concepts turn in capture escape and releaseprojecting on telic role of prisoner these relations constitute the projective conclusion space for the telic role of prisoner relative to the application of the transformations mentioned abovesimilar operations on the formal role will generate concepts such as die and killgenerating such structures for all items in a sentence during analysis we can take those graphs that result in no contradictions to be the legitimate semantic interpretations of the entire sentencelet us now return to the sentences in example 58it is now clear why these two sentences differ in their prototypicality the predicate eat is not within the space of related concepts generated from the semantics of the np the prisoner escape however did fall within the projective conclusion space for the telic role of prisoner as shown in example 63we can therefore use such a procedure as one metric for evaluating the quotproximityquot of a predication in the examples above the difference in semanticality can now be seen as a structural distinction between the semantic representations for the elements in the sentencein this section i have shown how the lexical inheritance structure of an item relates in a generative fashion the decompositional structure of a word to a much larger set of concepts that are related in obvious wayswhat we have not addressed however is how the fixed inheritance information of a lexical item is formally derivable during compositionthis issue is explicitly addressed in briscoe et al as well as pustejovsky and briscoe in this paper i have outlined a framework for lexical semantic research that i believe can be useful 
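The following sketch, with invented operator encodings and a toy entry for prisoner, imitates the projective expansion just traced: a handful of generative operators applied to a qualia value produce a neighborhood of related predicates, and a predication counts as more prototypical when its predicate falls inside that neighborhood.

```python
# Sketch of a projective conclusion space: generate ad hoc categories from a
# qualia value and use membership as a crude prototypicality test.
# The operators and the mapping to lexicalized predicates are assumptions.

PRISONER = {"telic": "confined"}

# lexicalizations of (state-opposition, temporal phase, who-acts) combinations;
# these pairings are written by hand for illustration, not derived automatically
LEXICALIZED = {
    ("confined", "after", "subject"):     "turn_in",
    ("confined", "after", "other"):       "capture",
    ("not_confined", "after", "subject"): "escape",
    ("not_confined", "after", "other"):   "release",
}

def projective_expansion(entry):
    """Expand the telic value with negation, temporal, and agency operators."""
    base = entry["telic"]
    states = {base, "not_" + base}                  # opposition (negation) operator
    space = set()
    for state in states:
        for phase in ("before", "after"):           # temporal operators
            for actor in ("subject", "other"):      # act operator (who is responsible)
                pred = LEXICALIZED.get((state, phase, actor))
                if pred:
                    space.add(pred)
    return space

def prototypicality(entry, predicate):
    return "close" if predicate in projective_expansion(entry) else "distant"

if __name__ == "__main__":
    print(sorted(projective_expansion(PRISONER)))
    print(prototypicality(PRISONER, "escape"))  # close   (the prisoner escaped)
    print(prototypicality(PRISONER, "eat"))     # distant (the prisoner ate dinner)
```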
for both computational linguists and theoretical linguists alikei argued against the view that word meanings are fixed and inflexible where lexical ambiguity must be treated by multiple word entries in the lexiconrather the lexicon can be seen as a generative system where word senses are related by logical operations defined by the wellformedness rules of the semanticsin this view much of the lexical ambiguity of verbs and prepositions is eliminated because the semantic load is spread more evenly throughout the lexicon to the other lexical categoriesi described a language for structuring the semantic information carried by nouns and adjectives termed qualia structure as well as the rules of composition that allow this information to be incorporated into the semantic interpretation of larger expressions including explicit methods for type coercionfinally i discussed how these richer lexical representations can be used to generate projective inheritance structures that connect the conceptual information associated with lexical items to the global conceptual lexiconthis suggests a way of accounting for relations such as coherence and the prototypicality of a predicationalthough much of what i have presented here is incomplete and perhaps somewhat programmatic i firmly believe this approach can help clarify the nature of word meaning and compositionality in natural language and at the same time bring us closer to understanding the creative use of word sensesi would like to thank the following for comments on earlier drafts of this paper peter anick sabine bergler bran boguraev ted briscoe noam chomsky bob ingria george miller sergei nirenburg and rich thomason
J91-4003
the generative lexiconin this paper i will discuss four major topics relating to current research in lexical semantics methodology descriptive coverage adequacy of the representation and the computational usefulness of representationsin addressing these issues i will discuss what i think are some of the central problems facing the lexical semantics community and suggest ways of best approaching these issuesthen i will provide a method for the decomposition of lexical categories and outline a theory of lexical semantics embodying a notion of cocompositionality and type coercion as well as several levels of semantic description where the semantic load is spread more evenly throughout the lexiconi argue that lexical decomposition is possible if it is performed generativelyrather than assuming a fixed set of primitives i will assume a fixed number of generative devices that can be seen as constructing semantic expressionsi develop a theory of qualia structure a representation language for lexical items which renders much lexical ambiguity in the lexicon unnecessary while still explaining the systematic polysemy that words carryfinally i discuss how individual lexical structures can be integrated into the larger lexical knowledge base through a theory of lexical inheritancethis provides us with the necessary principles of global organization for the lexicon enabling us to fully integrate our natural language lexicon into a conceptual wholewe propose the generative lexicon theory which can be said to take advantage of both linguistic and conceptual approaches providing a framework which arose from the integration of linguistic studies and of techniques found in ai
using multiple knowledge sources for word sense discrimination this paper addresses the problem of how to identify the intended meaning of individual words in unrestricted texts without necessarily having access to complete representations of sentencesto discriminate senses an understander can consider a diversity of information including syntactic tags word frequencies collocations semantic context rolerelated expectations and syntactic restrictionshowever current approaches make use of only small subsets of this informationhere we will describe how to use the whole range of informationour discussion will include how the preference cues relate to general lexical and conceptual knowledge and to more specialized knowledge of collocations and contextswe will describe a method of combining cues on the basis of their individual specificity rather than a fixed ranking among cuetypeswe will also discuss an application of the approach in a system that computes sense tags for arbitrary texts even when it is unable to determine a single syntactic or semantic representation for some sentencesmany problems in applied natural language processing including information retrieval database generation from text and machine translation hinge on relating words to other words that are similar in meaningcurrent approaches to these applications are often wordbased that is they treat words in the input as strings mapping them directly to other wordshowever the fact that many words have multiple senses and different words often have similar meanings limits the accuracy of such systemsan alternative is to use a knowledge representation or interlingua to reflect text content thereby separating text representation from the individual wordsthese approaches can in principle be more accurate than wordbased approaches but have not been sufficiently robust to perform any practical text processing tasktheir lack of robustness is generally due to the difficulty in building knowledge bases that are sufficient for broadscale processingbut a synthesis is possibleapplications can achieve greater accuracy by working at the level of word senses instead of word stringsthat is they would operate on text in which each word has been tagged with its senserobustness need not be sacrificed however because this tagging does not require a fullblown semantic analysisdemonstrating this claim is one of the goals of this paperhere is an example of the level of analysis a sense tagger would provide to an application programsuppose that the
input is the agreement reached by the state and the epa provides for the safe storage of the wastethe analysis would provide an application with the following informationpreliminary evidence suggests that having access to a sense tagging of the text improves the performance of information retrieval systems the primary goal of this paper then is to describe in detail methods and knowledge that will enable a language analyzer to tag each word with its senseto demonstrate that the approach is sufficiently robust for practical tasks the article will also discuss the incorporation of the approach into an existing system trump and the application of it to unrestricted textsthe principles that make up the approach are completely general however and not just specific to trumpan analyzer whose tasks include wordsense tagging must be able to take an input text determine the concept that each word or phrase denotes and identify the role relationships that link these conceptsbecause determining this information accurately is knowledgeintensive the analyzer should be as flexible as possible requiring a minimum amount of customization for different domainsone way to gain such flexibility is give the system enough generic information about word senses and semantic relations so that it will be able to handle texts spanning more than a single domainwhile having an extensive grammar and lexicon is essential for any system domain independence this increased flexibility also introduces degrees of ambiguity not frequently addressed by current nlp worktypically the system will have to choose from several senses for each wordfor example we found that trump base of nearly 10000 root senses and 10000 derivations provides an average of approximately four senses for each word of a sentence taken from the wall street journalthe potential for combinatoric explosion resulting from such ambiguity makes it critical to resolve ambiguities quickly and reliablyit is unrealistic to assume that word sense discrimination can be left until parsing is complete as suggested for example by dahlgren mcdowell and stabler and janssen no simple recipe can resolve the general problem of lexical ambiguityalthough semantic context and selectional restrictions provide good cues to disambiguation they are neither reliable enough nor available quickly enough to be used alonethe approach to disambiguation that we will take below combines many different strong sources of information syntactic tags word frequencies collocations semantic context selectional restrictions and syntactic cuesthe approach incorporates a number of innovations including although improvements to our system are ongoing it already interprets arbitrary text and makes coarse word sense selections reasonably wellno other system to our knowledge has been as successfulwe will now review word sense discrimination and the determination of role relationsin section 3 we discuss some sources of knowledge relevant to solving these problems and in section 4 how trump semantic interpreter uses this knowledge to identify sense preferencessection 5 describes how it combines the preference information to select sensesafterward we will discuss the results of our methods and the avenues for improvement that remainthe problem of word sense discrimination is to choose for a particular word in a particular context which of its possible senses is the quotcorrectquot one for the contextinformation about senses can come from a wide variety of sources of course not all these cues will be equally 
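Because the itemized analysis for the example sentence does not survive in this text, the following is a purely hypothetical sketch of the kind of sense-tagged output such an analyzer might hand to an application; the sense labels are invented (loosely guided by lexicon facts mentioned elsewhere, such as adjectival safe denoting the security sense) and are not the system's actual output.

```python
# Hypothetical sense-tagged output for:
#   "the agreement reached by the state and the EPA provides for the
#    safe storage of the waste"
# Tag names are invented for illustration; only the shape of the output matters.

tagged = [
    {"word": "agreement", "pos": "noun", "sense": "agree-concur.result"},
    {"word": "reached",   "pos": "verb", "sense": "reach.attain"},
    {"word": "state",     "pos": "noun", "sense": "state.government"},
    {"word": "EPA",       "pos": "noun", "sense": "EPA.agency"},
    {"word": "provides",  "pos": "verb", "sense": "provide.stipulate"},
    {"word": "safe",      "pos": "adj",  "sense": "safe.secure"},
    {"word": "storage",   "pos": "noun", "sense": "store.act"},
    {"word": "waste",     "pos": "noun", "sense": "waste.refuse"},
]

def senses_for(word):
    """Let an application look up the chosen sense of a content word."""
    return [t["sense"] for t in tagged if t["word"] == word]

if __name__ == "__main__":
    print(senses_for("safe"))   # ['safe.secure'], not the container sense
```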
usefulwe have found that in general the most important sources of information for word sense discrimination are syntactic tags morphology collocations and word associationsrolerelated expectations are also important but to a slightly lesser degreesyntactic tags are very important because knowing the intended part of speech is often enough to identify the correct sensefor example according to our lexicon when safe is used as an adjective it always denotes the sense related to security whereas safe used as a noun always denotes a type of container for storing valuablesmorphology is also a strong cue to discrimination because certain senseaffix combinations are preferred deprecated or forbiddenconsider the word agreementthe verb agree can mean either concur benefit or be equivalent and in general adding the affix ment to a verb creates a noun corresponding either to an act or to its result its object or its associated statehowever of the twelve possible combinations of root sense and affix sense in practice only four occur agreement can refer only to the act object or result in the case of the concur sense of agree or the state in the case of the equivalence sense of agreefurthermore the last of these combinations is deprecatedcollocations and word associations are also important sources of information because they are usually quotdead giveawaysquot that is they make immediate and obvious sense selectionsfor example when paired with increase the preposition in clearly denotes a patient rather than a temporal or spatial location or a directionword associations such as bank money similarly create a bias for the related sensesdespite their apparent strength however the preferences created by these cues are not absolute as other cues may defeat themfor example although normally the collocation wait on means erve the failure of a rolerelated expectation such as that the beneficiary be animate can override this preference thus collocations and word associations are strong sources of information that an understander must weigh against other cues and not just treat as rules for sensefiltering the selection of a role relationship can both influence and be influenced by the selection of word senses because preferences partially constrain the various combinations of a role its holder and the fillerfor example the preposition from prefers referring to the source role transfers such as give prefer to have a destination role and instances of colors such as red prefer to fill a color roleapproaches based on the word disambiguation model tend to apply constraint satisfaction techniques to combine these role preferences preferences based on rolerelated expectations are often only a weak cue because they are primarily for verbs and not normally very restrictivealthough generally a weak cue rolerelated preferences are quite valuable for the disambiguation of prepositionsin our view prepositions should be treated essentially the same as other words in the lexiconthe meaning of a preposition either names a relation directly as one of its core senses or indirectly as a specialized sense triggered for example by a collocation or concretionbecause the meaning of a preposition actually names a relation relationbased cues are a good source of information for disambiguating themthe problem of determining role relationships entangles word sense discrimination with the problem of syntactic attachmentthe attachment problem is a direct result of the ambiguity in determining whether a concept is related to an adjacent 
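As a toy illustration of weighing cues by their individual specificity rather than by a fixed ranking among cue types, the sketch below scores candidate senses from several defeasible cues; the numeric specificities and the particular cues listed for wait on are invented for illustration.

```python
# Sketch of combining disambiguation cues by specificity.
# Each cue votes for a sense with a weight reflecting how specific it is;
# no single cue type is allowed to filter senses outright.
from collections import defaultdict

# (cue type, sense supported, specificity) -- the numbers are illustrative only
cues_for_wait_on = [
    ("collocation",      "wait.serve", 0.8),  # "wait on" usually means serve
    ("role_expectation", "wait.delay", 0.6),  # an inanimate object defeats the
                                              # beneficiary-is-animate expectation
    ("syntactic_tag",    "wait.delay", 0.2),
    ("frequency",        "wait.delay", 0.3),
]

def choose_sense(cues):
    """Sum specificity-weighted votes and return the best sense with its scores."""
    scores = defaultdict(float)
    for _cue_type, sense, specificity in cues:
        scores[sense] += specificity
    return max(scores, key=scores.get), dict(scores)

if __name__ == "__main__":
    # here the combined weaker cues override the single strong collocation,
    # mirroring the defeasibility of "wait on" = serve described above
    print(choose_sense(cues_for_wait_on))
```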
object or to some enveloping structure that incorporates the adjacent objectmost proposed solutions to this problem specify a fixed set of ordered rules that a system applies until a unique satisfactory attachment is found such rules can be either syntactic semantic or pragmaticsyntactic rules attempt to solve the attachment problem independent of the sense discrimination problemfor example a rule for right association says to prefer attaching a new word to the lowest nonterminal node on the rightmost branch of the current structure semantic rules by contrast intertwine the problems of discrimination and attachment one must examine all combinations of senses and attachments to locate the semantically best onesuch rules normally also collapse the attachment problem into the conceptual role filling problemfor example a lexical preference rule specifies that the preference for a particular attachment depends on how strongly or weakly the verb of the clause prefers its possible arguments pragmatic rules also intermingle sense discrimination and attachment but consider the context of the utterancefor example one suggested rule says to prefer to build structures describing objects just mentioned the accuracy of systems with fixedorder rules is limited by the fact that it is not always possible to strictly order a set of rules independent of the contextfor example dahlgren mcdowell and stabler propose the rule quotif the object of the preposition is an expression of time then sattach the ppquot to explain the preference for assuming that quotin the afternoonquot modifies adjourn in example 2 the judge adjourned the hearing in the afternoonalthough they admit this rule would fail for a sentence like john described the meeting on january 20th where the np has a lexical preference for a time modifier lexical preferences are not always the determining factor eitherthe existence of a conceptually similar object in the context can also create an expectation for the grouping quothearing in the afternoonquot as in example 3 belowthe judge had to leave town for the dayhe found a replacement to take over his morning trial but could not find anyone else that was availablehe called the courthouse and cancelled the hearing in the afternoonmoreover pragmatic effects are not always the determining factor either leading many people to judge the following sentence as silly the landlord painted all the walls with cracks the presence of different lexical items or different objects in the discourse focus may strengthen or weaken the information provided by an individual ruleanother possibility we will discuss in section 5 is to weigh all preference information dynamically the system we will be describing in section 4 will use many of the cues described above including syntactic tags morphology word associations and rolerelated expectationsbut first we need to discuss the sources of knowledge that enable a system to identify these cuesto identify preference cues such as morphology word frequency collocations semantic contexts syntactic expectations and conceptual relations in unrestricted texts a system needs a large amount of knowledge in each categoryin most cases this just means that the understander lexicon and conceptual hierarchy must include preference information although processing concerns suggest moving some information out of these structures and into data modules specific to a particular process such as identifying collocationstrump obtains the necessary knowledge from a moderately sized lexicon 
specifically designed for use in language understanding and a hierarchy of nearly 1000 higherlevel concepts overlaid with approximately 40 conceptcluster definitionsit also uses a library of over 1400 collocational patternswe will consider each in turndevelopment of trump current lexicon followed an experiment with a moderatelysized commercially available lexicon which demonstrated many substantive problems in applying lexical resources to text processingalthough the lexicon had good morphological and grammatical coverage as well as a thesaurusbased semantic representation of word meanings it lacked reasonable information for discriminating sensesthe current lexicon although roughly the same size as the earlier one has been designed to better meet the needs of producing semantic representations of textthe lexicon features a hierarchy of 1000 parent concepts for encoding semantic preferences and restrictions sensebased morphology and subcategorization a distinction between primary and secondary senses and senses that require particular quottriggersquot or appear only in specific contexts and a broad range of collocational informationat this time the lexicon contains about 13000 senses and 10000 explicit derivationseach lexical entry provides information about the morphological preferences sense preferences and syntactic cues associated with a root its senses and their possible derivationsan entry also links words to the conceptual hierarchy by naming the conceptual parent of each senseif necessary an entry can also specify the composition of common phrases such as collocations that have the root as their headtrump lexicon combines a core lexicon with dynamic lexicons linked to specialized conceptual domains collocations and concretionsthe core lexicon contains the generic or contextindependent senses of each wordthe system considers these senses whenever a word appears in the inputthe dynamic lexicons contain word senses that normally appear only within a particular context these senses are considered only when that context is activethis distinction is a product of experience it is conceivable that a formerly dynamic sense may become static as when military terms creep into everyday languagethe partitioning of the lexicon into static and dynamic components reduces the number of senses the system must consider in situations where the context does not trigger some dynamic sensealthough the idea of using dynamic lexicons is not new our approach is much more flexible than previous ones because trump lexicon does not link all senses to a domainas a result the lexical retrieval mechanism never forces the system to use a sense just because the domain has preselected it311 the core lexiconthe core lexicon by design includes only coarse distinctions between word sensesthis means that for a task such as generating databases from text taskspecific processing or inference must augment the core lexical knowledge but problems of considering many nuances of meaning or lowfrequency senses are avoidedfor example the financial sense of issue falls under the same core sense as the latest issue of a magazinethe progeny and exit senses of issue are omitted from the lexiconthe idea is to preserve in the core lexicon only the common coarse distinctions among senses figure 1 shows the lexical entries for the word issueeach entry has a part of speech pos and a set of core senses senseseach sense has a type field that indicates primary for a preferred sense and secondary for a deprecated sensethe general rule for 
determining the type of a sense is that secondary senses are those that the semantic interpreter should not select without specific contextual information such as the failure of some selectional restriction pertaining to the primary sensefor example the word yard can mean an enclosed area a workplace or a unit of measure but in the empty context the enclosedarea sense is assumedthis classification makes clear the relative frequency of the sensesthis is in contrast to just listing them in historical order the approach of many lexicons that have been used in computational applicationsthe par field links each word sense to its immediate parent in the semantic hierarchythe parents and siblings of the two noun senses of issue which are listed in figure 2 give an idea of the coverage of the lexiconin the figure word senses are given as a root followed by a sense number conceptual categories are designated by atoms beginning with cexplicit derivations such as quotperiodicalxquot are indicated by roots followed by endings and additional type specifiersthese derivative lexical entries do quotdouble dutyquot in the lexicon an application program can use the derivation as well as the semantics of the derivative formthe assoc field not currently used in processing includes the lexicographer choice of synonym or closely related words for each sensethe syntax field encodes syntactic constraints and subcategorizations for each sensewhen senses share constraints they can be encoded at the level of the word entrywhen the syntactic constraints influence semantic preferences they are attached to the sense entryfor example in this case issue used as an intransitive verb would favor passive moving even though it is a secondary sensethe lore c subcategorization in the first two senses means indirect object as recipient the ditransitive form will fill the recipient rolethe grammatical knowledge base of the system relates these subcategories to semantic rolesthe gderiv and sderiv fields mark morphological derivationsthe former which is nil in the case of issue to indicate no derivations encodes the derivations at the word root level while the latter encodes them at the sense preference levelfor example the sderiv constraint allows issuance to derive from either of the first two senses of the verb with issuer and issuable deriving only from the giving sensethe lexical entries for issuethe derivation triples encode the form of each affix the resulting syntactic category and the quotsemantic transformationquot that applies between the core sense and the resulting sensefor example the triple in the entry for issue says that an issuer plays the actor role of the first sense of the verb issuebecause derivations often apply to multiple senses and often result in different semantic transformations a lexical entry can mark certain interpretations of a morphological derivation as primary or secondary monthlyx magazinel guidel feature4 dissertationl copy2 column1 brochure1 bibliographyl anthologyl the parents and siblings of two senses of issue situations the dynamic lexicons contain senses that are active only in a particular contextalthough these senses require triggers a sense and its trigger may occur just as frequently as a core sensethus the dynamicstatic distinction is orthogonal to the distinction between primary and secondary senses made in the core lexiconcurrently trump has lexicons linked to domains collocations and concretionsfor example trump military lexicon contains a sense of engage that means attackhowever 
the system does not consider this sense unless the military domain is activesimilarly the collocational lexicon contains senses triggered by wellknown patterns of words for example the sequence take effect activates a sense of take meaning transpireconcretions activate specializations of the abstract sense of a word when it occurs with an object of a specific typefor example in the core lexicon the verb project has the abstract sense transfer however if its object is a sound the system activates a sense corresponding to a communication event as in she projected her voiceencoding these specializations in the core lexicon would be problematic because then a system would be forced to resolve such nuances of meaning even when there was not enough information to do sodynamic lexicons can provide much finer distinctions among senses than the core lexicon because they do not increase the amount of ambiguity when their triggering context is inactivetogether the core and dynamic lexicons provide the information necessary to recognize morphological preferences sense preferences and syntactic cuesthey also provide some of the information required to verify and interpret collocationssections 32 33 and 34 below describe sources of information that enable a system to recognize rolebased preferences collocations and the semantic contextthe concept hierarchy serves several purposesfirst it associates word senses that are siblings or otherwise closely related in the hierarchy thus providing a thesaurus for information retrieval and other tasks in a sense tagging system these associations can help determine the semantic contextsecond it supplies the basic ontology to which domain knowledge can be associated so that each new domain requires only incremental knowledge engineeringthird it allows rolebased preferences wherever possible to apply to groups of word senses rather than just individual lexical entriesto see how the hierarchy concept definitions establish the basic ontology consider figure 3 the definition of the concept crecording crecording is the parent concept for activities involving the storage of information namely the following verb senses book2 cataloguel clockl compilel date3 documentl enter3 indexl inputl keyl logl recordl in a concept definition the par fields link the concept to its immediate parents in the hierarchythe assoc field links the derived instances of the given concept to their places in the hierarchyfor example according to figure 3 the object form derived the conceptual definition of cclothingthe conceptual definition of cmadeof rel from enter3 has the parent cinformationthe roleplay fields mark specializations of a parent roles each roleplay indicates the parent name for a role along with the concept specialization of itfor example cre cording specializes its inherited object role as patientthe rels and pref fields identify which combinations of concept role and filler an understander should expect for example the definition in figure 4 expresses that fabric materials are common modifiers of clothing and fill the clothing madeof roletrump hierarchy also allows the specification of such preferences from the perspective of the filler where they can be made more generalfor example although colors are also common modifiers of clothing it is better to associate this preference with the filler because colors prefer to fill the color role of any physical objectthe hierarchy also permits the specification of such preferences from the perspective of the relation underlying a rolefor 
example the relation cmadeof in figure 6 indicates that physical objects normally have a madeof role and that the role is normally filled by some physical objectfigure 7 gives a complete account of the use of the rels and pref fields and how they permit the expression of rolerelated preferences from any perspectivecollocation is the relationship among any group of words that tend to cooccur in a predictable configurationalthough collocations seem to have a semantic basis many collocations are best recognized by their syntactic formthus for current purposes we limit the use of the term quotcollocationquot to sense preferences that result from these welldefined syntactic constructions1 for example the particle combination pick up 1 traditionally many of these expressions have been categorized as idioms but as most are at least partly compositional and can be processed by normal parsing methods we prefer to use the more general term quotcollocationquot this categorization thus happily encompasses both the obvious idioms and the compositional expressions whose status as idioms is highly debatableour use of the term is thus similar to that of smadja and mckeown who partition collocations into open compounds predicative relations and idiomatic expressions the use of pref and relsthe top ten cooccurences with take and the verbcomplement combination make the team are both collocationinducing expressionsexcluded from this classification are unstructured associations among senses that establish the general semantic context for example courtroomdefendantcollocations often introduce dynamic word senses ie ones that behave compositionally but occur only in the context of the expression making it inappropriate for the system to consider them outside that contextfor example the collocation hang from triggers a sense of from that marks an instrumentin other cases a collocation simply creates preferences for selected core senses as in the pairing of the opportunity sense of break with the becausetohave sense of give in give her a breakthere is also a class of collocations that introduce a noncompositional sense for the entire expression for example the collocation take place invokes a sense transpireto recognize collocations during preprocessing trump uses a set of patterns each of which lists the root words or syntactic categories that make up the collocationfor example the pattern bath matches the clauses take a hot bath and takes hot bathsin a pattern parentheses indicate optionality the system encodes the repeatability of a category such as adjectives procedurallycurrently there are patterns for verbparticle verbpreposition and verbobject collocations as well as compound nounsinitially we acquired patterns for verbobject collocations by analyzing lists of root word pairs that were weighted for relative cooccurrence in a corpus of articles from the dow jones news service as an example of the kind of data that we derived figure 8 shows the ten most frequent cooccurrences involving the root quottakequot note that the collocation quottake actionquot appears both in its active form as well as its passive actions were taken from an examination of these lists and the contexts in which the pairs appeared in the corpus we constructed the patterns used by trump to identify collocationsthen using the patterns as a guide we added lexical entries for each collocationthese entries link the collocations to the semantic hierarchy and where appropriate provide syntactic constraints that the parser can use to verify the 
presence of a collocationfor example figure 10 shows the entry for the noncompositional collocation take place which requires that the object be singular and determinerlessthese entries differ from similar representations of collocations or idioms in smadja and mckeown and stock in that they are sensebased rather than wordbasedthat is instead of expressing collocations as wordtemplates the lexicon groups together collocations that combine the same sense of the head verb with particular senses or higherlevel concepts this approach better addresses the fact that collocations do have a semantic basis capturing general forms such as give him or her which underlies the collocations give month give minute and give timecurrently the system has entries for over 1700 such collocationsthe last source of sense preferences we need to consider is the semantic contextwork on lexical cohesion suggests that people use words that repeat a conceptual category or that have a semantic association to each other to create unity in text these associations can be thought of as a class of collocations that lack the predictable syntactic structure of say collocations arising from verbparticle or compound noun constructionssince language producers select senses that group together semantically a language analyzer should prefer senses that share a semantic associationhowever it is unclear whether the benefit of knowing the exact nature of an association would justify the cost of determining itthus our system provides a cluster mechanism for representing and identifying groups of senses that are associated in some unspecified waya cluster is a set of the senses associated with some central conceptthe definition of a cluster includes a name suggesting the central concept and a list of the cluster members as in figure 11a cluster may contain concepts or other clusterstrump knowledge base contains three types of clusters categorial functional and situationalthe simplest type of cluster is the categorial clusterthese clusters consist of the sets of all senses sharing a particular conceptual parentsince the conceptual hierarchy already encodes these clusters implicitly we need not write formal cluster definitions for themobviously a sense will belong to a number of categorial clusters one for each element of its parent chainthe second type of cluster is the functional clusterthese consist of the sets of all senses sharing a specified functional relationshipfor example our system has a small number of partwhole clusters that list the parts associated with the object named by the clusterfigure 12 shows the partwhole cluster clegg for parts of an eggthe third type of cluster the situational cluster encodes general relationships among senses on the basis of their being associated with a common setting event the definition of the cluster cleggthe definition of the cluster clcourtroom or purposesince a cluster usefulness is inversely proportional to its size these clusters normally include only senses that do not occur outside the clustered context or that strongly suggest the clustered context when they occur with some other member of the clusterthus situational clusters are centered upon fairly specific ideas and may correspondingly be very specific with respect to their elementsit is not unusual for a word to be contained in a cluster while its synonyms are notfor example the cluster clcourtroom shown in figure 13 contains sense verb_testify1 but not verb_assert1situational clusters capture the associations found in generic 
descriptions or dictionary examples but are more compact because clusters may include whole categories of objects as members and need not specify relationships between the membersthe use of clusters for sense discrimination is also comparable to approaches that favor senses linked by marked paths in a semantic network in fact clusters capture most of the useful associations found in scripts or semantic networks but lack many of the disadvantages of using networksfor example because clusters do not specify what the exact nature of any association is learning new clusters from previously processed sentences would be fairly straightforward in contrast to learning new fragments of networkusing clusters also avoids the major problem associated with markerpassing approaches namely how to prevent the production of stupid paths the relevant difference is that a cluster is cautious because it must explicitly specify all its elementsa marker passer takes the opposite stance however considering all paths up down and across the network unless it is explicitly constrainedthus a marker passer might find the following dubious path from the written object sense of book to the partofaplant sense of leaf book madeof paper paper madefrom wood tree madeof wood tree haspart leaf whereas no cluster would link these entities unless there had been some prior evidence of a connectionfrom the lexical entries the underlying concept hierarchy and the specialized entries for collocation and clusters just described a language analyzer can extract the information that establishes preferences among sensesin the next section we will describe how a semantic interpreter can apply knowledge from such a wide variety of sourcesthere is a wide variety of information about which sense is the correct one and the challenge is to decide when and how to use this informationthe danger of a combinatorial explosion of possibilities makes it advantageous to try to resolve ambiguities as early as possibleindeed efficient preprocessing of texts can elicit a number of cues for word senses set up preferences and help control the parsethen the parse and semantic interpretation of the text will provide the cues necessary to complete the task of resolutionwithout actually parsing a text a preprocessor can identify for each word its morphology2 its syntactic tag or tags3 and whether it is part of a collocation for each sense it can identify whether the sense is preferred or deprecated and whether it is supported by a clusterthese properties are all either retrievable directly from a knowledge base or computable from short sequences of wordsto identify whether the input satisfies the expectations created by syntactic cues or whether it satisfies rolerelated expectations the system must first perform some syntactic analysis of the inputidentifying these properties must come after parsing because recognizing them requires both the structural cues provided by parsing and a semantic analysis of the textin our system processing occurs in three phases morphology preprocessing and parsing and semantic interpretationanalysis of a text begins with the identification of the morphological features of each word and the retrieval of the senses of each wordthen the input passes through a special preprocessor that identifies parseindependent semantic preferences and makes a preliminary selection of word sensesthis selection process eliminates those core senses that are obviously inappropriate and triggers certain the system architecture specialized sensesin 
the third phase trump attempts to parse the input and at the same time produce a quotpreferredquot semantic interpretation for itsince the preferred interpretation also fixes the preferred sense of each word it is at this point that the text can be given semantic tags thus allowing sensebased information retrievalin the next few subsections we will describe in greater detail the processes that enable the system to identify semantic preferences morphological analysis tagging collocation identification cluster matching and semantic interpretationafterward we will discuss how the system combines the preferences it identifiesthe first step in processing an input text is to determine the root syntactic features and affixes of each wordthis information is necessary both for retrieving the word lexical entries and for the syntactic tagging of the text during preprocessingmorphological analysis not only reduces the number of words and senses that must be in the lexicon but it also enables a system to make reasonable guesses about the syntactic and semantic identity of unknown words so that they do not prevent parsing once morphological analysis of a word is complete the system retrieves the corresponding senses and establishes initial semantic preferences for the primary sensesfor example by default the sense of agree meaning to concur is preferred over its other sensesthe lexical entry for agree marks this preference by giving it type primary the entry also says that derivations agree1ment and agree21able are preferred derivations agreelable and agree3ment are deprecated and all other senseaffix combinations have been disallowedduring morphological analysis the system retrieves only the most general sensesit waits until the preprocessor or the parser identifies supporting evidence before it retrieves word senses specific to a context such as a domain a situation or a collocationin most cases this approach helps reduce the amount of ambiguitythe approach is compatible with evidence discussed by simpson and burgess that the lexical entry for the verb agreequotmultiple meanings are activated in frequencycoded orderquot and that lowfrequency senses are handled by a second retrieval process that accumulates evidence for those senses and activates them as necessaryonce the system determines the morphological analysis of each word the next step in preprocessing is to try to determine the correct part of speech for the wordour system uses a tagging program written by uri zernik that takes information about the root affix and possible syntactic category for each word and applies stochastic techniques to select a syntactic tag for each wordstochastic taggers look at small groups of words and pick the most likely assignment of tags determined by the frequency of alternative syntactic patterns in similar textsalthough it may not be possible to completely disambiguate all words prior to parsing approaches based on stochastic information have been quite successful 4 to allow for the fact that the tagger may err as part of the tagging process the system makes a second pass through the text to remove some systematic errors that result from biases common to statistical approachesfor example they tend to prefer modifiers over nouns and nouns over verbs for instance in example 5 the tagger erroneously marks the word need as a nounyou really need the campbell soups of the world to be interested in your magazinein this second pass the system applies a few rules derived from our grammar and resets the tags where 
necessaryfor example to correct for the noun versus verb overgeneralization whenever a word that can be either a noun or a verb gets tagged as just a noun the corrector let us it remain ambiguous unless it is immediately preceded by a determiner or it is immediately preceded by a plural noun or a preposition or is immediately followed by a determiner the system is able to correct for all the systematic errors we have identified thus far using just nine rules of this sortafter tagging the preprocessor eliminates all senses corresponding to unselected parts of speechfollowing the syntactic filtering of senses trump preprocessor identifies collocations and establishes semantic preferences for the senses associated with themin this stage of preprocessing the system recognizes the following types of collocations to recognize a collocation the preprocessor relies on a set of simple patterns which match the general syntactic context in which the collocation occursfor example the system recognizes the collocation quottake profitquot found in example 6 with the pattern profita number of stocks that have spearheaded the market recent rally bore the brunt of isolated profittaking tuesdaythe preprocessor strategy for locating a collocation is to first scan the text for trigger words and if it finds the necessary triggers then to try to match the complete patternthe system matching procedures allow for punctuation and verbcomplement inversionif the triggers are found and the match is successful the preprocessor has a choice of subsequent actions depending on how cautious it is supposed to bein its aggressive mode it updates the representations of the matched words adding any triggered senses and preferences for the collocated sensesit also deletes any unsupported deprecated sensesin its cautious mode it just adds the word senses associated with the pattern to a dynamic storeonce stored these senses are then available for the parser to use after it verifies the syntactic constraints of the collocation if it is successful it will add preferences for the appropriate sensesearly identification of triggered senses enables the system to use them for cluster matching in the next stageafter the syntactic filtering of senses and the activation of senses triggered by collocations the next step of preprocessing identifies preferences for senses that invoke currently active clusters a cluster is active if it contains any of the senses under consideration for other words in the current paragraphthe system may also activate certain clusters to represent the general topic of the textthe preprocessor strategy for assessing clusterbased preferences is to take the set of cluster names invoked by each sense of each content word in the sentence and locate all intersections between it and the names of other active clustersfor each intersection the preprocessor finds it adds preferences for the senses that are supported by the cluster matchthen the preprocessor activates any previously inactive senses it found to be supported by a cluster matchthis triggering of senses on the basis of conceptual context forms the final step of the preprocessing phaseonce preprocessing is complete the parsing phase beginsin this phase trump attempts to build syntactic structures while calling on the semantic interpreter to build and rate alternative interpretations for each structure proposedthese semantic evaluations then guide the parser evaluation of syntactic structuresthey may also influence the actual progression of the parsefor 
example if a structure is found to have incoherent semantics the parser immediately eliminates it from further considerationalso whenever the semantics of a parse becomes sufficiently better than that of its competitors the system prunes the semantically inferior parses reducing the number of ambiguities even furtheras suggested above the system builds semantic interpretations incrementallyfor each proposed combination of syntactic structures there is a corresponding combination of semantic structuresit is the job of the semantic interpreter to identify the possible relations that link the structures being combined identify the preferences associated with each possible combination of head role and filler and then rank competing semantic interpretations5 a similar approach has been taken by gibson and is supported by the psychological experiments of kurtzman for each proposed combination knowledge sources may contribute the following preferences certain syntactic form preferences associated with the semantic quotfitquot between any two of the head the role and the filler for example filler and role eg foods make good fillers for the patient role of eating activities filler and head eg colors make good modifiers of physical objects head and role eg monetary objects expect to be qualified by some quantitythe conceptual hierarchy and the lexicon contain the information that encodes these preferenceshow the semantic interpreter combines these preferences is the subject of the next sectiongiven the number of preference cues available for discriminating word senses an understander must face the question of what to do if they conflictfor example in the sentence mary took a picture to bob the fact that photography does not normally have a destination should override the support for the photograph interpretation of took a picture given by collocation analysisa particular source of information may also support more than one possible interpretation but to different degreesfor example cigarette filter may correspond either to something that filters out cigarettes or to something that is part of a cigarette but the latter relation is more likelyour strategy for combining the preferences described in the preceding sections is to rate most highly the sense with the strongest combination of supporting cuesthe system assigns each preference cue a strength an integer value between 10 and 10 and then sums these strengths to find the sense with the highest ratingthe strength of a particular cue depends on its type and on the degree to which the expectations underlying it are satisfiedfor cues that are polar for example a sense is either low or high frequency a value must be chosen experimentally depending on the strength of the cue compared with othersfor example the system assigns frequency information a score close to zero because this information tends to be significant only when other preferences are inconclusivefor cues that have an inherent extent for example the conceptual category specified by a role preference subsumes a set of elements that can be counted the cue strength is a function of the magnitude of the extent that is its specificitytrump specificity function maps the number of elements subsumed by the concept onto the range 0 to 10the function assigns concepts with few members a high value and concepts with many members a low valuefor example the concept cobject which subsumes roughly half the knowledge base has a low specificity value in contrast the concept noun_hammer 1 which subsumes only 
a single entity has a high specificity value concept strength is inversely proportional to concept size because a preference for a very general concept often indicates that either there is no strong expectation at all or there is a gap in the system knowledgein either case a concept that subsumes only a few senses is stronger information than a concept that subsumes morethe preference score for a complex concept formed by combining simpler concepts with the connectives and or and not is a function of the number of senses subsumed by both either or neither concept respectivelysimilarly the score for a cluster is the specificity of that cluster the exact details of the function necessarily depend on the size and organization of one concept hierarchyfor example one would assign specificity value 1 to any concept with more members than any immediate specialization of the most abstract conceptwhen a preference cue matches the input the cue strength is its specificity value when a concept fails to match the input the strength is a negative value whose magnitude is usually the specificity of the concept but it is not always this straightforwardrating the evidence associated with a preference failure is a subtle problem because there are different types of preference failure to take into accountfailure to meet a general preference is always significant whereas failure to meet a very specific preference is only strong information when a slight relaxation of the preference does not eliminate the failurethis presents a bit of a paradox the greater the specificity of a concept the more information there is about it but the less information there may be about a corresponding preferencethe paradox arises because the failure of a very specific preference introduces significant uncertainty as to why the preference failedfailing to meet a very general preference is always strong information because in practice the purpose of such preferences is to eliminate the grossly inappropriate such as trying to use a relation with a physical object when it should only be applied to eventsthe specificity function in this case returns a value whose magnitude is the same as the specificity of the complement of the concept the result is a negative number whose absolute value is greater than it would be by defaultfor example if a preference is for the concept cobj ect which has a positive specificity of 1 and this concept fails to match the input then the preference value for the cue will be 9on the other hand a very specific preference usually pinpoints the expected entity ie the dead giveaway pairings of role and fillerthus it is quite common for these preferences to overspecify the underlying constraint for example cut may expect a tool as an instrument but almost any physical object will sufficewhen a slight relaxation of the preference is satisfiable a system should take the cautious route and assume it has a case of overspecification and is at worst a weak failureagain the specificity function returns a negative value with magnitude equivalent to the specificity of the complement of the concept but this time the result will be a negative number whose absolute value is less than it would be by defaultwhen this approach fails a system can safely assume that the entity under consideration is quotobviously inappropriatequot for a relatively strong expectation and return the default valuethe default value for a concept that is neither especially general nor specific and that fails to match the input is just 1 times the 
positive specificity of the conceptthe strategy of favoring the most specific information has several advantagesthis approach best addresses the concerns of an expanding knowledge base where one must be concerned not only with competition between preferences but also with the inevitable gaps in knowledgegenerally the more specific information there is the more complete and hence more trustworthy the information isthus when there is a clear semantic distinction between the senses and the system has the information necessary to identify it a clear distinction usually emerges in the ratingswhen there is no strong semantic distinction or there is very little information preference scores are usually very close so that the parser must fall back on syntactic preferences such as right associationthis result provides a simple sensible means of balancing syntactic and semantic preferencesto see how the cue strengths of frequency information morphological preferences collocations clusters syntactic preferences and rolerelated preferences interact with one another to produce the final ranking of senses consider the problem of deciding the correct sense of reached in example 1 example 1 the agreement reached by the state and the epa provides for the safe storage of the wasteaccording to the system lexicon reached has four possible verb senses figure 16 shows a tabulation of cue strengths for each of these interpretations of reach in example 1 when just information in the vp reached by the state and the epa is consideredthe sense reach3 has the highest total scorefrom the table we see that at this point in the parse the only strong source of preferences is the role information the derivation of these numbers is shown in figures 17 18 and 19 which list the role preferences associated with the possible interpretations of the preposition by for reach3 and its two nearest competitors reachl and reach4together the data in the tables reveal the following sources of preference strength the arrival sense gains support from the fact that there is a sense of by meaning agent which is a role that arrivals expect and the state and the epa make reasonably good agents rolerelated preferences of reachl for the preposition by the communication sense gains support from the fact that there is a sense of by corresponding to the expected role communicator and the state and the epa make very good agents of communication events in particular as well as being good agents in general however reach3 is disfavored by frequency information although the system favors the communication sense of reach in the vp for the final result it must balance this information with that provided by the relationship between agreement and the verb phraseby the end of the parse the eventchange sense comes to take precedence rolerelated preferences of reach4 for the preposition bythe main because of this weakness is that the role that agreement would fill destination has no special preference for being associated with a cde stevent many events allow a destination roleby summing the cue strengths of each possible interpretation in this way and selecting the one with the highest total score the system decides which sense is the quotcorrectquot one for the contextthe strengths of individual components of each interpretation contribute to but do not determine the strength of the final interpretation because there are also strengths associated with how well the individual components fit togetherno additional weights are necessary because the specificity 
values the system uses are a direct measure of strengthour goal has been a natural language system that can effectively analyze an arbitrary input at least to the level of word sense taggingalthough we have not yet fully accomplished this goal our results are quite encouragingusing a lexicon of approximately 10000 roots and 10000 derivations the system shows excellent lexical and morphological coveragewhen tested on a sample of 25000 words of text from the wall street journal the system covered 98 of nonproper noun nonabbreviated word occurrences twelve percent of the senses the system selected were derivativesthe semantic interpreter is able to discriminate senses even when the parser cannot produce a single correct parsefigure 20 gives an example of the sense tagging that the system gives to the following segment of wall street journal textthe network also is changing its halftime show to include viewer participation in an attempt to hold on to its audience through halftime and into the second halves of gamesone show will ask viewers to vote on their favorite alltime players through telephone pollseach word is tagged with its part of speech and sense number along with a parent conceptfor example the tag changing verb_3 shows that the input word is changing the preferred sense is number 3 of the verb and this sense falls under the concept creplacing in the hierarchythis tagging was produced even though the parser was unable to construct a complete and correct syntactic representation of the textin fact when tested on the wall street journal texts the system rarely produces a single correct parse however the partial parses produced generally cover most of the text at the clause levelsince most semantic preferences appear at this level the results of this tagging are encouragingthis example also shows some of the limitations of our system in practicethe system is unable to recognize the collocation quothold on toquot in the first sentence because it lacks a pattern for itthe system also lacks patterns for the collocations quotvote onquot and quotalltime playersquot that occur in the second sentence and as a result mistakenly tags on as ctemporalproximityrel rather than something more appropriate such as cpurposer elthese difficulties point out the need for even more knowledgeit is encouraging to note that even if our encoding scheme is not entirely quotcorrectquot according to human intuition as long as it is consistent in theory it should lead to capabilities that are no worse with zero customization than wordbased methods for information retrievalhowever having access to sense tags allows for easy improvement by more knowledgeintensive methodsalthough this theory is still untested there is some preliminary evidence that word sense tagging can improve information retrieval system performance to date we have been unable to get a meaningful quantitative assessment of the accuracy of the system sense taggingwe made an unsuccessful attempt at evaluating the accuracy of sensetagging over a corpusfirst we discovered that a human quotexpertquot had great difficulty identifying each sense and that this task was far more tedious than manual partofspeech tagging or bracketingsecond we questioned what we would learn from the evaluation of these partial results and have since turned our attention back to evaluating the system with respect to some task such as information retrievalimproving the quality of our sense tagging requires a fair amount of straightforward but timeconsuming workthis needed work 
includes filling a number of gaps in our knowledge sourcesfor example the system needs much more information about rolerelated preferences and specialized semantic contextsat present all this information is collected and coded by hand although recent work by ravin and dahlgren mcdowell and stabler suggests that the collection of rolerelated information may be automatableour next step is to evaluate the effect of text coding on an information retrieval task by applying traditional termweighted statistical retrieval methods to the recoded textone intriguing aspect of this approach is that errors in distinguishing sense preferences should not be too costly in this task so long as the program is fairly consistent in its disambiguation of terms in both the source texts and the input querieshaving access to a large amount of information and being able to use it effectively are essential for understanding unrestricted texts such as newspaper articleswe have developed a substantial knowledge base for text processing including a word sensebased lexicon that contains both core senses and dynamically triggered entrieswe have also created a number of conceptcluster definitions describing common semantic contexts and a conceptual hierarchy that acts as a sensedisambiguated thesaurusour approach to word sense discrimination uses information drawn from the knowledge base and the structure of the text combining the strongest most obvious sense preferences created by syntactic tags word frequencies collocations semantic context selectional restrictions and syntactic cuesto apply this information most efficiently the approach introduces a preprocessing phase that uses preference information available prior to parsing to eliminate some of the lexical ambiguity and establish baseline preferencesthen during parsing the system combines the baseline preferences with preferences created by selectional restrictions and syntactic cues to identify preferred interpretationsthe preference combination mechanism of the system uses dynamic measures of strength based on specificity rather than relying on some fixed ordered set of rulesthere are some encouraging results from applying the system to sense tagging of arbitrary textwe expect to evaluate our approach on tasks in information retrieval and later machine translation to determine the likelihood of achieving substantive improvements through sensebased semantic analysisi am grateful to paul jacobs for his comments and his encouragement of my work on natural language processing at ge to george krupka for helping me integrate my work with trump and for continuing to improve the system to graeme hirst for his many comments and suggestions on this article and to jan wiebe and evan steeg for their comments on earlier draftsi acknowledge the financial support of the general electric company the university of toronto and the natural sciences and engineering research council of canada
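To make the lexicon organization described in section 3.1 more concrete, the following minimal sketch shows one way core and dynamic senses, primary and secondary types, and parent-concept links could be represented. The class names, field layout, and the engage entry (including the concept names c-participating and c-attacking) are illustrative assumptions rather than the actual TRUMP data structures.

```python
# A minimal sketch (not the TRUMP data format) of core vs. dynamic senses.
from dataclasses import dataclass, field

@dataclass
class Sense:
    name: str                   # e.g. "engage-1"
    par: str                    # immediate parent concept in the hierarchy
    type: str                   # "primary" (preferred) or "secondary" (deprecated)
    trigger: str | None = None  # None => core sense; otherwise the domain,
                                # collocation, or concretion that activates it

@dataclass
class LexicalEntry:
    root: str
    pos: str
    senses: list[Sense] = field(default_factory=list)

    def retrievable(self, active_contexts: set[str]) -> list[Sense]:
        """Core senses plus any dynamic senses whose triggering context is active."""
        return [s for s in self.senses
                if s.trigger is None or s.trigger in active_contexts]

# hypothetical entry: the military "attack" reading of engage is dynamic
engage = LexicalEntry("engage", "verb", [
    Sense("engage-1", "c-participating", "primary"),
    Sense("engage-attack", "c-attacking", "primary", trigger="military-domain"),
])

print([s.name for s in engage.retrievable(set())])                # ['engage-1']
print([s.name for s in engage.retrievable({"military-domain"})])  # both senses
```

The retrievable method mirrors the retrieval behaviour described in the text: core senses are always candidates, while a dynamic sense such as the military reading of engage is considered only when its triggering context is active.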
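The trigger-then-match strategy for spotting collocations during preprocessing (section 4.3) can be sketched in a similar spirit. The pattern notation, toy lemmatizer, and word lists below are invented for illustration; the text's own notation for the take ... bath pattern is not fully legible in this copy, and TRUMP handles repeatable categories such as adjectives procedurally rather than as single optional slots.

```python
# A minimal sketch of trigger-then-match collocation spotting.
def lemma(tok: str) -> str:
    """Toy lemmatizer, adequate only for this example."""
    for suf in ("ing", "s"):
        if tok.endswith(suf) and len(tok) > len(suf) + 2:
            return tok[: -len(suf)]
    return tok

# pattern elements: (test, optional) -- a simplified stand-in for TRUMP patterns
TAKE_BATH = [
    (lambda t: lemma(t) == "take", False),
    (lambda t: t in {"a", "the"}, True),                    # optional determiner
    (lambda t: t in {"hot", "long", "quick"}, True),        # optional adjective
    (lambda t: lemma(t) == "bath", False),
]

def match_at(tokens, i, pattern):
    for test, optional in pattern:
        if i < len(tokens) and test(tokens[i]):
            i += 1
        elif not optional:
            return False
    return True

def find_collocation(tokens, pattern, triggers=("take", "bath")):
    # cheap scan for trigger roots before attempting the full pattern match
    if not any(lemma(t) in triggers for t in tokens):
        return None
    return next((i for i in range(len(tokens)) if match_at(tokens, i, pattern)), None)

print(find_collocation("he takes hot baths".split(), TAKE_BATH))  # 1
print(find_collocation("take a hot bath".split(), TAKE_BATH))     # 0
```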
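Cluster matching (section 4.4) amounts to intersecting the cluster names invoked by a word's senses with the clusters supported by the other words in the paragraph. The sketch below assumes clusters are plain sets of sense names; apart from verb_testify1 and the cluster names cl-courtroom and cl-egg, the contents are invented, and the triggering of previously inactive senses is omitted.

```python
# A minimal sketch of situational-cluster preference marking.
CLUSTERS = {
    "cl-courtroom": {"judge-noun-1", "hearing-noun-1", "verb_testify1", "trial-noun-1"},
    "cl-egg":       {"yolk-noun-1", "shell-noun-1", "white-noun-2"},
}

def clusters_invoked(sense: str) -> set[str]:
    return {name for name, members in CLUSTERS.items() if sense in members}

def cluster_preferences(candidates: dict[str, set[str]]) -> dict[str, int]:
    """Add a preference to every sense supported by a cluster that is also
    invoked by some sense of a *different* word in the paragraph."""
    prefs: dict[str, int] = {}
    for word, senses in candidates.items():
        other = {c for w, ss in candidates.items() if w != word
                 for s in ss for c in clusters_invoked(s)}
        for s in senses:
            if clusters_invoked(s) & other:
                prefs[s] = prefs.get(s, 0) + 1
    return prefs

print(cluster_preferences({
    "judge":   {"judge-noun-1"},
    "hearing": {"hearing-noun-1", "hearing-noun-2"},  # only sense 1 is courtroom-related
}))
# {'judge-noun-1': 1, 'hearing-noun-1': 1}
```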
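The specificity-based combination of cues in section 5 also lends itself to a compact sketch. The article fixes the range of cue strengths (integers between -10 and 10), the principle that specificity is inversely proportional to concept size, and the three failure cases, but not the exact specificity formula, so the linear mapping and knowledge-base size below are assumptions, as are the concept sizes and cue tallies in the usage lines.

```python
# A minimal sketch of specificity-based cue scoring and combination.
KB_SIZE = 20_000  # assumed total number of senses in the hierarchy

def specificity(n_subsumed: int) -> int:
    """Few members -> high value (up to 10); many members -> low value (down to 1).
    The linear mapping is an assumption; the article does not give the formula."""
    n = max(1, min(n_subsumed, KB_SIZE))
    return max(1, min(10, round(10 * (1 - n / KB_SIZE))))

def cue_strength(concept_size: int, matched: bool, relaxed_match: bool = False) -> int:
    spec = specificity(concept_size)
    if matched:
        return spec
    if spec <= 2 or relaxed_match:
        # failure of a very general preference (strong evidence) or of an
        # over-specified one that a slight relaxation satisfies (weak evidence):
        # both return minus the specificity of the concept's complement
        return -specificity(KB_SIZE - concept_size)
    return -spec  # default failure: -1 times the positive specificity

def rank_senses(cues_per_sense: dict[str, list[int]]) -> list[tuple[str, int]]:
    """Sum the cue strengths for each sense and order them, highest total first."""
    return sorted(((s, sum(v)) for s, v in cues_per_sense.items()),
                  key=lambda p: p[1], reverse=True)

print(cue_strength(18_000, matched=False))                    # -9: general preference failed
print(cue_strength(300, matched=False, relaxed_match=True))   # -1: weak, relaxable failure

# invented cue tallies for two senses of "reach" in "the agreement reached by ..."
print(rank_senses({
    "reach-3 (communicate)": [cue_strength(300, True), -1],   # strong role preference met, low-frequency sense
    "reach-1 (arrive)":      [cue_strength(6000, True), 1],   # weaker role preference met, preferred sense
}))
# [('reach-3 (communicate)', 9), ('reach-1 (arrive)', 8)]
```

With these assumptions the failure of a preference for a concept of specificity 1 scores -9, the weak failure of an over-specified preference scores -1, and the communication sense of reach outscores the arrival sense, in line with the behaviour described in the text.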
J92-1001
using multiple knowledge sources for word sense discrimination. this paper addresses the problem of how to identify the intended meaning of individual words in unrestricted texts without necessarily having access to complete representations of sentences. to discriminate senses an understander can consider a diversity of information including syntactic tags word frequencies collocations semantic context rolerelated expectations and syntactic restrictions. however current approaches make use of only small subsets of this information. here we will describe how to use the whole range of information. our discussion will include how the preference cues relate to general lexical and conceptual knowledge and to more specialized knowledge of collocations and contexts. we will describe a method of combining cues on the basis of their individual specificity rather than a fixed ranking among cue types. we will also discuss an application of the approach in a system that computes sense tags for arbitrary texts even when it is unable to determine a single syntactic or semantic representation for some sentences. we are one of the first to use multiple kinds of features for word sense disambiguation in the semantic interpretation system trump. we describe a study of different sources useful for word sense disambiguation including morphological information.
tina a natural language system for spoken language applications new natural language system been developed for applications involving spoken tasks key ideas from context free grammars augmented transition and the unification concept a seamless interface between syntactic and semantic analysis and also produces a highly constraining probabilistic language model to improve recognition performance an initial set of contextfree rewrite rules provided by hand is first converted to a network structure probability assignments on all arcs in the network are obtained automatically from a set of example sentences the parser uses a stack decoding search strategy with a topdown control flow and includes a featurepassing mechanism to deal longdistance movement agreement and semantic constraints an automatic sentence generation capability that has been effective for identifying overgeneralization problems as well as in producing a wordpair language model for a recognizer the parser is currently with mit for use in two application domains with the parser screening recognizer outputs either at the sentential level or to filter partial theories during the active search process a new natural language system tina has been developed for applications involving spoken language taskstina integrates key ideas from context free grammars augmented transition networks and the unification concepttina provides a seamless interface between syntactic and semantic analysis and also produces a highly constraining probabilistic language model to improve recognition performancean initial set of contextfree rewrite rules provided by hand is first converted to a network structureprobability assignments on all arcs in the network are obtained automatically from a set of example sentencesthe parser uses a stack decoding search strategy with a topdown control flow and includes a featurepassing mechanism to deal with longdistance movement agreement and semantic constraintstina provides an automatic sentence generation capability that has been effective for identifying overgeneralization problems as well as in producing a wordpair language model for a recognizerthe parser is currently integrated with mit summit recognizer for use in two application domains with the parser screening recognizer outputs either at the sentential level or to filter partial theories during the active search processover the past few years there has been a gradual paradigm shift in speech recognition research both in the yous and in europein addition to continued research on the transcription problem ie the conversion of the speech signal to text many researchers have begun to address as well the problem of speech understanding1 this shift is at least partly brought on by the realization that many of the applications involving humanmachine interface using speech require an quotunderstandingquot of the intended messagein fact to be truly effective many potential applications demand that the system carry on a dialog with the user using its knowledge base and information gleaned from previous sentences to achieve proper response generationcurrent advances in research and development of spoken language systems2 can be found for example in the proceedings of the darpa speech and natural language workshops as well as in publications from participants of the esprit sundial projectrepresentative systems are described in boisen et al de mattia and giachin niedermair niemann and young a spoken language system relies on its natural language component to provide the 
meaning representation of a given sentenceideally this component should also be useful for providing powerful constraints to the recognizer component in terms of permissible syntactic and semantic structures given the limited domainif it is to be useful for constraint however it must concern itself not only with coverage but also and perhaps more importantly with overgeneralizat ionin many existing systems the ability to parse as many sentences as possible is often achieved at the expense of accepting inappropriate word strings as legitimate sentencesthis had not been viewed as a major concern in the past since systems were typically presented only with wellformed text strings as opposed to errorful recognizer outputsthe constraints can be much more effective if they are embedded in a probabilistic frameworkthe use of probabilities in a language model can lead to a substantially reduced perplexity3 for the recognizerif the natural language component computational and memory requirements are not excessive and if it is organized in such a way that it can easily predict a set of nextword candidates then it can be incorporated into the active search process of the recognizer dynamically predicting possible words to follow a hypothesized word sequence and pruning away hypotheses that cannot be completed in any waythe natural language component should be able to offer significant additional constraint to the recognizer beyond what would be available from a local wordpair or bigram4 language model because it is able to make use of longdistance constraints in requiring wellformed whole sentencesthis paper describes a natural language system tina which attempts to address some of these issuesthe mechanisms were designed to support a graceful seamless interface between syntax and semantics leading to an efficient mechanism for constraining semanticsgrammar rules are written such that they describe syntactic structures at the high levels of a parse tree and semantic structures at the low levelsall of the meaningcarrying content of the sentence is completely encoded in the names of the categories of the parse tree thus obviating the need for separate semantic rulesby encoding meaning in the structural entities of the parse tree it becomes feasible to realize probabilistic semantic restrictions in an efficient mannerthis also makes it straightforward to extract a semantic frame representation directly from an unannotated parse treethe contextfree rules are automatically converted to a shared network structure and probability assignments are derived automatically from a set of parsed sentencesthe probability assignment mechanism was deliberately designed to support an ability to predict a set of nextword candidates with associated word probabilitiesconstraint mechanisms exist and are carried out through feature passing among nodesa unique aspect of the grammar is that unification constraints are expressed onedimensionally being associated directly with categories rather than with rulessyntactic and semantic fields are passed from node to node by default thus making available by default the second argument to unification operationsthis leads to a very efficient implementation of the constraint mechanismunifications introduce additional syntactic and semantic constraints such as person and number agreement and subjectverb semantic restrictionsthis paper is organized as followssection 2 contains a detailed description of the grammar and the control strategy including syntactic and semantic constraint 
mechanismssection 3 describes a number of domaindependent versions of the system that have been implemented and addresses within the context of particular domains several evaluation measures including perplexity coverage and portabilitysection 4 discusses briefly two application domains involving database access in which the parser provides the link between a speech recognizer and the database queriesthe last section provides a summary and a discussion of our future plansthere is also an appendix which walks through an example grammar for threedigit numbers showing how to train the probabilities parse a sentence and compute perplexity on a test sentencethis section describes several aspects of the system in more detail including how the grammar is generated and trained how the control strategy operates how constraints are enforced and practical issues having to do with efficiency and ease of debuggingtina is based on a contextfree grammar augmented with a set of features used to enforce syntactic and semantic constraintsthe grammar is converted to a network structure by merging common elements on the righthand side of all rules sharing the same lefthand side category each lhs category becomes associated with a parent node whose children are the collection of unique categories appearing in the rhss of all the rules in the common seteach parent node establishes a twodimensional array of permissible links among its children based on the ruleseach child can link forward to all of the children that appear adjacent to that child in any of the shared rule setprobabilities are determined for pairs of siblings through frequency counts on rules generated by parsing a set of training sentencesthe parsing process achieves efficiency through structuresharing among rules resembling in this respect a topdown chart processorthe grammar nodes are contained in a static structure describing a hierarchy of permissible sibling pairs given each parent and a nodedependent set of constraint filterseach grammar node contains a name specifying its category a twodimensional probability array of permissible links among the next lower level in the hierarchy and a list of filter specifications to be applied either in the topdown or the bottomup cyclewhen a sentence is parsed a dynamic structure is created a set of parse nodes that are linked together in a hierarchical structure to form explicit paths through the grammarduring the active parse process the parse nodes are entered into a queue prioritized by their path scoreseach node in a given parse tree enters the queue exactly twice once during the topdown cycle during which it enters into the queue all of its possible first children and once again during the bottomup cycle during which it enters all of its possible right siblings given its parentthe control strategy repeatedly pops the queue advancing the active hypothesis by exactly one step and applying the appropriate nodelevel unificationseach feature specification for each grammar node contains a feature name a value or set of values for that feature a logic function and a specification as to whether the unification should take place during the topdown or during the bottomup cycleall features are associated with nodes rather than with rules and each node performs exactly the same unifications without regard to whatever rule it might be a part ofin fact during the active parse process a rule is not an explicit entity while it is being formedeach instantiation of a rule takes place only at the time that the next 
sibling is the distinguished end node a special node that signifies a return to the level of the parentthe rule can be acquired by tracing back through the left siblings until the distinguished start node is encountered although this is not done in practice until the entire parse is completedthe parse nodes contain a set of features whose values will be modified through the unification processall modifications to features are made nondestructively by copying a parse node each time a hypothesis is updatedthus each independent hypothesis is associated with a particular parse node that contains all of the relevant feature information for that hypothesisas a consequence all hypotheses can be pursued in parallel and no explicit backtracking is ever donecontrol is repeatedly passed to the currently most probable hypothesis until a complete sentence is found and all of the input stream is accounted foradditional parses can be found by simply continuing the processthe grammar is built from a set of training sentences using a bootstrapping procedureinitially each sentence is translated by hand into a list of the rules invoked to parse itafter the grammar has built up a substantial knowledge of the language many new sentences can be parsed automatically or with minimal intervention to add a few new rules incrementallythe arc probabilities can be incrementally updated after the successful parse of each new sentencethe process of converting the rules to a network form is straightforwardall rules with the same lhs are combined to form a structure describing possible interconnections among children of a parent node associated with the lefthand categorya probability matrix connecting each possible child with each other child is constructed by counting the number of times a particular sequence of two siblings occurred in the rhss of the common rule set and normalizing by counting all pairs from the particular leftsibling to any right sibling5 two distinguished nodes a start node and an end node are included among the children of every grammar nodea subset of the grammar nodes are terminal nodes whose children are a list of vocabulary wordsthis process can be illustrated with the use of a simple example6 suppose there exists a grammar for noun phrases that can be expressed through the single compact rule form np article noun where the parentheses signify optional nodesthis grammar would be converted to a network as shown in figure 1 which would be stored as a single grammar node with the name npthe resulting grammar could be used to parse the set of phrases shown on the left each of which would generate the corresponding rule shown on the rightquotthe boyquot npi article noun quota beautiful townquot np article adjective noun quota cute little babyquot np article adjective adjective noun quotthe wonderful puddingquot np article adjective noun illustration of probabilistic network obtained from four rules with the same lhs as given in the texta parent node named np would contain these five nodes as its children with a probability matrix specifying the network connectionsto train the probabilities a record is kept of the relative counts of each subseqent sibling with respect to each permissible child of the parent node in our case np as they occurred in an entire set of parsed training sentencesin the example adjective is followed three times by noun and once by adjective so the network shows a probability of 1 4 for the self loop and 34 for the advance to nounnotice that the system has now generalized to 
include any number of adjectives in a roweach rule in general would occur multiple times in a given training set but in addition there is a significant amount of sharing of individual sibling pairs among different rules the socalled crosspollination effectthis method of determining probabilities effectively amounts to a bigram language model7 embedded in a hierarchical structure where a separate set of bigram statistics is collected on category pairs for each unique lhs category namethe method is to be distinguished from the more common method of applying probabilities to entire rule productions rather than to sibling pairs among a shared rule setan advantage to organizing probabilities at the siblingpair level is that it conveniently provides an explicit probability estimate for a single next word given a particular word sequencethis probability can be used to represent the language model score for the next word which when used in conjunction with the acoustic score provides the overall score for the wordwe make a further simplifying assumption that each sentence has only a single parse associated with itthis is probably justified only in conjunction with a grammar that contains semantic categorieswe have found that within the restricted domains of specific applications the first parse is essentially always a correct parse and often in fact the only parsewith only a single parse from each sentence and with the grammar trained at the siblingpair level training probabilities becomes a trivial exercise of counting and normalizing siblingpair frequencies within the pooled contextfree rule setstraining is localized such that conditional on the parent there is an advance from one sibling to some next sibling with probability 10normalization requires only this locally applied constraint making it extremely fast to train on a set of parsed sentencesfurthermore the method could incorporate syntactic and semantic constraints by simply renormalizing the probabilities at run time after paths that fail due to constraints have been eliminateda functional block diagram of the control strategy is given in figure 2at any given time a set of active parse nodes are arranged on a priority queueeach parse node contains a pointer to a corresponding grammar node and has access to all the information needed to pursue its partial theorythe top node is popped from the queue and it then creates a number of new nodes and inserts them into the queue according to their probabilitiesif the node is an end node it returns control to the parent node giving that node a completed subparseas each new node is considered unifications of syntactic and semantic constraints are performed and may lead to failurethe process can terminate on the first successful completion of a sentence or the nth successful completion if more than one hypothesis is desireda parse in tina begins with a single parse node linked to the grammar node sentence which is entered on the queue with probability 10this node creates new parse nodes that might have categories such as statement question and request and places them on the queue prioritizedif statement is the most likely child it gets popped from the queue and returns nodes indicating subject it etc to the queuewhen subject reaches the top of the queue it activates units such as noun phrase gerund and noun clauseeach node after instantiating firstchildren becomes inactive pending the return of a successful subparse from a sequence of childreneventually the cascade of firstchildren reaches a 
terminal node such as article stephanie seneff tina a natural language system for spoken language applications which proposes a set of words to be compared with the input streamif a match with an appropriate word is found then the terminal node fills its subparse slot with an entry such as and activates all of its possible rightsiblingswhenever a terminal node has successfully matched an input word the path probability is reset to 1010 thus the probabilities that are used to prioritize the queue represent not the total path probability but rather the probability given the partial word sequenceeach path climbs up from a terminal node and back down to a next terminal node with each new node adjusting the path probability by multiplying by a new conditional probability the resulting conditional path probability for a next word represents the probability of that word in its linguistic role given all preceding words in their linguistic roleswith this strategy a partial sentence does not become increasingly improbable as more and more words are addedbecause of the sharing of common elements on the righthand side of rules tina can automatically generate new rules that were not explicitly providedfor instance having seen the rule x abc and the rule x b c d the system would automatically generate two new rules x b c and x abc d although this property can potentially lead to certain problems with overgeneralization there are a number of reasons why it should be viewed as a featurefirst of all it permits the system to generalize more quickly to unseen structuresfor example having seen the rule question aux subject predicate and the rule question have subject link predadjective the system would also understand the forms question have subject predicate and question aux subject link predadjective quot secondly it greatly simplifies the implementation because rules do not have to be explicitly monitored during the parsegiven a particular parent and a particular child the system can generate the allowable right siblings without having to note who the left siblings werefinally and perhaps most importantly probabilities are established on arcs connecting sibling pairs regardless of which rule is under constructionin this sense the arc probabilities behave like the familiar wordlevel bigrams of simple recognition language models except that they apply to siblings at multiple levels of the hierarchythis makes the probabilities meaningful as a product of conditional probabilities as the parse advances to deeper levels of the parse tree and also as it returns to higher levels of the parse treethis approach implies an independence assumption that claims that what can follow depends only on the left sibling and the parentone negative aspect of the crosspollination is that the system can potentially generalize to include forms that are agrammaticalfor instance the forms quotpick the box upquot and quotpick up the boxquot if defined by the same lhs name would allow the system to include rules producing forms such as quotpick up the box upquot and quotpick up the box up the boxquot this problem can be overcome either by giving the two structures different lhs names or by grouping quotup the boxquot and quotthe box upquot into distinct parent nodes adding another layer to the hierarchy on the rhsanother solution is to use a trace mechanism to link the two positions for the object thus preventing it from occurring in both placesa final alternative is to include a particle bit among the features which once set cannot 
be resetin fact there were only a few situations where such problems arose and reasonable solutions could always be foundtina design includes a number of features that lead to rapid development of the grammar andor porting of the grammar to a new domain as well as efficient implementation capabilities in terms of both speed and memoryamong its features are semiautomatic training from a set of example sentences a sentence generation capability and a design framework that easily accomodates parallel implementationsit is a twostep procedure to acquire a grammar from a specific set of sentencesthe rule set is first built up gradually by parsing the sentences onebyone adding rules andor constraints as neededonce a full set of sentences has been parsed in this fashion the parse trees from the sentences are automatically converted to the sequence of rules used to parse each sentencethe training of both the rule set and the probability assignments is then established directly in a second pass from the provided set of parsed sentences ie the parsed sentences are the grammargeneration mode uses the same routines as those used by the parser but chooses a small subset of the permissible paths based on the outcome of a randomnumber generator rather than exploring all paths and relying on an input word stream to resolve the correct onesince all of the arcs have assigned probabilities the parse tree is traversed by generating a random number at each node and deciding which arcs to select based on the outcomethe arc probabilities can be used to weigh the alternativesoccasionally the generator chooses a path that leads to a dead end because of unanticipated constraintshence we in general need to keep more than one partial theory alive at any given time to avoid having to backtrack upon a failure conditionwe could in fact always choose to sprout two branches at any decision point although this generally leads to a much larger queue than is really necessarywe found instead that it was advantageous to monitor the size of the queue and arbitrarily increase the number of branches kept alive from one to two whenever the queue becomes dangerously short shrinking it back to one upon recoverywe have used generation mode to detect overgeneralizations in the grammar to build a wordpair language model for use as a simple constraint mechanism in our recognizer and to generate random sentences for testing our interface with the backenda final practical feature of tina is that as in unification grammars all unifications are nondestructive and as a consequence explicit backtracking is never necessaryevery hypothesis on the queue is independent of every other one in the sense that activities performed by pursuing one lead do not disturb the other active nodesthis feature makes tina an excellent candidate for parallel implementationthe control strategy would simply deliver the most probable node to an available processortina has been implemented in commonlisp and runs on both a sun workstation and a symbolics lisp machinea deterministic word sequence can be parsed in a small fraction of realtime on either machineof course once the input is a speech waveform rather than a word sequence the uncertainty inherent in the proposed words will greatly increase the search spaceuntil we have a better handle on control strategies in the bestfirst search algorithm it is impossible to predict the computational load for a spokeninput modethis section describes how tina handles several issues that are often considered to be part of the 
task of a parserthese include agreement constraints semantic restrictions stephanie seneff tina a natural language system for spoken language applications subjecttagging for verbs and long distance movement do you think i should read quot the gap mechanism resembles the hold register idea of atns and the treatment of bounded domination metavariables in lexical functional grammars but it is different from these in that the process of filling the hold register equivalent involves two steps separately initiated by two independent nodesour approach to the design of a constraint mechanism is to establish a framework general enough to handle syntactic semantic and ultimately phonological constraints using identical functional procedures applied at the node levelthe intent was to design a grammar for which the rules would be kept completely free of any constraintsto achieve this goal we decided to break the constraint equations usually associated with rules down into their component parts and then to attach constraints to nodes as equations in a single variablethe missing variable that must be unified with the new information would be made available by defaultin effect the constraint mechanism is thus reduced from a twodimensional to a onedimensional domainthus for example the developer would not be permitted to write an fstructure equation of the form subjiilf np associated with the rule vp verb np unf to cover quoti told john to goquot instead the np node would generate a currentfocus from its subparse which would be passed along passively to the verb quotgoquot the verb would then simply consult the currentfocus to establish the identity of its subjectthe procedure works as followsin the absence of any explicit instructions from its grammar node a parse node simply passes along all features to the immediate relative any constraints specified by the grammar node result in a modification of certain feature valuesthe modifications are specified through a fourtuple of the possible features include person and number case determiner mode and a semantic category bit mapthe new value entered as a bit pattern could be a single value such as singular or could be multiple valued as in the number for the noun quotfishquot furthermore during the bottomup cycle the new value can be the special variable topdownsetting ie the value for that feature that currently occupies the slot in the parse node in questionthis has the effect of disconnecting the node from its children with respect to the feature in questionthe logic function is one of and or or set and the cycle is either topdown or bottomupa parse node has jurisdiction over its own slots only during the bottomup cycleduring the topdown cycle its feature value modifications are manifested only in its descendantsthe node retains the values for the features that its parent delivered and may use these for unifications prior to passing information on to its right siblingsthis additional complexity was felt necessary to handle number agreement in questions of the form quotdo john and mary eat out a lotquot here the auxiliary verb quotdoquot sets the number to plural but the two individual nouns are singularthe subject node blocks transfer of number information to its children but unifies the value for number returned during the bottomup cycle with the value previously delivered to it by its left sibling the auxiliary verbthere is a node andnounphrase that deals specifically with compound nounsthis node blocks transfer 12 if the right sibling happens to be the 
distinguished end node then the features get passed up to the parent of number information to its children and sets number to plural during the bottomup cycleit has been found expedient to define a metalevel operator named quotdetachquot that invokes a block operation during both the topdown and bottomup cyclesthis operation has the effect of isolating the node in question from its descendents with respect to the particular blocked featurethis mechanism was commonly used to detach a subordinate clause from a main clause with respect to the semantic bits for examplethe setting that had been delivered to the node during the topdown cycle is retained and sent forward during the bottomup cycle but not communicated to the node childrenanother special blocking property can be associated with certain features but the block only applies at the point where an end node returns a solution to a parentthis is true for instance of the mode for the verbalong with the syntactic and semantic features there are also two slots that are concerned with the trace mechanism and these are used as well for semantic filtering on key information from the pastthere are some special operations concerned with filling these slots and unifying semantics with these slots that will be described in more detail in later sectionslexical entries contain threetuple specifications of values for features the fourth element is irrelevant since there are no separate topdown and bottomup cyclesthus a terminal verb node contains vocabulary entries that include settings for verb mode and for personnumber if the verb is finitethe plural form for nouns can be handled through a ph morph for the sake of efficiencythis morph sets the value of number to plural regardless of its prior settingit is the job of a parent node to unify that setting with the value delivered by the left siblings of the nounsome examples may help explain how the constraint mechanism worksconsider for example the illformed phrase quoteach boatsquot suppose the grammar has the three rules and the lexical item quoteachquot sets the number to singular and passes this value to the noun nodethe noun node blocks transfer of number to its childrenquotboatquot sets the number to singular but the pl morph overrides this value returning a plural value to the parentthis plural value gets unified with the singular value that had been retained from quoteachquot during the topdown cyclethe unification fails and the parse diesby splitting off the plural morph singular and plural nouns can share the bulk of their phonetics thus reducing greatly the redundancy in the recognizer matching problemin theory morphs could be split off for verbs as well but due to the large number of irregularities this was not donesubjectverb agreement gets enforced by default because the number information that was realized during the parsing of the subject node gets passed along to the predicate and down to the terminal verb nodethe lexical item unifies the number information and the parse fails if the result is zeroany nonauxiliary verb node blocks the transfer of any predecessor personnumber information to its right siblings during the bottomup cycle reflecting the fact that verbs agree in personnumber with their subject but not their objectcertain nodes set the mode of the verb either during the topdown or the bottomup cyclethus for example quothavequot as an auxiliary verb sets mode to pastparticiple during the bottomup cycle the category gerund sets the mode to presentparticiple during the topdown 
cycle whenever a predicate node is invoked the verb mode has always been set by a predecessor251 gapsthe mechanism to deal with gaps resembles in certain respects the hold register idea of atns but with an important difference reflecting the design philosophy that no node can have access to information outside of its immediate domainthe mechanism involves two slots that are available in the feature vector of each parse nodethese are called the currentfocus and the floatobject respectivelythe currentfocus slot contains at any given time a pointer to the most recently mentioned content phrase in the sentenceif the floatobject slot is occupied it means that there is a gap somewhere in the future that will ultimately be filled by the partial parse contained in the floatobjectthe process of getting into the floatobject slot requires two steps executed independently by two different nodesthe first node the generator fills the currentfocus slot with the subparse returned to it by its childrenthe second node the activator moves the currentfocus into the floatobject position for its children during the topdown cycleit also requires that the floatobject be absorbed somewhere among its descendants by a designated absorber node a condition that is checked during the bottomup cyclethe currentfocus only gets passed along to siblings and their descendants and hence is unavailable to activators at higher levels of the parse treethat is to say the currentfocus is a feature like verbmode that is blocked when an end node is encounteredto a first approximation a currentfocus reaches only nodes that are ccommanded by its generatorfinally certain blocker nodes block the transfer of the floatobject to their childrena simple example will help explain how this worksfor the sentence quot did mike buy quot as illustrated by the parse tree in figure 3 the qsubject quothow many piesquot is a generator so it fills the currentfocus with its subparsethe doquestion is an activator it moves the currentfocus into the floatobject positionfinally the object of quotbuyquot an absorber takes the qsubject as its subparsethe doquestion refuses to accept any solutions from its children if the floatobject has not been absorbedthus the sentence quothow many pies did mike buy the piesquot would be rejectedfurthermore the same doquestion grammar node deals with the yesno question quotdid mike buy the piesquot except in this case there is no currentfocus and hence no gapmore complicated sentences involving nested or chained traces are handled straightforwardly by this schemefor instance the phrase quotwhich hospital was jane taken toquot can be parsed correctly by tina identifying quotwhich hospitalquot as the object of the preposition quottoquot and quotjanequot as the object of quottakenquot the phrase quotwhich hospitalquot gets generated by the qsubject and activated by the following bequestion thus filling the floatobject slotwhen the predicate of the clause is reached the word quotjanequot is in the currentfocus slot and the phrase quotwhich hospitalquot is still in the floatobject slotthe participialphrase for quottaken objectquot activates quotjanequot but only for its childrenthis word is ultimately absorbed by the object node within the verb phrasemeanwhile the participialphrase passes along the original floatobject to its right sibling the adverbial prepositional phrase quotto objectquot the phrase quotwhich hospitalquot is finally absorbed by the preposition objectthe example used to illustrate the power of atns quotjohn 
was believed to have been shotquot also parses correctly because the object node following the verb quotbelievedquot acts as both an absorber and a generatorcases of crossed traces are automatically blocked because the second currentfocus gets moved into the floatobject position at the time of the second activator overriding the preexisting floatobject set up by the earlier activatorthe wrong floatobject is available at the position of the first trace and the parse dies i did you ask john 1 bill bought example of a parse tree illustrating a gapthe currentfocus slot is not restricted to nodes that represent nounssome of the generators are adverbial or adjectival parts of speech an absorber checks for agreement in pos before it can accept the floatobject as its subparseas an example the question quot do you like your salad dressing quot contains a eqsubject quothow oilyquot that is an adjectivethe absorber predadjective accepts the available floatobject as its subparse but only after confirming that pos is adjectivethe currentfocus has a number of other uses besides its role in movementit always contains the subject whenever a verb is proposed including verbs that are predicative objects of another verb as in quoti want to go to chinaquot it has also been found to be very effective for passing semantic information to be constrained by a future node and it can play an integral role in pronoun referencefor instance a reflexive pronoun nearly always refers back to the currentfocus whereas a nonreflexive form never does unless it is in the nominative case252 semantic filteringin the more recent versions of the grammar we have implemented a number of semantic constraints using procedures very similar to those used for syntactic constraintswe found it effective to filter on the currentfocus semantic category as well as to constrain absorbers in the gap mechanism to require a match on semantics before they could accept a floatobjectsemantic categories were parse tree for the sentence quotwhat street is the hyatt onquot implemented in a hierarchy such that for example restaurant automatically inherits the more general properties building and placewe also introduced semantically loaded categories at the low levels of the parse treeit seems that as in syntax there is a tradeoff between the number of unique nodetypes and the number of constraint filtering operationsat low levels of the parse tree it seems more efficient to label the categories whereas information that must pass through higher levels of the hierarchy is better done through constraint filtersas an example consider the sentence quot is the hyatt on quot shown in figure 4the qsubject places quotwhat streetquot into the currentfocus slot but this unit is activated to floatobject status by the subsequent bequestionthe subject node refills the now empty currentfocus with quotthe hyattquot the node astreet an absorber can accept the floatobject as a solution but only if there is tight agreement in semantics ie it requires the identifier streetthus a sentence such as quotwhat restaurant is the hyatt onquot would fail on semantic groundsfurthermore the node onstreet imposes strict semantic restrictions on the currentfocusthus the sentence quot is cambridge on quot would fail because onstreet does not permit region as the semantic category for the currentfocus quotcambridgequot one place where semantic filtering can play a powerful role is in subjectverb relationshipsthis is easily accomplished within tina framework because the currentfocus slot 
always contains the subject of a verb at the time of the verb instantiationthis is obvious in the case of a simple statement or complete clause since the subject node generates a currentfocus which is available as the subject of the terminal verb node in the subsequent predicatethe same subject currentfocus is also available as the subject of a verb in a predicative object of another verb as in quoti want to go to chinaquot for the case where a verb takes an object and an infinitive phrase as arguments the object node replaces the currentfocus with its subparse such that when the verb of the infinitive phrase is proposed the correct subject is availablethis handles cases like quoti asked jane to helpquot with this mechanism the two sentences quoti want to goquot and quoti want john to goquot can share the same parse node for the verb wantcertain sentences exhibit a structure that superficially resembles the verbobjectinfinitivephrase pattern but should not be represented this way such as quoti avoid cigarettes to stay healthyquot here clearly quotiquot is the subject of quotstayquot this can be realized in tina by having a toplevel rule the object node for quotcigarettesquot replaces the currentfocus but the replacement does not get propagated back up to the predicate node thus the currentfocus quotiquot is passed on from the predicate to the adjunct and eventually to the verb quotstayquot finally in the case of passive voice the currentfocus slot is empty at the time the verb is proposed because the currentfocus which was the surfaceform subject has been moved to the floatobject positionin this case the verb has no information concerning its subject and so it identifies it as an unbound pronounsemantic filters can also be used to prevent multiple versions of the same case frame showing up as complementsfor instance the set of complements fromplace toplace and attime are freely ordered following a movement verb such as quotleavequot thus a flight can quotleave for chicago from boston at ninequot or equivalently quotleave at nine for chicago from bostonquot if these complements are each allowed to follow the other then in tina an infinite sequence of fromplace toplaces and attimes is possiblethis is of course unacceptable but it is straightforward to have each node as it occurs or in a semantic bit specifying its case frame and in turn fail if that bit has already been setwe have found that this strategy in conjunction with the capability of erasing all semantic bits whenever a new clause is entered serves the desired goal of eliminating the unwanted redundanciesthus far we have added all semantic filters by hand and they are implemented in a hardfail mode ie if the semantic restrictions fail the node diesthis strategy seems to be adequate for the limited domains that we have worked with thus far but they will probably be inadequate for more complex domainsin principle one could parse a large set of sentences with semantics turned off collecting the semantic conditions that occurred at each node of interestthen the system could propose to a human expert a set of filters for each node based on its observations and the human could make the final decision on whether to accept the proposalsthis approach resembles the work by grishman et al and hirschman et al on selectional restrictionsthe semantic conditions that pass could even ultimately be associated with probabilities obtained by frequency counts on their occurrencesthere is obviously a great deal more work to be done in this important 
areathis section addresses some performance measures for a grammar including coverage portability perplexity and trainabilityperplexity roughly defined as the geometric mean of the number of alternative word hypotheses that may follow each word in the sentence is of particular concern in spoken language tasksportability and trainability concern the ease with which an existing grammar can be ported to a new task as well as the amount of training data necessary before the grammar is able to generalize well to unseen datato date four distinct domainspecific versions of tina have been implementedthe first version was developed for the 450 phonetically rich sentences of the timit database the second version concerns the resource management task that has been popular within the darpa community in recent yearsthe third version serves as an interface both with a recognizer and with a functioning database backend the voyager system can answer a number of different types of questions concerning navigation within a city as well as provide certain information about hotels restaurants libraries etc within the regiona fourth domainspecific version is under development for the atis task which has recently been designated as the new common task for the darpa communitywe tested ease of portability for tina by beginning with a grammar built from the 450 timit sentences and then deriving a grammar for the rm taskthese two tasks represent very different sentence typesfor instance the overwhelming majority of the timit sentences are statements whereas the rm task is made up exclusively of questions and requeststhe process of conversion to a new grammar involves parsing the new sentences one by one and adding contextfree rules whenever a parse failsthe person entering the rules must be very familiar with the grammar structure but for the most part it is straightforward to identify and incrementally add missing rulesthe parser identifies where in the sentence it fails and also maintains a record of the successful partial parsesthese pieces of information usually are adequate to pinpoint the problemonce the grammar has been expanded to accomodate the new set of sentences a subset grammar can be created automatically that only contains rules needed in the new domain eliminating any rules that were particular to the original domainit required less than one personmonth to convert the grammar from timit to the rm taska set of 791 sentences within the rm task have been designated as training sentences and a separate set of 200 sentences as the test setwe built a subset grammar from the 791 parsed training sentences and then used this grammar to test coverage and perplexity on the unseen test sentencesthe grammar could parse 100 of the training sentences and 84 of the test sentencesa formula for the test set perplexity is13 where the wi are the sequence of all words in all sentences n is the total number of words including an quotendquot word after each sentence and p is the probability of the ith word given all preceding words14 if all words are assumed equally likely then p can be determined by counting all the words that could follow each word in the sentence along all workable partial theoriesif the grammar contains probability estimates then these can be used in place of the equally likely assumptionif the grammar estimates reflect reality the estimated probabilities will result in a reduction in the total perplexity an average perplexity for the 167 test sentences that were parsable was computed for the two 
conditions without and with the estimated probabilitiesthe result was a perplexity of 368 for case 1 but only 415 for case 2 as summarized in table 1this is with a total vocabulary size of 985 words and with a grammar that included some semantically restricted classes such as shipname and readinesscategorythe incorporation of arc probabilities reduced the perplexity by a factor of nine a clear indicator that a proper mechanism for utilizing probabilities in a grammar can help significantlyan even lower perplexity could be realized within this domain by increasing the number of semantic nodesin fact this is a trend that we have increasingly adopted as we move to new domainswe did not look at the test sentences while designing the grammar nor have we yet looked at those sentences that failed to parsehowever we decided to examine the parse trees for those sentences that produced at least one parse to determine the depth of the first reasonable parsethe results were essentially the same for the training and the test sentences as shown in table 2both gave a reasonable parse as either the first or second proposed parse 96 of the timetwo of the test sentences never gave a correct parsewe have recently developed a subdomain for tina that has been incorporated into a complete spoken language system called voyagerthe system provides directions on how to get from one place to another within an urban region and also gives information such as phone number or address for places such as restaurants hotels libraries etcwe have made extensive use of semantic filters within this domain in order to reduce the perplexity of the recognition task as much as possibleto obtain training and test data for this task we had a number of naive subjects use the system as if they were trying to obtain actual informationtheir speech was recorded in a simulation mode in which the speech recognition component was excludedinstead an experimenter in a separate room typed in the utterances as spoken by the subjectsubsequent processing by the natural language and response generation components was done automatically by the computer we were able to collect a total of nearly 5000 utterances in this fashionthe speech material was then used to train the recognizer component and the text material was used to train the natural language and backend componentswe designated a subset of 3312 sentences as the training set and augmented the original rules so as to cover a number of sentences that appeared to stay within the domain of the backendwe did not try to expand the rules to cover sentences that the backend could not deal with because we wanted to keep the natural language component tightly restricted to sentences with a likely overall successin this way we were able to increase the coverage of an independent test set of 560 utterances from 69 to 76 with a corresponding increase in perplexity as shown in table 3perplexity was quite low even without probabilities this is due mainly to an extensive semantic filtering schemeprobabilities decreased the perplexity by a factor of three however which is still quite significantan encouraging result was that both perplexity and coverage were of comparable values for the training and test sets as shown in the tableas mentioned previously generation mode has been a very useful device for detecting overgeneralization problems in a grammarafter the addition of a number of semantically loaded nodes and semantic filters the voyager version of the grammar is now restricted mainly to sentences that 
are semantically as well as syntactically legitimateto illustrate this point we show in table 4 five examples of consecutively generated sentencessince these were not selectively drawn from a larger set they accurately reflect the current performance levelwe also used generation mode to construct a wordpair grammar automatically for the recognizer component of our voyager systemto do this over 100000 sentences were generated and wordpair links were established for all words sharing the same terminal category which uses a segmentalbased framework and includes an auditory model in the frontend processingthe lexicon is entered as phonetic pronunciations that are then augmented to account for a number of phonological rulesthe search algorithm is the standard viterbi search except that the match involves a networktonetwork alignment problem rather than sequencetosequencewhen we first integrated this recognizer with tina we used a quotwirequot connection in that the recognizer produced a single best output which was then passed to tina for parsinga simple wordpair grammar constrained the search spaceif the parse failed then the sentence was rejectedwe have since improved the interface by incorporating a capability in the recognizer to propose additional solutions in turn once the first one fails to parse to produce these quotnbestquot alternatives we make use of a standard a search algorithm both the a and the viterbi search are lefttoright search algorithmshowever the a search is contrasted with the viterbi search in that the set of active hypotheses take up unequal segments of timethat is when a hypothesis is scoring well it is allowed to procede forward whereas poorer scoring hypotheses are kept on holdwe have thus far developed two versions of the control strategy a quotloosely coupledquot system and a quottightly coupledquot systemboth versions begin with a viterbi search all the way to the end of the sentence resulting in not only the first candidate solution but also partial scores for a large set of other hypothesesif this first solution fails to parse then the bestscoring partial theory is allowed to procede forward incrementallyin an a search the main issue is how to get an estimate of the score for the unseen portion of the sentencein our case we can use the viterbi path to the end as the estimate of the future scorethis path is guaranteed to be the best way to get to the end however it may not parsehence it is a tight upper bound on the true score for the rest of the sentencethe recognizer can continue to propose hypotheses until one successfully parses or until a quitting criterion is reached such as an upper bound on n whereas in the loosely coupled system the parser acts as a filter only on completed candidate solutions the tightly coupled system allows the parser to discard partial theories that have no way of continuingfollowing the viterbi search each partial theory is first extended by the parser to specify possible next words which are then scored by the recognizerwe have not yet made use of tina probabilities in adjusting the recognizer scores on the fly but we have been able to incorporate linguistic scores to resort nbest outputs giving a significant improvement in performance ultimately we want to incorporate tina probabilities directly into the a search but it is as yet unclear how to provide an appropriate upper bound for the probability estimate of the unseen portion of the linguistic model once a parser has produced an analysis of a particular sentence the next step 
is to convert it to a meaning representation form that can be used to perform whatever operations the user intended by speaking the sentencewe currently achieve this translation step in a secondpass treewalk through the completed parse treealthough the generation of semantic frames could be done on the fly as the parse is being proposed it seems inappropriate to go through all of that extra work for large numbers of incorrect partial theories due to the uncertainty as to the identity of the terminal word strings inherent in spoken inputwe have taken the point of view that all syntactic and semantic information can be represented uniformly in strictly hierarchical structures in the parse treethus the parse tree contains nodes such as subject and dirobject that represent structural roles as well as nodes such as onstreet and aschool representing specific semantic categoriesthere are no separate semantic rules off to the side rather the semantic information is encoded directly as names attached to nodes in the treeexactly how to get from the parse tree to an appropriate meaning representation is a current research topic in our grouphowever the method we are currently using in the atis domain represents our most promising approach to this problemwe have decided to limit semantic frame types to a small set of choices such as clause predicate reference and qset the process of obtaining a completed semantic frame amounts to passing frames along from node to node through the completed parse treeeach node receives a frame in both a topdown and a bottomup cycle and modifies the frame according to specifications based on its broadclass identity for example a subject is a nounphrase node with the label quottopicquot during the topdown cycle it creates a blank frame and inserts it into a quottopicquot slot in the frame that was handed to itit passes the blank frame to its children who will then fill it appropriately labeling it as a qset or as a referenceit then passes along to the right sibling the same frame that was handed to it from above with the completed topic slot filled with the information delivered by the childrenthe raw frame that is realized through the treewalk is postprocessed to simplify some of the structure as well as to augment or interpret expressions such as relative timefor example the predicate modifier in quotflights leaving at ten amquot is simplified from a predicate leave to a modifier slot labeled departuretimean expression such as quotnext tuesdayquot is interpreted relative to today date to fill in an actual month date and yearfollowing this postanalysis step the frame is merged with references contained in a history record to fold in information from the previous discoursethe completed semantic frame is used in atis both to generate an sql command to access the database and to generate a text output to be spoken in the interactive dialogthe sql pattern is controlled through lists of frame patterns to match and query fragments to generate given the matchtext generation is done by assigning appropriate temporal ordering for modifiers on nouns and for the main nounthe modifiers are contained in slots associated with the qset framecertain frames such as clocktime have special print functions that produce the appropriate piece of text associated with the contentsthis paper describes a new natural language system that addresses issues of concern in building a fully integrated spoken language systemthe formalism provides an integrated approach to representations for syntax and 
for semantics and produces a highly constraining language model to a speech recognizerthe grammar includes arc probabilities reflecting the frequency of occurrence of patterns within the domainthese probabilities are used to control the order in which hypotheses are considered and are trained automatically from a set of parsed sentences making it straightforward to tailor the grammar to a particular needultimately one could imagine the existence of a very large grammar that could parse almost anything which would be subsetted for a particular task by simply providing it with a set of example sentences within that domainthe grammar makes use of a number of other principles that we felt were importantfirst of all it explicitly incorporates into the parse tree semantic categories intermixed with syntactic ones rather than having a set of semantic rules provided separatelythe semantic nodes are dealt with in the same way as the syntactic nodes the consequence is that the node names alone carry essentially all of the information necessary to extract a meaning representation from the sentencethe grammar is not a semantic grammar in the usual sense because it does include high level nodes of a syntactic nature such as nounclause subject predicate etca second important feature is that unifications are performed in a onedimensional frameworkthat is to say features delivered to a node by a close relative are unified with particular feature values associated with that nodethe x variable in an xy relationship is not explicitly mentioned but rather is assigned to be quotwhatever was delivered by the relativequot thus for example a node such as subject unifies in exactly the same way regardless of the rule under constructionanother important feature of tina is that the same grammar can be run in generation mode making up random sentences by tossing the dicethis has been found to be extremely useful for revealing overgeneralization problems in the grammar as well as for automatically acquiring a wordpair grammar for a recognizer and producing sentences to test the backend capabilitywe discussed a number of different application domains and gave some performance statistics in terms of perplexity coverage overgeneralization within some of these domainsthe most interesting result was obtained within the voyager domain the perplexity decreased from 70 to 28 to 8 when the grammar changed from wordpair to parser without probabilities to parser with probabilitieswe currently have two application domains that can carry on a spoken dialog with a userone the voyager domain answers questions about places of interest in an urban area in our case the vicinity of mit and harvard universitythe second one atis is a system for accessing data in the official airline guide and booking flightswork continues on improving all aspects of these domainsour current research is directed at a number of different remaining issuesas of this writing we have a fully integrated version of the voyager system using an a search algorithm the parser produces a set of nextword candidates dynamically for each partial theorywe have not yet incorporated probabilities from tina into the search but they are used effectively to resort the final output sentence candidatesin order to incorporate the probabilities into the search we need a tight upper bound on the future linguistic score for the unseen portion of each hypothesisthis is a current research topic in our groupwe also plan to experiment with further reductions in perplexity based on a 
discourse statethis should be particularly effective within the atis domain where the system often asks directed questions about as yet unresolved particulars to the flightthis appendix walks through a pedagogical example to parse spoken digit sequences up to three long as in quotthree hundred and sixteenquot included is a set of initial contextfree rules a set of training sentences an illustration of how to compute the path probabilities from the training sentences and an illustration of both parsing and perplexity computation for a test sentencesince there are only five training sentences a number of the arcs of the original grammar are lost after trainingthis is a problem to be aware of in building grammars from example sentencesin the absence of a sufficient amount of training data some arcs will inevitably be zeroed outunless it is desired to intentionally filter these out as being outside of the new domain one can insert some arbitrarily small probability for these arcs using for example an ngram backoff model and and the training sentences 1 144 quotone hundred and forty fourquot the training pairs for quothundredsplacequot above that have quothundredsplacequot on the lhs the count array for quothundredsplacequot digits hundred and end a total start 3 0 0 0 1 4 digits 0 1 0 2 0 3 hundred 0 0 1 1 0 2 and 0 0 0 1 0 1 a 0 1 0 0 0 1 the probability of a transition from start to digits within the parent node quothundredsplacequot is just 34 the ratio of the number of times quothundredsplacequot started with quotdigitsquot over the number of times it started with anythingparsing the phrase quotfour fifteenquot with the trained parser the initial stack15 childlparent left sibling path probability hundredsplaceinumber start 45 tensplaceinumber start 15 after quothundredsplacequot gets popped and expanded digitslhundredsplace start 4534 tensplaceinumber start 15 alhundredsplace start 4514 after quotdigits hundredsplacequot is popped and a match with quotfourquot is found endihundredsplace digits 23 hundredihundredsplace digits 13 tensplaceinumber start 15 at hundredsplace start 4514 after quotendjhundredsplace digitsquot is popped quothundredsplacequot has a solution in hand quotfourquot it now activates its only right sibling quottensplacequot this is a different instance of quottensplacequot from the one at the third place in the stackits left sibling is quothundredsplacequot rather than quotstartquot tensplaceinumber hundredsplace hundredihundredsplace digits tensplaceinumber start at hundredsplace start after quottensplacequot is expanded we have tensitensplace start 2335 hundredihundredsplace digits 13 tensplaceinumber start 15 at hundredsplace start 4514 teensitensplace start 2315 ohitensplace start 2315 quottensquot and quothundredquot will both get popped off and rejected because there is no match with the word quotfifteenquot quottensplacequot will also get popped and eventually rejected because nothing within quottensplacequot matches the digit quotfourquot a similar fate meets the quotaquot hypothesisfinally quotteensquot will be popped off and matched and quotendi tensplace teensquot will be inserted at the top with probability 10this answer will be returned to the parent quottensplacequot and two new hypotheses will be inserted at the top of the paths through the parse tree for the phrase quotfour fifteenquot with associated probabilities derived from the training data stack as follows onesplace number tensplace 35 end number tensplace 25 after the first one is rejected the 
second one finds a completed quotnumberquot rule and an empty input streamthe correct solution is now in handnotice that because quotteensquot was a relatively rare occurrence a number of incorrect hypotheses had to be pursued before the correct one was consideredcomputation of perplexity for the phrase quotfour fifteenquot these are the three transitions with associated probabilities following the appropriate paths in figure a1 thus for this example test sentence this comes out to about 14 words on average following a given word for this particular phrasethis is higher than the norm for numbers given the grammar again because of the rare occurrence of the quotteensquot node as well as the fact that there is no onesplacethis example is a bit too simple in general there would be multiple ways to get to a particular next word and there are also constraints which kill certain paths and make it necessary to readjust probabilities on the flyin practice one must find all possible ways to extend a word sequence computing total path probability for each one and then renormalize to assure that with probability 10 there is an advance to some next wordit is the normalized probability contribution of all paths that can reach the next word that is used to update the log p calculationthis research has benefited significantly from interactions with lynette hirschman and victor zuein addition jim glass david goodine and christine pao have all made significant contributions to the programming of the tina system for which i am deeply gratefuli would also like to thank several anonymous reviewers for their careful critiques the outcome of which was a substantially improved document
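The appendix walkthrough above can be made concrete with a short sketch. The Python fragment below is not from the paper (TINA itself was implemented in Common Lisp); it is a minimal illustration, with invented function names and data layout, of the two computations the appendix describes: estimating sibling-pair arc probabilities by counting and normalizing transitions within each parent (LHS) category, and computing test-set perplexity as 2^(-(1/N) * sum_i log2 P(w_i | w_1 ... w_{i-1})), the formula referred to in Section 3. The rule list is a stand-in in the spirit of the three-digit-number grammar, not the paper's actual five training parses.

```python
from collections import defaultdict
from math import log2

# Distinguished first and last siblings of every rule, as in the paper.
START, END = "<start>", "<end>"

def train_arc_probs(parsed_rules):
    """parsed_rules: iterable of (parent, [child, ...]) read off training parse trees.
    Returns, for each parent, P(right_sibling | left_sibling, parent) obtained by
    counting and normalizing sibling-pair frequencies within that parent."""
    counts = defaultdict(lambda: defaultdict(int))
    left_totals = defaultdict(lambda: defaultdict(int))
    for parent, children in parsed_rules:
        seq = [START] + list(children) + [END]
        for left, right in zip(seq, seq[1:]):
            counts[parent][(left, right)] += 1
            left_totals[parent][left] += 1
    return {parent: {(l, r): c / left_totals[parent][l] for (l, r), c in pairs.items()}
            for parent, pairs in counts.items()}

def perplexity(word_probs):
    """Test-set perplexity 2 ** (-(1/N) * sum log2 p), where word_probs holds
    P(w_i | w_1 ... w_{i-1}) for every word, including the end-of-sentence word."""
    return 2 ** (-sum(log2(p) for p in word_probs) / len(word_probs))

if __name__ == "__main__":
    # A stand-in rule set in the spirit of the appendix grammar for spoken
    # numbers; these are NOT the paper's actual training parses.
    rules = [
        ("number", ["hundreds_place", "tens_place"]),
        ("number", ["tens_place", "ones_place"]),
        ("hundreds_place", ["digits", "hundred", "and"]),
        ("hundreds_place", ["digits", "hundred"]),
        ("tens_place", ["tens", "digits"]),
        ("tens_place", ["teens"]),
    ]
    probs = train_arc_probs(rules)
    # e.g. within "hundreds_place": P(and | hundred) = 0.5, P(<end> | hundred) = 0.5
    print(probs["hundreds_place"])
    # Toy path: three word transitions plus the end-of-sentence transition.
    print(perplexity([0.75, 0.5, 1.0, 0.5]))
```

With real data, the (parent, children) pairs would simply be read off each training parse tree, exactly as the appendix does for its five number phrases; constraint failures at run time would then require the on-the-fly renormalization of these probabilities that the text mentions.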
J92-1004
TINA: a natural language system for spoken language applications. A new natural language system, TINA, has been developed for applications involving spoken language tasks. TINA integrates key ideas from context-free grammars, augmented transition networks, and the unification concept. TINA provides a seamless interface between syntactic and semantic analysis and also produces a highly constraining probabilistic language model to improve recognition performance. An initial set of context-free rewrite rules, provided by hand, is first converted to a network structure. Probability assignments on all arcs in the network are obtained automatically from a set of example sentences. The parser uses a stack-decoding search strategy with a top-down control flow and includes a feature-passing mechanism to deal with long-distance movement, agreement, and semantic constraints. TINA provides an automatic sentence generation capability that has been effective for identifying overgeneralization problems, as well as in producing a word-pair language model for a recognizer. The parser is currently integrated with the MIT SUMMIT recognizer for use in two application domains, with the parser screening recognizer outputs either at the sentential level or to filter partial theories during the active search process. We propose the language understanding system TINA, which integrates key ideas from context-free grammars, augmented transition networks, and unification concepts.
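As a rough sketch of the rule-to-network conversion mentioned in this summary, the following Python fragment (an assumed simplification; the rule set shown is invented and the real TINA grammar is richer) merges all hand-written rules that share a left-hand side into a single network of sibling transitions per parent category; arc probabilities are then estimated by counting traversals in parsed example sentences, as in the appendix example above.

```python
from collections import defaultdict

# Hypothetical hand-written rules for a small digits grammar (invented for this
# sketch; TINA's actual rule set is not reproduced in the text above).
rules = [
    ("number", ["hundreds-place", "tens-place", "ones-place"]),
    ("number", ["tens-place", "ones-place"]),
    ("hundreds-place", ["digits", "hundred"]),
    ("hundreds-place", ["digits", "hundred", "and"]),
    ("hundreds-place", ["a", "hundred"]),
]

def rules_to_network(rules):
    """Merge every rule sharing a left-hand side into one transition network.
    network[parent] holds (previous-child, next-child) arcs, with pseudo-nodes
    "start" and "end" bracketing each rule body; probabilities on these arcs
    are later estimated by counting traversals in parsed training sentences."""
    network = defaultdict(set)
    for parent, children in rules:
        path = ["start"] + children + ["end"]
        for prev, nxt in zip(path, path[1:]):
            network[parent].add((prev, nxt))
    return network

net = rules_to_network(rules)
print(sorted(net["hundreds-place"]))
# arcs such as ('start', 'digits'), ('digits', 'hundred'), ('hundred', 'and'),
# ('hundred', 'end'), ('and', 'end'), ('start', 'a'), ('a', 'hundred')
```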
classbased ngram models of natural language we address the problem of predicting a word from previous words in a sample of text in particular we discuss ngram models based on classes of words we also discuss several statistical algorithms for assigning words to classes based on the frequency of their cooccurrence with other words we find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings depending on the nature of the underlying statistics ibm t j watson research center we address the problem of predicting a word from previous words in a sample of textin particular we discuss ngram models based on classes of wordswe also discuss several statistical algorithms for assigning words to classes based on the frequency of their cooccurrence with other wordswe find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings depending on the nature of the underlying statisticsin a number of natural language processing tasks we face the problem of recovering a string of english words after it has been garbled by passage through a noisy channelto tackle this problem successfully we must be able to estimate the probability with which any particular string of english words will be presented as input to the noisy channelin this paper we discuss a method for making such estimateswe also discuss the related topic of assigning words to classes according to statistical behavior in a large body of textin the next section we review the concept of a language model and give a definition of ngram modelsin section 3 we look at the subset of ngram models in which the words are divided into classeswe show that for n 2 the maximum likelihood assignment of words to classes is equivalent to the assignment for which the average mutual information of adjacent classes is greatestfinding an optimal assignment of words to classes is computationally hard but we describe two algorithms for finding a suboptimal assignmentin section 4 we apply mutual information to two other forms of word clusteringfirst we use it to find pairs of words that function together as a single lexical entitythen by examining the probability that two words will appear within a reasonable distance of one another we use it to find classes that have some loose semantic coherencein describing our work we draw freely on terminology and notation from the mathematical theory of communicationthe reader who is unfamiliar with this field or who has allowed his or her facility with some of its concepts to fall into disrepair may profit from a brief perusal of feller and gallagher in the first of these the reader should focus on conditional probabilities and on markov chains in the second on entropy and mutual informationsourcechannel setupfigure 1 shows a model that has long been used in automatic speech recognition and has recently been proposed for machine translation and for automatic spelling correction in automatic speech recognition y is an acoustic signal in machine translation y is a sequence of words in another language and in spelling correction y is a sequence of characters produced by a possibly imperfect typistin all three applications given a signal y we seek to determine the string of english words w which gave rise to itin general many different word strings can give rise to the same signal and so we cannot hope to recover w successfully in all caseswe can however minimize our probability of error by 
choosing as our estimate of w that string w for which the a posteriori probability of w given y is greatestfor a fixed choice of y this probability is proportional to the joint probability of and y which as shown in figure 1 is the product of two terms the a priori probability of w and the probability that y will appear at the output of the channel when is placed at the inputthe a priori probability of w pr is the probability that the string w will arise in englishwe do not attempt a formal definition of english or of the concept of arising in englishrather we blithely assume that the production of english text can be characterized by a set of conditional probabilities pr in terms of which the probability of a string of words w can be expressed as a product here wki1 represents the string wi w2 wk_i in the conditional probability pr we call wk1 the history and wk the predictionwe refer to a computational mechanism for obtaining these conditional probabilities as a language modeloften we must choose which of two different language models is the better onethe performance of a language model in a complete system depends on a delicate interplay between the language model and other components of the systemone language model may surpass another as part of a speech recognition system but perform less well in a translation systemhowever because it is expensive to evaluate a language model in the context of a complete system we are led to seek an intrinsic measure of the quality of a language modelwe might for example use each language model to compute the joint probability of some collection of strings and judge as better the language model that yields the greater probability the perplexity of a language model with respect to a sample of text s is the reciprocal of the geometric average of the probabilities of the predictions in s if s has i s i words then the perplexity is pr 11s1thus the language model with the smaller perplexity will be the one that assigns the larger probability to s because the perplexity depends not only on the language model but also on the text with respect to which it is measured it is important that the text be representative of that for which the language model is intendedbecause perplexity is subject to sampling error making fine distinctions between language models may require that the perplexity be measured with respect to a large samplein an ngram language model we treat two histories as equivalent if they end in the same n 1 words ie we assume that for k n pr is equal to pr for a vocabulary of size v a 1gram model has v 1 independent parameters one for each word minus one for the constraint that all of the probabilities add up to 1a 2gram model has v independent parameters of the form pr and v 1 of the form pr for a total of v2 1 independent parametersin general an ngram model has vquot 1 independent parameters vquot1 of the form pr which we call the ordern parameters plus the 17n11 parameters of an gram modelwe estimate the parameters of an ngram model by examining a sample of text tf which we call the training text in a process called trainingif c is the number of times that the string w occurs in the string 1t then for a 1gram language model the maximum likelihood estimate for the parameter pr is ctto estimate the parameters of an ngram model we estimate the parameters of the gram model that it contains and then choose the ordern parameters so as to maximize pr thus the ordern parameters are we call this method of parameter estimation sequential maximum 
likelihood estimationwe can think of the ordern parameters of an ngram model as constituting the transition matrix of a markov model the states of which are sequences of n 1 wordsthus the probability of a transition between the state w1w2 wn1 and the state w2w3 wn is pr the steadystate distribution for this transition matrix assigns a probability to each gram which we denote swe say that an ngram language model is consistent if for each string w71 the probability that the model assigns to win1 is ssequential maximum likelihood estimation does not in general lead to a consistent model although for large values of t the model will be very nearly consistentmaximum likelihood estimation of the parameters of a consistent ngram language model is an interesting topic but is beyond the scope of this paperthe vocabulary of english is very large and so even for small values of n the number of parameters in an ngram model is enormousthe ibm tangora speech recognition system has a vocabulary of about 20000 words and employs a 3gram language model with over eight trillion parameters we can illustrate the problems attendant to parameter estimation for a 3gram language model with the data in table 1here we show the number of 1 2 and 3grams appearing with various frequencies in a sample of 365893263 words of english text from a variety of sourcesthe vocabulary consists of the 260740 different words plus a special number of ngrams with various frequencies in 365893263 words of running text unknown word into which all other words are mappedof the 6799 x 1010 2grams that might have occurred in the data only 14494217 actually did occur and of these 8045024 occurred only once eachsimilarly of the 1773 x 1016 3grams that might have occurred only 75349888 actually did occur and of these 53737350 occurred only once eachfrom these data and turing formula we can expect that maximum likelihood estimates will be 0 for 147 percent of the 3grams and for 22 percent of the 2grams in a new sample of english textwe can be confident that any 3gram that does not appear in our sample is in fact rare but there are so many of them that their aggregate probability is substantialas n increases the accuracy of an ngram model increases but the reliability of our parameter estimates drawn as they must be from a limited training text decreasesjelinek and mercer describe a technique called interpolated estimation that combines the estimates of several language models so as to use the estimates of the more accurate models where they are reliable and where they are unreliable to fall back on the more reliable estimates of less accurate modelsif pri is the conditional probability as determined by the jth language model then the interpolated estimate pr is given by given values for pr 0 the a1 are chosen with the help of the them algorithm so as to maximize the probability of some additional sample of text called the heldout data when we use interpolated estimation to combine the estimates from 1 2 and 3gram models we choose the as to depend on the history w1 only through the count of the 2gram we expect that where the count of the 2gram is high the 3gram estimates will be reliable and where the count is low the estimates will be unreliablewe have constructed an interpolated 3gram model in which we have divided the as into 1782 different sets according to the 2gram countswe estimated these as from a heldout sample of 4630934 wordswe measure the performance of our model on the brown corpus which contains a variety of english text and is 
not included in either our training or heldout data the brown corpus contains 1014312 words and has a perplexity of 244 with respect to our interpolated modelclearly some words are similar to other words in their meaning and syntactic functionwe would not be surprised to learn that the probability distribution of words in the vicinity of thursday is very much like that for words in the vicinity of fridayof peter f brown and vincent j della pietra classbased ngram models of natural language course they will not be identical we rarely hear someone say thank god it is thursday or worry about thursday the 13thif we can successfully assign words to classes it may be possible to make more reasonable predictions for histories that we have not previously seen by assuming that they are similar to other histories that we have seensuppose that we partition a vocabulary of v words into c classes using a function 7r which maps a word wi into its class ciwe say that a language model is an ngram class model if it is an ngram language model and if in addition for 1 k n independent parameters v c of the form pr plus the cquot 1 independent parameters of an ngram language model for a vocabulary of size c thus except in the trivial cases in which c v or n 1 an ngram class language model always has fewer independent parameters than a general ngram language modelgiven training text tr the maximum likelihood estimates of the parameters of a 1gram class model are where by c we mean the number of words in tf for which the class is c from these equations we see that since c r pr pr pr ctfor a 1gram class model the choice of the mapping it has no effectfor a 2gram class model the sequential maximum likelihood estimates of the order2 parameters maximize pr or equivalently log pr and are given by by definition pr pr pr and so for sequential maximum likelihood estimation we have since c and ec c tends to the relative frequency of ci c2 as consecutive classes in the training texttherefore since ew c tends to the relative frequency of w2 in the training text and hence to pr we must have in the limit where h is the entropy of the 1gram word distribution and is the average mutual information of adjacent classesbecause l depends on 7r only through this average mutual information the partition that maximizes l is in the limit also the partition that maximizes the average mutual information of adjacent classeswe know of no practical method for finding one of the partitions that maximize the average mutual informationindeed given such a partition we know of no practical method for demonstrating that it does in fact maximize the average mutual informationwe have however obtained interesting results using a greedy algorithminitially we assign each word to a distinct class and compute the average mutual information between adjacent classeswe then merge that pair of classes for which the loss in average mutual information is leastafter v c of these merges c classes remainoften we find that for classes obtained in this way the average mutual information can be made larger by moving some words from one class to anothertherefore after having derived a set of classes from successive merges we cycle through the vocabulary moving each word to the class for which the resulting partition has the greatest average mutual informationeventually no potential reassignment of a word leads to a partition with greater average mutual informationat this point we stopit may be possible to find a partition with higher average mutual information by 
simultaneously reassigning two or more words but we regard such a search as too costly to be feasibleto make even this suboptimal algorithm practical one must exercise a certain care in implementationthere are approximately ck ck and that we now wish to investigate the merge of ck with ck pr ie the probability that a word in class ck follows a word in class cklet and let the average mutual information remaining after v k merges is we use the notation i j to represent the cluster obtained by merging ck and ck if we know ikso and sk then the majority of the time involved in computing ik is devoted to computing the sums on the second line of equation each of these sums has approximately v k terms and so we have reduced the problem of evaluating ik from one of order v2 to one of order v we can improve this further by keeping track of those pairs 1m for which pk is different from 0we recall from table 1 for example that of the 6799 x 1010 2grams that might have occurred in the training data only 14494217 actually did occurthus in this case the sums required in equation have on average only about 56 nonzero terms instead of 260741 as we might expect from the size of the vocabulary by examining all pairs we can find that pair i j for which the loss in average mutual information lk ik is leastwe complete the step by merging ck and ck to form a new cluster ck_i if j k we rename ck as ck_i and for 1 ij we set cki to ckobviously iki ikthe values of pk1 prk_i and qk_1 can be obtained easily from pk plk prk and qkif 1 and m both denote indices neither of which is equal to either i or j then it is easy to establish that finally we must evaluate sk_1 and lk_1 from equations 15 and 16thus the entire update process requires something on the order of v2 computations in the course of which we will determine the next pair of clusters to mergethe algorithm then is of order v3although we have described this algorithm as one for finding clusters we actually determine much moreif we continue the algorithm for v 1 merges then we will have a single cluster which of course will be the entire vocabularythe order in which clusters are merged however determines a binary tree the root of which corresponds reps representatives representative rep sample subtrees from a 1000word mutual information tree to this single cluster and the leaves of which correspond to the words in the vocabularyintermediate nodes of the tree correspond to groupings of words intermediate between single words and the entire vocabularywords that are statistically similar with respect to their immediate neighbors in running text will be close together in the treewe have applied this treebuilding algorithm to vocabularies of up to 5000 wordsfigure 2 shows some of the substructures in a tree constructed in this manner for the 1000 most frequent words in a collection of office correspondencebeyond 5000 words this algorithm also fails of practicalityto obtain clusters for larger vocabularies we proceed as followswe arrange the words in the vocabulary in order of frequency with the most frequent words first and assign each of the first c words to its own distinct classat the first step of the algorithm we assign the st most probable word to a new class and merge that pair among the resulting c 1 classes for which the loss in average mutual information is leastat the kth step of the algorithm we assign the th most probable word to a new classthis restores the number of classes to c 1 and we again merge that pair for which the loss in average mutual 
information is leastafter v c steps each of the words in the vocabulary will have been assigned to one of c classeswe have used this algorithm to divide the 260741word vocabulary of table 1 into 1000 classestable 2 contains examples of classes that we find particularly interestingtable 3 contains examples that were selected at randomeach of the lines in the tables contains members of a different classthe average class has 260 words and so to make the table manageable we include only words that occur at least ten times and we include no more than the ten most frequent words of any class the degree to which the classes capture both syntactic and semantic aspects of english is quite surprising given that they were constructed from nothing more than counts of bigramsthe class that tha theat is interesting because although tha and theat are not english words the computer has discovered that in our data each of them is most often a mistyped thattable 4 shows the number of class 1 2 and 3grams occurring in the text with various frequencieswe can expect from these data that maximum likelihood estimates will assign a probability of 0 to about 38 percent of the class 3grams and to about 02 percent of the class 2grams in a new sample of english textthis is a substantial improvement over the corresponding numbers for a 3gram language model which are 147 percent for word 3grams and 22 percent for word 2grams but we have achieved this at the expense of precision in the modelwith a class model we distinguish between two different words of the same class only according to their relative frequencies in the text as a wholelooking at the classes in tables 2 and 3 we feel that this is reasonable for pairs like john and george or liberal and conservative but perhaps less so for pairs like little and prima or minister and moverwe used these classes to construct an interpolated 3gram class model using the same training text and heldout data as we used for the wordbased language model we discussed abovewe measured the perplexity of the brown corpus with respect to this model and found it to be 271we then interpolated the classbased estimators with the wordbased estimators and found the perplexity of the test data to be 236 which is a small improvement over the perplexity of 244 we obtained with the wordbased modelin the previous section we discussed some methods for grouping words together according to the statistical similarity of their surroundingshere we discuss two additional types of relations between words that can be discovered by examining various cooccurrence statisticsthe mutual information of the pair w1 and w2 as adjacent words is if w2 follows wi less often than we would expect on the basis of their independent frequencies then the mutual information is negativeif w2 follows wi more often than we would expect then the mutual information is positivewe say that the pair w1 w2 is sticky if the mutual information for the pair is substantially greater than 0in table 5 we list the 20 stickiest pairs of words found in a 59537595word sample of text from the canadian parliamentthe mutual information for each pair is given in bits which corresponds to using 2 as the base of the logarithm in equation 18most of the pairs are proper names such as pontius pilate or foreign phrases that have been adopted into english such as mutatis mutandis and avant gardethe mutual information for hum pty dumpty 225 bits means that the pair occurs roughly 6000000 times more than one would expect from the individual frequencies of 
hum pty and dumptynotice that the property of being a sticky pair is not symmetric and so while hum pty dumpty forms a sticky pair dumpty hum pty does notinstead of seeking pairs of words that occur next to one another more than we would expect we can seek pairs of words that simply occur near one another more than we would expectwe avoid finding sticky pairs again by not considering pairs of words that occur too close to one anotherto be precise let prnear be the probability that a word chosen at random from the text is w1 and that a second word chosen at random from a window of 1001 words centered on wi but excluding the words in a window of 5 centered on w1 is w2we say that w1 and w2 are semantically sticky if prnear is much larger than pr pr unlike stickiness semantic stickiness is symmetric so that if w1 sticks semantically to w2 then w2 sticks semantically to w1in table 6 we show some interesting classes that we constructed using prnear in a manner similar to that described in the preceding sectionsome classes group together words having the same morphological stem such as performance performed perform performs and performingother classes contain words that are semantically related but have different stems such as attorney counsel trial court and judgewe have described several methods here that we feel clearly demonstrate the value of simple statistical techniques as allies in the struggle to tease from words their linguistic secretshowever we have not as yet demonstrated the full value of the secrets thus gleanedat the expense of a slightly greater perplexity the 3gram model with word classes requires only about onethird as much storage as the 3gram language model in which each word is treated as a unique individual even when we combine the two models we are not able to achieve much improvement in the perplexitynonetheless we are confident that we will eventually be able to make significant improvements to 3gram language models with the help of classes of the kind that we have described herethe authors would like to thank john lafferty for his assistance in constructing word classes described in this paper
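A small sketch of the adjacency "stickiness" computation described above, under toy assumptions (a twelve-word nursery-rhyme text stands in for the 59,537,595-word Hansard sample, and the function name is my own): it scores each adjacent word pair by mutual information in bits, so that pairs like humpty dumpty rise to the top.

```python
from collections import Counter
from math import log2

def stickiness(bigram_counts, bigram_total, unigram_counts, unigram_total):
    """Mutual information (in bits) of each adjacent pair:
       MI(w1, w2) = log2( P(w1 w2) / (P(w1) P(w2)) ).
    Large positive values mark sticky pairs; negative values mark pairs that
    co-occur less often than chance."""
    mi = {}
    for (w1, w2), c in bigram_counts.items():
        p12 = c / bigram_total
        p1 = unigram_counts[w1] / unigram_total
        p2 = unigram_counts[w2] / unigram_total
        mi[(w1, w2)] = log2(p12 / (p1 * p2))
    return mi

# Toy text standing in for the Canadian parliament sample used in the paper.
words = "humpty dumpty sat on a wall humpty dumpty had a great fall".split()
unigrams = Counter(words)
bigrams = Counter(zip(words, words[1:]))
scores = stickiness(bigrams, len(words) - 1, unigrams, len(words))
for pair, bits in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(pair, round(bits, 2))
```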
J92-4003
Class-based n-gram models of natural language. We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics. We propose a window method, introducing the concept of semantic stickiness of two words as the relatively frequent close occurrence between them.
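The central quantity of the model summarized here can be written down compactly. The sketch below is a toy illustration with invented text and a hand-made class map (not the IBM implementation or its vocabulary): it estimates Pr(w2 | w1) as Pr(w2 | c2) * Pr(c2 | c1), so that words in the same class, such as the day names, share bigram statistics.

```python
from collections import Counter

def class_bigram_prob(w1, w2, word_class, class_bigrams, class_counts, word_counts):
    """P(w2 | w1) under a 2-gram class model:
       P(w2 | w1) = P(w2 | c2) * P(c2 | c1), where c_i is the class of w_i."""
    c1, c2 = word_class[w1], word_class[w2]
    p_w2_given_c2 = word_counts[w2] / class_counts[c2]
    p_c2_given_c1 = class_bigrams[(c1, c2)] / class_counts[c1]
    return p_w2_given_c2 * p_c2_given_c1

# Hypothetical tiny training text and a hand-made class map (both invented here).
text = "we met on thursday we met on friday they met on monday".split()
word_class = {"we": "PRON", "they": "PRON", "met": "VERB", "on": "PREP",
              "thursday": "DAY", "friday": "DAY", "monday": "DAY"}

word_counts = Counter(text)
class_counts = Counter(word_class[w] for w in text)
class_bigrams = Counter((word_class[a], word_class[b]) for a, b in zip(text, text[1:]))

# With classes, "on friday" and "on monday" share statistics through (PREP, DAY).
print(class_bigram_prob("on", "friday", word_class, class_bigrams, class_counts, word_counts))
print(class_bigram_prob("on", "monday", word_class, class_bigrams, class_counts, word_counts))
```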
a problem for rst the need for multilevel discourse analysis quotplanning text for advisory dialogues capturing intentional rhetorical and attentional informationquot rhetorical structure theory argues that in most coherent discourse consecutive discourse elements are related by a small set of rhetorical relationsmoreover rst suggests that the information conveyed in a discourse over and above what is conveyed in its component clauses can be derived from the rhetorical relationbased structure of the discoursea large number of natural language generation systems rely on the rhetorical relations defined in rst to impose structure on multisentential text in addition many descriptive studies of discourse have employed rst however recent work by moore and paris noted that rst cannot be used as the sole means of controlling discourse structure in an interactive dialogue system because rst representations provide insufficient information to support the generation of appropriate responses to quotfollowup questionsquot the basic problem is that an rst representation of a discourse does not fully specify the intentional structure of that discourseintentional structure is crucial for responding effectively to questions that address a previous utterance without a record of what an utterance was intended to achieve it is impossible to elaborate or clarify that utterance1 further consideration has led us to conclude that the difficulty observed by moore and paris stems from a more fundamental problem with rst analysesrst presumes that in general there will be a single preferred rhetorical relation holding between consecutive discourse elementsin fact as has been noted in other work on discourse structure discourse elements are related simultaneously on multiple levelsin this paper we focus on two levels of analysisthe first involves the relation between the information conveyed in consecutive elements of a coherent discoursethus for example one utterance may describe an event that can be presumed to be the because of another event described in the subsequent utterancethis causal relation is at what we will call the informational levelthe second level of relation results from the fact that discourses are produced to effect changes in the mental state of the discourse participantsin coherent discourse a speaker is carrying out a consistent plan to achieve the intended changes and consecutive discourse elements are related to one another by means of the ways in which they participate in that planthus one utterance may be intended to increase the likelihood that the hearer will come to believe the subsequent utterance we might say that the first utterance is intended to provide evidence for the secondsuch an evidence relation is at what we will call the intentional levelrst acknowledges that there are two types of relations between discourse elements distinguishing between subject matter and presentational relationsaccording to mann and thompson islubject matter relations are those whose intended effect is that the hearer recognize the relation in question presentational relations are those whose intended effect is to increase some inclination in the hearerlquot 2 thus subject matter relations are informational presentational relations are intentionalhowever rst analyses presume that for any two consecutive elements of a coherent discourse one rhetorical relation will be primarythis means that in an rst analysis of a discourse consecutive elements will either be related by an informational or an intentional 
relationin this paper we argue that a complete computational model of discourse structure cannot depend upon analyses in which the informational and intentional levels of relation are in competitionrather it is essential that a discourse model include both levels of analysiswe show that the assumption of a single rhetorical relation between consecutive discourse elements is one of the reasons that rst analyses are inherently ambiguouswe also show that this same assumption underlies the problem observed by moore and parisfinally we point out that a straightforward approach to revising rst by modifying the definitions of the subject matter relations to indicate associated presentational analyses cannot succeedsuch an approach presumes a onetoone mapping between the ways in which information can be related and the ways in which intentions combine into a coherent plan to affect a hearer mental stateand no such mapping existswe thus conclude that in rst and indeed in any viable theory of discourse structure analyses at the informational and the intentional levels must coexistto illustrate the problem consider the following examplean example example 1 a plausible rst analysis of is that there is an evidence relation between utterance the nucleus of the relation and utterance the satellitethis analysis is licensed by the definition of this relation relation name evidence constraints on nucleus h might not believe nucleus to a degree satisfactory to s constraints on satellite h believes satellite or will find it credibleconstraints on nucleus satellite combination h comprehending satellite increases h belief of nucleuseffect h belief of nucleus is increasedhowever an equally plausible analysis of this discourse is that utterance is the nucleus of a volitional because relation as licensed by the definition constraints on nucleus presents a volitional action or else a situation that could have arisen from a volitional actionconstraints on nucleus satellite combination satellite presents a situation that could have caused the agent of the volitional action in nucleus to perform that action without the presentation of satellite h might not regard the action as motivated or know the particular motivation nucleus is more central to s purposes in putting forth the nucleussatellite combination than satellite iseffect h recognizes the situation presented in satellite as a because for the volitional action presented in nucleusit seems clear that example 1 satisfies both the definition of evidence a presentational relation and volitional because a subject matter relationin their formulation of rst mann and thompson note that potential ambiguities such as this can arise in rst but they argue that one analysis will be preferred depending on the intent that the analyst ascribes to the speaker imagine that a satellite provides evidence for a particular proposition expressed in its nucleus and happens to do so by citing an attribute of some element expressed in the nucleusthen the conditions for both evidence and elaboration are fulfilledif the analyst sees the speaker purpose as increasing the hearer belief of the nuclear propositions and not as getting the hearer to recognize the object attribute relationship then the only analysis is the one with the evidence relation this argument is problematicthe purpose of all discourse is ultimately to affect a change in the mental state of the hearereven if a speaker aims to get a hearer to recognize some ob j ect attribute relationship she has some underlying intention 
for doing that she wants to enable the hearer to perform some action or to increase the hearer belief in some proposition etctaken seriously mann and thompson strategy for dealing with potential ambiguities between presentational and subject matter relations would result in analyses that contain only presentational relations since these are what most directly express the speaker purposebut as we argue below a complete model of discourse structure must maintain both levels of relationwe begin by showing that in discourse interpretation recognition may flow from the informational level to the intentional level or vice versain other words a hearer may be able to determine what the speaker is trying to do because of what the hearer knows about the world or what she knows about what the speaker believes about the worldalternatively the hearer may be able to figure out what the speaker believes about the world by recognizing what the speaker is trying to do in the discoursethis point has previously been made by grosz and sidner 4 returning to our initial example suppose that the hearer knows that house bill 1711 places stringent environmental controls on manufacturing processesfrom this she can infer that supporting big business will cause one to oppose this billthen because she knows that one way for the speaker to increase a hearer belief in a proposition is to describe a plausible because of that proposition she can conclude that is intended to increase her belief in ie is evidence for the hearer reasons from informational coherence to intentional coherencealternatively suppose that the hearer has no idea what house bill 1711 legislateshowever she is in a conversational situation in which she expects the speaker to support the claim that bush will veto itfor instance the speaker and hearer are arguing and the hearer has asserted that bush will not veto any additional bills before the next electionagain using the knowledge that one way for the speaker to increase her belief in a proposition is to describe a plausible because of that proposition the hearer in this case can conclude that house bill 1711 must be something that a big business supporter would opposein other words that may be a because of here the reasoning is from intentional coherence to informational coherencenote that this situation illustrates how a discourse can convey more than the sum of its partsthe speaker not only conveys the propositional content of and but also the implication relation between and supporting big business entails opposition to house bill 17116 it is clear from this example that any interpretation system must be capable of recognizing both intentional and informational relations between discourse elements and must be able to use relations recognized at either level to facilitate recognition at the other levelwe are not claiming that interpretation always depends on the recognition of relations at both levels but rather that there are obvious cases where it doesan interpretation system therefore needs the capability of maintaining both levels of relationit is also crucial that a generation system have access to both the intentional and informational relations underlying the discourses it producesfor example consider the following discourse s come home by 500 then we can go to the hardware store before it closesh we do not need to go to the hardware store i borrowed a saw from janeat the informational level specifies a condition for doing getting to the hardware store before it closes depends on h coming 
home by 5007 how should s respond when h indicates in and that it is not necessary to go to the hardware storethis depends on what s intentions are in uttering and in uttering s may be trying to increase h ability to perform the act described in s believes that h does not realize that the hardware store closes early tonightin this case s may respond to h by saying on the other hand in and s may be trying to motivate h to come home early say because s is planning a surprise party for h then she may respond to h with something like the following s come home by 500 anyway or else you will get caught in the storm that is moving inwhat this example illustrates is that a generation system cannot rely only on informational level analyses of the discourse it producesthis is precisely the point that moore and paris have noted if the generation system is playing the role of s then it needs a record of the intentions underlying utterances and in order to determine how to respond to and of course if the system can recover the intentional relations from the informational ones then it will suffice for the system to record only the latterhowever as moore and paris have argued such recovery is not possible because there is not a onetoone mapping between intentional and informational relationsthe current example illustrates this last pointat the informational level utterance is a condition for but on one reading of the discourse there is an enablement relation at the intentional level between and while on another reading there is a motivation relationmoreover the nucleussatellite structure of the informational level relation is maintained only on one of these readingsutterance is the nucleus of the condition relation and similarly it is the nucleus of the enablement relation on the first readinghowever on the second reading it is utterance that is the nucleus of the motivation relationjust as one cannot always recover intentional relations from informational ones neither can one always recover informational relations from intentional onesin the second reading of the current example the intentional level motivation relation is realized first with a condition relation between and and later with an otherwise relation in and we have illustrated that natural language interpretation and natural language generation require discourse models that include both the informational and the intentional relations between consecutive discourse elementsrst includes relations of both types but commits to discourse analyses in which a single relation holds between each pair of elementsone might imagine modifying rst to include multirelation definitions ie definitions that ascribe both an intentional and an informational relation to consecutive discourse elementssuch an approach was suggested by hovy who augmented rhetorical relation definitions to include a quotresultsquot fieldalthough hovy did not cleanly separate intentional from informational level relations a version of his approach might be developed in which definitions are given only for informational level relations and the results field of each definition is used to specify an associated intentional relationhowever this approach cannot succeed for several reasonsfirst as we have argued there is not a fixed onetoone mapping between intentional and informational level relationswe showed for example that a condition relation may hold at the informational level between consecutive discourse elements at the same time as either an enablement or a motivation relation holds 
at the intentional levelsimilarly we illustrated that either a condition or an otherwise relation may hold at the informational level at the same time as a motivational relation holds at the intentional levelthus an approach such as hovy that is based on multirelation definitions will result in a proliferation of definitionsindeed there will be potentially n x m relations created from a theory that initially includes n informational relations and m intentional relationsmoreover by combining informational and intentional relations into single definitions one makes it difficult to perform the discourse analysis in a modular fashionas we showed earlier it is sometimes useful first to recognize a relation at one level and to use this relation in recognizing the discourse relation at the other levelin addition the multirelation definition approach faces an even more severe challengein some discourses the intentional structure is not merely a relabeling of the informational structurea simple extension of our previous example illustrates the point s come home by 500 then we can go to the hardware store before it closes that way we can finish the bookshelves tonighta plausible intentional level analysis of this discourse which follows the second reading we gave earlier is that finishing the bookshelves motivates going to the hardware store and that and together motivate coming home by 500 coming home by 500 is the nucleus of the entire discourse it is the action that s wishes h to perform this structure is illustrated below motivation motivation at the informational level this discourse has a different structurefinishing the bookshelves is the nuclear propositioncoming home by 500 is a condition on going to the hardware store and together these are a condition on finishing the bookshelves the intentional and informational structures for this discourse are not isomorphicthus they cannot be produced simultaneously by the application of multiplerelation definitions that assign two labels to consecutive discourse elementsthe most obvious quotfixquot to rst will not workrst failure to adequately support multiple levels of analysis is a serious problem for the theory both from a computational and a descriptive point of viewwe are grateful to barbara grosz kathy mccoy cecile paris donia scott karen sparck jones and an anonymous reviewer for their comments on this researchjohanna moore work on this project is being supported by grants from the office of naval research cognitive and neural sciences division and the national science foundation
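One way to make the argument concrete is to store the two analyses side by side instead of forcing a single relation label. The sketch below is my own illustrative encoding, not a representation proposed in the paper: it records the hardware-store example as separate informational and intentional structures over the same three clauses, whose nuclei differ, so no single labeling of adjacent clauses could capture both.

```python
from dataclasses import dataclass

@dataclass
class Relation:
    name: str        # e.g. "condition" (informational) or "motivation" (intentional)
    nucleus: object  # a clause id or a nested Relation
    satellite: object

# Clause ids for the example: (a) come home by 5:00, (b) then we can go to the
# hardware store before it closes, (c) that way we can finish the bookshelves tonight.
a, b, c = "a", "b", "c"

# Informational level: (a) is a condition on (b), and (a, b) together are a
# condition on (c); the nucleus of the whole discourse is (c).
informational = Relation("condition", nucleus=c,
                         satellite=Relation("condition", nucleus=b, satellite=a))

# Intentional level: (c) motivates (b), and (b, c) together motivate (a);
# the nucleus of the whole discourse is (a).
intentional = Relation("motivation", nucleus=a,
                       satellite=Relation("motivation", nucleus=b, satellite=c))

# The two structures pick different nuclei, so they are not isomorphic and
# cannot be produced by a single multi-label relation over adjacent clauses.
print(informational.nucleus, intentional.nucleus)   # c a
```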
J92-4007
A problem for RST: the need for multi-level discourse analysis. We note that Rhetorical Structure Theory (RST) conflates the informational and intentional levels of discourse. We argue that both informational and intentional relations can hold between clauses simultaneously and independently.
introduction to the special issue on computational linguistics using large corpora the 1990s have witnessed a resurgence of interest in 1950sstyle empirical and statistical methods of language analysisempiricism was at its peak in the 1950s dominating a broad set of fields ranging from psychology to electrical engineering at that time it was common practice in linguistics to classify words not only on the basis of their meanings but also on the basis of their cooccurrence with other wordsfirth a leading figure in british linguistics during the 1950s summarized the approach with the memorable line quotyou shall know a word by the company it keepsquot regrettably interest in empiricism faded in the late 1950s and early 1960s with a number of significant events including chomsky criticism of ngrams in syntactic structures and minsky and papert criticism of neural networks in perceptrons perhaps the most immediate reason for this empirical renaissance is the availability of massive quantities of data more text is available than ever beforejust ten years ago the onemillion word brown corpus was considered large but even then there were much larger corpora such as the birmingham corpus today many locations have samples of text running into the hundreds of millions or even billions of wordscollections of this magnitude are becoming widely available thanks to data collection efforts such as the association for computational linguistics data collection initiative the european corpus initiative icame the british national corpus the linguistic data consortium the consortium for lexical research electronic dictionary research and standardization efforts such as the text encoding initiative coverage of unrestricted text rather than deep analysis of restricted domains att bell laboratories office 2b421 600 mountain ave murray hill nj 07974ibm tj watson research center pobox 704 j2h24 yorktown heights ny 105981 for more information on the acldci contact felicia hurewitz acldci room 619 williams hall university of pennsylvania philadelphia pa 191046305 usa 2158980083 2155732091 felunagicisupennedufor more information on the ldc contact elizabeth hodas linguistic data consortium room 441 williams hall university of pennsylvania philadelphia pa 191046305 usa 2158980464 2155732175 ehodasunagicisupennedusend email to smbowievaxoxfordacuk for information on the bnc to lexicalnrnsuedu for information on the clr and to eucorpcogsciedinburghacuk for information on the eciinformation on the londonlund corpus and other corpora available through icame can be found in the icame journal edited by stig johansson department of english university of oslo norwaythe case for the resurgence of empiricism in computational linguistics is nicely summarized in susan warwickarmstrong callforpapers for this special issue the increasing availability of machinereadable corpora has suggested new methods for studies in a variety of areas such as lexical knowledge acquisition grammar construction and machine translationthough common in the speech community the use of statistical and probabilistic methods to discover and organize data is relatively new to the field at largethe various initiatives currently under way to locate and collect machinereadable corpora have recognized the potential of using this data and are working toward making these materials available to the research communitygiven the growing interest in corpus studies it seems timely to devote an issue of cl to this topicin section 1 we review the experience of the 
speech recognition communitystochastic methods based on shannon noisy channel model have become the methods of choice within the speech communityknowledgebased approaches were tried during the first darpa speech recognition project in the early 1970s but have largely been abandoned in favor of stochastic approaches that have become the main focus of darpa more recent effortsin section 2 we discuss how this experience is influencing the language communitymany of the most successful speech techniques are achieving major improvements in performance in the language areain particular probabilistic taggers based on shannon noisy channel model are becoming the method of choice because they correctly tag 95 of the words in a new text a major improvement over earlier technologies that ignored lexical probabilities and other preferences that can be estimated statistically from corpus evidencein section 3 we discuss a number of frequencybased preferences such as collocations and word associationsalthough often ignored in the computational linguistics literature because they are difficult to capture with traditional parsing technology they can easily overwhelm syntactic factors four articles in this special issue take a first step toward preferencebased parsing an empirical alternative to the rational tradition of principlebased parsing atns unification etcin section 4 we discuss entropy and evaluation issues which have become relatively important in recent yearsin section 5 we discuss the application of noisy channel models to bilingual applications such as machine translation and bilingual lexicographyin section 6 we discuss the use of empirical methods in monolingual lexicography contrasting the exploratory data analysis view of statistics with other perspectives such as hypothesis testing and supervisedunsupervised learningtrainingthere are five articles in this special issue on computational lexicography using both the exploratory and the selforganizing approaches to statisticsover the past 20 years the speech community has reached a consensus in favor of empirical methodsas observed by waibel and lee in the introduction to their collection of reprints on speech recognition chapter 5 describes the knowledgebased approach proposed in the 1970s and early 1980sthe pure knowledgebased approach emulates human speech knowledge using expert systemsrulebased systems have had only limited success chapter 6 describes the stochastic approachmost successful largescale systems today use a stochastic approach a number of data collection efforts have helped to bring about this change in the speech community especially the texas instruments digit corpus timit and the darpa resource management database according to the linguistic data consortium the rm database was used by every paper that reported speech recognition results in the 1988 proceedings of ieee icassp the major technical society meeting where speech recognition results are reportedthis is especially significant given that abstracts for this meeting were due just a few months after the release of the corpus attesting to the speech recognition community hunger for standard corpora for development and evaluationback in the 1970s the more dataintensive methods were probably beyond the means of many researchers especially those working in universitiesperhaps some of these researchers turned to the knowledgebased approach because they could not afford the alternativeit is an interesting fact that most of the authors of the knowledgebased papers in chapter 5 
of waibel and lee have a university affiliation whereas most of the authors of the dataintensive papers in chapter 6 have an industrial affiliationfortunately as a result of improvements in computer technology and the increasing availability of data due to numerous data collection efforts the dataintensive methods are no longer restricted to those working in affluent industrial laboratoriesat the time of course the knowledgebased approach was not advocated on economic groundsrather the knowledgebased approach was advocated as necessary in order to deal with the lack of allophonic invariancethe mapping between phonemes and their allophonic realizations is highly variable and ambiguousthe phoneme t for example may be realized as a released stop in quottomquot as a flap in quotbutterquot or as a glottal stop in quotbottlequot two different phonemes may lead to the same allophonic variant in some contextsfor example quotwriterquot and quotriderquot are nearly identical in many dialects of american englishresidual differences such as the length of the preconsonantal vowel are easily overwhelmed by the context in which the word appearsthus if one says quotjoe is a rider of novelsquot listeners hear quotjoe is a writer of novelsquot while if one says quotjoe is a writer of horsesquot listeners hear quotjoe is a rider of horsesquot listeners usually have little problem with the wild variability and ambiguity of speech because they know what the speaker is likely to sayin most systems for sentence recognition such modifications must be viewed as a kind of noise that makes it more difficult to hypothesize lexical candidates given an input phonetic transcriptionto see that this must be the case we note that each phonological rule fin the utterance quotdid you hit it to tomquot results in irreversible ambiguity the phonological rule does not have a unique inverse that could be used to recover the underlying phonemic representation for a lexical itemfor example mlle tongue flap could have come from a t or a d the first darpa speech understanding project emphasized the use of highlevel constraints as a tool to disambiguate the allophonic information in the speech signal by understanding the messageat bbn researchers called their system hwim for they hoped to use nlp techniques such as atns to understand the sentences that they were trying to recognize even though the output of their front end was highly variable and ambiguousthe emphasis today on empirical methods in the speech recognition community is a reaction to the failure of knowledgebased approaches of the 1970sit has become popular once again to focus on highlevel natural language constraints in order to reduce the search spacebut this time ngram methods have become the methods of choice because they seem to work better than the alternatives at least when the search space is measured in terms of entropyideally we might hope that someday parsers might reduce entropy beyond that of ngrams but right now parsers seem to be more useful for other tasks such as understanding who did what to whom and less useful for predicting what the speaker is likely to sayin the midst of all of this excitement over highlevel knowledgedbased nlp techniques ibm formed a new speech group around the nucleus of an existing group that was moved from raleigh north carolina to yorktown heights early in 1972the raleigh group brought to yorktown a working speech recognition system that had been designed in accordance with prevailing antiempiricist attitudes of the time though 
it would soon serve as a foundation for the revival of empiricism in the speech and language communitiesthe front end of the raleigh system converted the speech signal first into a sequence of 80 filter bank outputs and then into a sequence of phonemelike labels using an elaborate set of handtuned rules that would soon be replaced with an automatically trained procedurethe back end converted these labels into a sequence of words using an artificial finitestate grammar that was so small that the finitestate machine could be written down on a single piece of papertoday many systems attempt to model unrestricted language using methods that will be discussed in section 3 but at the time it was standard practice to work with artificial grammars of this kindwhen it worked perfectly the front end produced a transcription of the speech signal such as might be produced by a human phonetician listening carefully to the original speechunfortunately it almost never worked perfectly even on so small a stretch as a single wordrapid phones such as flaps were often missed long phones such as liquids and stressed vowels were often broken into several separate segments and very often phones were simply mislabeledthe back end was designed to overcome these problems by navigating through the finitestate network applying a complicated set of handtuned penalties and bonuses to the various paths in order to favor those paths where the lowlevel acoustics matched the highlevel grammatical constraintsthis system of handtuned penalties and bonuses correctly recognized 35 of the sentences in the test setat the time this level of performance was actually quite impressive but these days one would expect much more now that most systems use parameters trained on real data rather than a complicated set of handtuned penalties and bonusesalthough the penalties and bonuses were sometimes thought of as probabilities the early raleigh system lacked a complete and unified probabilistic frameworkin a radical departure from the prevailing attitudes of the time the yorktown group turned to shannon theory of communication in the presence of noise and recast the speech recognition problem in terms of transmission through a noisy channelshannon theory of communication also known as information theory was originally developed at att bell laboratories to model communication along a noisy channel such as a telephone linesee fano for a wellknown secondary source on the subject or cover and thomas or bell cleary and witten for more recent treatmentsthe noisy channel paradigm can be applied to other recognition applications such as optical character recognition and spelling correctionimagine a noisy channel such as a speech recognition machine that almost hears an optical character recognition machine that almost reads or a typist who almost typesa sequence of good text goes into the channel and a sequence of corrupted text comes out the other endhow can an automatic procedure recover the good input text i from the corrupted output 0in principle one can recover the most likely input i by hypothesizing all possible input texts i and selecting the input text with the highest score prsymbolically where argmax finds the argument with the maximum scorethe prior probability pr is the probability that i will be presented at the input to the channelin speech recognition it is the probability that the talker utters i in spelling correction it is the probability that the typist intends to type iin practice the prior probability is unavailable and 
consequently we have to make do with a model of the prior probability such as the trigram modelthe parameters of the language model are usually estimated by computing various statistics over a large sample of textthe channel probability pr is the probability that 0 will appear at the output of the channel when i is presented at the input it is large if i is similar in some appropriate sense to 0 and small otherwisethe channel probability depends on the applicationin speech recognition for example the output for the word quotwriterquot may look similar to the word quotriderquot in character recognition this will not be the caseother examples are shown in table 1rather than rely on guesses for the values of the bonuses and penalties as the raleigh group had done the yorktown group used three levels of hidden markov models to compute the conditional probabilities necessary for the noisy channela markov model is a finite state machine with probabilities governing transitions between states and controlling the emission of output symbolsif the sequence of state transitions cannot be determined when the sequence of outputs is known the markov model is said to be hiddenin practice the forwardbackward algorithm is often used to estimate the values of the transition and emission parameters on the basis of corpus evidencesee furui for a brief description of the forwardbackward algorithm and for a longer tutorial on hmmsthe general procedure of which the forwardbackward algorithm is a special case was first published and shown to converge by baum the first level of the raleigh system converted spelling to phonemic base forms rather like a dictionary the second level dealt with the problems of allophonic variation mentioned above the third level modeled the front endat first the values of the parameters in these hmms were carefully constructed by hand but eventually they would all be replaced with estimates obtained by training on real data using statistical estimation procedures such as the forwardbackward algorithmthe advantages of training are apparent in table 2note the astounding improvement in performancedespite a few decoding problems which indicate limitations in the heuristic search procedure employed by the recognizer sentence accuracy had improved from 35 to 8283moreover training turned out to be important for speeding up the searchthe first row shows the results for the initial estimates which were very carefully prepared by two members of the group over several weeksdespite all of the careful hand work the search was so slow that only 10 of the 100 test sentences could be recognizedthe initial estimates were unusable without at least some trainingthese days most researchers find that they do not need to be nearly so careful in obtaining initial estimatesemboldened by this success the group began to explore other areas where training might be helpfulthey began by throwing out the phonological rulesthus they accepted only a single pronunciation for each wordbyoutter had to be pronounced butter and something had to be pronounced something and that was thatany change in these pronunciations was treated as a mislabeling from the front endafter training this simplified system correctly decoded 75 of 100 test sentences which was very encouragingfinally they removed the dictionary lookup hmm taking for the pronunciation of each word its spellingthus a word like throyough was assumed to have a pronunciation like tuh huh ruh oh uu guh huhafter training the system learned that with words like 1ate 
the front end often missed the e similarly it learned that g and h were often silentthis crippled system was still able to recognize 43 of 100 test sentences correctly as compared with 35 for the original raleigh systemthese results firmly established the importance of a coherent probabilistic approach to speech recognition and the importance of data for estimating the parameters of a probabilistic modelone by one pieces of the system that had been assiduously assembled by speech experts yielded to probabilistic modelingeven the elaborate set of handtuned rules for segmenting the frequency bank outputs into phonemesized segments would be replaced with training by the summer of 1977 performance had reached 95 correct by sentence and 994 correct by word a considerable improvement over the same system with handtuned segmentation rules progress in speech recognition at yorktown and almost everywhere else as well has continued along the lines drawn in these early experimentsas computers increased in power ever greater tracts of the heuristic wasteland opened up for colonization by probabilistic modelsas greater quantities of recorded data became available these areas were tamed by automatic training techniquestoday as indicated in the introduction of waibel and lee almost every aspect of most speech recognition systems is dominated by probabilistic models with parameters determined from datamany of the very same methods are being applied to problems in natural language processing by many of the very same researchersas a result the empirical approach has been adopted by almost all contemporary partofspeech programs bahl and mercer leech garside and atwell jelinek deroualt and merialdo garside leech and sampson church derose hindle kupiec ayuso et al demarcken karlsson boggess agarwal and davis merialdo and voutilainen heikkila and anttila these programs input a sequence of words eg the chair will table the motion and output a sequence of partofspeech tags eg art noun modal verb art nounmost of these programs correctly tag at least 95 of the words with practically no restrictions on the input text and with very modest space and time requirementsperhaps the most important indication of success is that many of these statistical tagging programs are now being used on large volumes of data in a number of different application areas including speech synthesis speech recognition information retrieval sense disambiguation and computational lexicography apparently these programs must be addressing some important needs of the research community or else they would not be as widely cited as they aremany of the papers in this special issue refer to these taggersas in speech recognition data collection efforts have played a pivotal role in advancing dataintensive approaches to partofspeech taggingthe brown corpus and similar efforts within the icame community have created invaluable opportunitiesthe penn treebank is currently being distributed by the acldcithe european corpus initiative plans to distribute similar material in a variety of languageseven greater resources are expected from the linguistic data consortium and the consortium for lexical research is helping to make dictionaries more accessible to the research communityfor information on contacting these organizations see footnote 1many of the tagging programs mentioned above are based on shannon noisy channel modelimagine that a sequence of parts of speech p is presented at the input to the channel and for some crazy reason it appears at the output 
of the channel in a corrupted form, as a sequence of words W. Our job is to determine P given W. By analogy with the noisy channel formulation of the speech recognition problem, the most probable part-of-speech sequence P̂ is given by

P̂ = argmax_P Pr(P) Pr(W | P)

In theory, with the proper choice for the probability distributions Pr(P) and Pr(W | P), this algorithm will perform as well as or better than any possible alternative that one could imagine. Unfortunately, the probability distributions Pr(P) and Pr(W | P) are enormously complex. Pr(W | P), for instance, is a table giving, for every pair W and P of the same length, a number between 0 and 1: the probability that a sequence of words chosen at random from English text and found to have the part-of-speech sequence P will turn out to be the word sequence W. Changing even a single word or part of speech in a long sequence may change this number by many orders of magnitude. However, experience has shown that surprisingly high tagging accuracy can be achieved in practice using very simple approximations to Pr(P) and Pr(W | P). In particular, it is possible to replace Pr(P) by a trigram approximation and to replace Pr(W | P) by an approximation in which each word depends only on its own part of speech:

Pr(P) ≈ ∏_i Pr(p_i | p_{i-2}, p_{i-1})
Pr(W | P) ≈ ∏_i Pr(w_i | p_i)

In these equations p_i is the ith part of speech in the sequence P and w_i is the ith word in W. The parameters of this model, the lexical probabilities Pr(w_i | p_i) and the contextual probabilities Pr(p_i | p_{i-2}, p_{i-1}), are generally estimated by computing various statistics over large bodies of text. One can view the first set of parameters as a dictionary and the second set of parameters as a grammar.

Traditional methods have tended to ignore lexical preferences, which are the single most important source of constraint for part-of-speech tagging and are sufficient by themselves to resolve 90% of the tags. Consider the trivial sentence "I see a bird," where every word is almost unambiguous. In the Brown Corpus, the word "I" appears as a pronoun 5,131 times out of 5,132 (100%), "see" appears as a verb 771 times out of 772, "a" appears as an article 22,938 times out of 22,944, and "bird" appears as a noun 25 times out of 25 (100%). However, in addition to the desired tag, many dictionaries, such as Webster's Ninth New Collegiate Dictionary, also list a number of extremely rare alternatives, as illustrated in Table 3. These alternatives can usually be eliminated on the basis of the statistical preferences, but traditional parsers do not do so and consequently run into serious difficulties. Attempts to eliminate unwanted tags on syntactic grounds have not been very successful. For example, I/noun see/noun a/noun bird/noun cannot be ruled out as syntactically ill-formed, because the parser must accept sequences of four nouns in other situations (city school committee meeting). Apparently, syntactic rules are not nearly as effective as lexical preferences, at least for this application. The tradition of ignoring preferences dates back to Chomsky's introduction of the competence approximation. Recall that Chomsky was concerned that approximations such as Shannon's n-gram approximation, which was very much in vogue at the time, were inappropriate for his needs, and therefore he introduced an alternative with complementary strengths and weaknesses. The competence approximation is more appropriate for modeling long-distance dependences such as agreement constraints and wh-movement, but at the cost of missing certain crucial local constraints, especially the kinds of preferences that are extremely important for part-of-speech tagging.
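To make this model concrete, the following sketch decodes the "I see a bird" example with a Viterbi search over the noisy-channel factorization just described. It is a minimal illustration rather than any of the cited taggers: for brevity it uses a bigram contextual model Pr(p_i | p_{i-1}) instead of the trigram approximation, and the lexical and contextual probabilities (and the start symbol "<s>") are invented placeholders that only loosely echo the Brown Corpus proportions quoted above.

```python
# A minimal Viterbi sketch of the tagging model above -- not any of the cited
# taggers.  For brevity it uses a bigram contextual model Pr(p_i | p_{i-1})
# rather than the trigram approximation in the text, and all probabilities
# below are invented placeholders.

def viterbi(words, tags, lexical, contextual):
    """Return the tag sequence maximizing the product of lexical and contextual probabilities."""
    best = {t: (lexical[words[0]].get(t, 0.0) * contextual["<s>"].get(t, 0.0), [t])
            for t in tags}
    for w in words[1:]:
        new = {}
        for t in tags:
            score, path = max(
                ((s * contextual[prev].get(t, 0.0) * lexical[w].get(t, 0.0), p + [t])
                 for prev, (s, p) in best.items()),
                key=lambda x: x[0])
            new[t] = (score, path)
        best = new
    return max(best.values(), key=lambda x: x[0])[1]

tags = ["PRON", "VERB", "ART", "NOUN"]
lexical = {"i": {"PRON": 0.99, "NOUN": 0.01},
           "see": {"VERB": 0.99, "NOUN": 0.01},
           "a": {"ART": 0.99, "NOUN": 0.01},
           "bird": {"NOUN": 1.0}}
contextual = {"<s>": {"PRON": 0.5, "ART": 0.3, "NOUN": 0.1, "VERB": 0.1},
              "PRON": {"VERB": 0.6, "NOUN": 0.2, "ART": 0.1, "PRON": 0.1},
              "VERB": {"ART": 0.5, "NOUN": 0.3, "PRON": 0.1, "VERB": 0.1},
              "ART": {"NOUN": 0.8, "ART": 0.05, "VERB": 0.05, "PRON": 0.1},
              "NOUN": {"VERB": 0.4, "NOUN": 0.3, "ART": 0.2, "PRON": 0.1}}

print(viterbi("i see a bird".split(), tags, lexical, contextual))   # ['PRON', 'VERB', 'ART', 'NOUN']
```

Estimating the two parameter tables from a tagged corpus by relative frequency, as the text describes, is the only additional step a tagger of this kind needs.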
2.3 Using Statistics to Fit Probabilistic Models to Data. Probabilistic models provide a theoretical abstraction of language, very much like Chomsky's competence model. They are designed to capture the more important aspects of language and ignore the less important aspects, where what counts as important depends on the application. Statistics are often used to estimate the values of the parameters in these probabilistic models. Thus, for example, we might estimate the probability distribution for the word kennedy in the Brown Corpus by modeling the distribution with a binomial, and then use the frequency of kennedy in the Brown Corpus to fit the model to the data.

The classic example of a binomial process is coin tossing. Suppose that the coin comes up heads with probability p; then the probability that it will come up heads exactly m times in n tosses is

C(n, m) p^m (1 − p)^(n − m)

Here C(n, m), which is called the binomial coefficient, is the number of ways the m positions can be chosen from the n coin tosses; it is equal to n! / (m! (n − m)!), where n! is equal to 1 × 2 × ⋯ × n. For example, tossing a fair coin three times will result in 0, 1, 2, and 3 heads with probability 1/8, 3/8, 3/8, and 1/8, respectively. This set of probabilities is called the binomial distribution for n and p. The expected value of the binomial distribution is µ = np and the variance is σ² = np(1 − p). Thus, tossing a fair coin three times will produce an average of 3/2 heads with a variance of 3/4.

How can the binomial be used to model the distribution of kennedy? Let p be the probability that a word chosen at random in English text is kennedy. We can think of a series of words in English text as analogous to tosses of a coin that comes up heads with probability p: the coin is heads if the word is kennedy and tails otherwise. Of course, we do not really know the value of p, but in a sample of n words we should expect to find about np occurrences of kennedy. There are 140 occurrences of kennedy in the Brown Corpus, for which n is approximately 1,000,000. Therefore we can argue that 1,000,000 p must be about 140, and we can make an estimate p̂ of p equal to 140/1,000,000. If we really believe that words in English text come up like heads when we flip a biased coin, then p̂ is the value of p that makes the Brown Corpus as probable as possible. Therefore this method of estimating parameters is called maximum likelihood estimation (MLE). For simple models MLE is very easy to implement and produces reasonable estimates in many cases. More elaborate methods, such as the Good-Turing method or deleted estimation, should be used when the frequencies are small.

It is often convenient to use these statistical estimates as if they were the same as the true probabilities, but this practice can lead to trouble, especially when the data do not fit the model very well. In fact, content words do not fit a binomial very well, because content words tend to appear in "bursts"; that is, content words are like buses in New York City: they are social animals and like to travel in packs. In particular, if the word kennedy appears once in a unit of text, then it is much more likely than chance to appear a second time in the same unit of text. Function words also deviate from the binomial, though for different reasons. These bursts might serve a useful purpose. People seem to be able to use these bursts to speed up reaction times in various tasks; psycholinguists use the term priming to refer to this effect. Bursts might also be useful in a number of practical applications such as information retrieval. There have been a number of attempts over the years to model these bursts. The negative binomial distribution, for example, was explored in considerable detail in the classic study of the authorship of the Federalist Papers (Mosteller and Wallace), a must-read for anyone interested in statistical analyses of large corpora.

We can show that the distribution of kennedy is very bursty in the Brown Corpus by dividing the corpus into k segments and showing that the probability varies radically from one segment to another. For example, if we divide the Brown Corpus into 10 segments of 100,000 words each, we find that the frequency of kennedy is 58, 57, 2, 12, 6, 1, 4, 0, 0, 0. The variance of these 10 numbers is 539. Under the binomial assumption we obtain a very different estimate of the variance: in a sample of n = 100,000 words with a frequency of 140 per million, we would expect a variance of np(1 − p) ≈ 14. The large discrepancy between the empirically derived estimate of the variance and the one based on the binomial assumption indicates that the binomial assumption does not fit the data very well.
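The numbers in the preceding paragraphs are easy to reproduce. The sketch below, a minimal illustration rather than the authors' calculation, fits the binomial to the kennedy counts by maximum likelihood and then contrasts the observed cross-segment variance with the variance the binomial predicts.

```python
# A minimal sketch of the binomial fit and burstiness check described above,
# using the figures quoted in the text: 140 occurrences of kennedy in roughly
# 1,000,000 words of the Brown Corpus, and per-segment counts of
# 58, 57, 2, 12, 6, 1, 4, 0, 0, 0.

from math import comb

def binomial_pmf(m, n, p):
    """Probability of exactly m heads in n tosses of a coin with bias p."""
    return comb(n, m) * p**m * (1 - p)**(n - m)

# Fair coin tossed three times: 1/8, 3/8, 3/8, 1/8.
print([binomial_pmf(m, 3, 0.5) for m in range(4)])

# Maximum likelihood estimate for kennedy.
n_total, count = 1_000_000, 140
p_hat = count / n_total                                   # 0.00014

# Burstiness: observed variance across ten 100,000-word segments
# versus the variance the binomial model predicts.
segment_counts = [58, 57, 2, 12, 6, 1, 4, 0, 0, 0]
mean = sum(segment_counts) / len(segment_counts)
observed_var = sum((c - mean) ** 2 for c in segment_counts) / (len(segment_counts) - 1)
binomial_var = 100_000 * p_hat * (1 - p_hat)              # roughly np for small p

print(round(observed_var, 1), round(binomial_var, 1))     # about 539 versus about 14
```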
When the data do not fit the model very well, we may wish to look for alternative models. Four articles in this special issue propose empirical alternatives to traditional parsing methods based on the competence model. As we have seen, the competence model does not fit the part-of-speech application very well because of the model's failure to capture certain lexical preferences. The model also runs into trouble in a number of other NLP applications. Consider, for example, the problem of deciding between the words form and farm in the OCR application when they appear in the context pure ___: most people would have little difficulty deciding that form was the intended word; neither does an OCR system that employs a trigram language model, because preferences such as collocations fall naturally within the scope of the n-gram approximation. Traditional NLP techniques, on the other hand, fail here because the competence approximation does not capture the crucial collocational constraints.

Lexicographers use the terms collocation, co-occurrence, and lexis to describe various constraints on pairs of words. The words strong and powerful are perhaps the canonical example. Halliday noted that although strong and powerful have similar syntax and semantics, there are contexts where one is much more appropriate than the other. Psycholinguists have a similar concept, which they call word associations. Two frequently cited examples of highly associated words are bread/butter and doctor/nurse. See Palermo and Jenkins for tables of associations measured for 200 words, factored by grade level and sex. In general, subjects respond more quickly to a word such as butter when it follows a highly associated word such as bread: "Some results and implications are summarized from reaction-time experiments in which subjects either classified successive strings of letters as words and nonwords, or pronounced the strings. Both types of response to words were consistently faster when preceded by associated words rather than unassociated words."

These constraints are rarely discussed in computational linguistics because they are not captured very well with traditional NLP techniques, especially those based on the competence approximation. Of course, it is not hard to build computational models that capture at least some of these preferences. Even the trigram model, despite all of its obvious shortcomings, does better than many traditional methods in this regard. The power of the trigram approximation is illustrated in Table 4 for the sentence fragment we need to resolve all of
the important issues selected from a 90 millionword corpus of ibm office correspondenceseach row shows the correct word the rank of the correct word as predicted by the trigram model and then the list of words judged by the trigram model to be more probable than the correct wordthus we is the 9th most probable word to begin a sentenceat this point in the sentence in the absence of any other context the trigram model is as good as any model we could havefollowing we at the beginning of the sentence need is the 7th most probable word ranking behind are will the would also and dohere again the trigram model still accounts for all of the context there is and so should be doing as well as any model canfollowing we need to is the most probable wordalthough by now the trigram model has lost track of the complete context it is still doing very welltable 4 shows that the trigram model captures a number of important frequencybased constraints that would be missed by most traditional parsersfor example the trigram model captures the fact that issues is particularly predictable in the collocation important issuesin general highfrequency function words like to and the which are acoustically short are more predictable than content words like resolve and important which are longerthis is convenient for speech recognition because it means that the language model provides more powerful constraints just when the acoustic model is having the toughest timeone suspects that this is not an accident but rather a natural result of the evolution of speech to fill the human needs for reliable communication in the presence of noisethe ideal nlp model would combine the strengths of both the competence approximation and the ngram approximationone possible solution might be the inside outside algorithm a generalization of the forward backward algorithm that estimates the parameters of a hidden stochastic contextfree grammar rather than a hidden markov modelfour alternatives are proposed in these special issues brent briscoe and carroll hindle and rooth and weischedel et al briscoe and carroll contribution is very much in the spirit of the insideoutside algorithm whereas hindle and rooth contribution for example takes an approach that is much closer to the concerns of lexicography and makes use of preferences involving words rather than preferences that ignore words and focus exclusively on syntactic structureshindle and rooth show how cooccurrence statistics can be used to improve the performance of the parser on sentences such as wanted she placed the dress on the rack put where lexical preferences are crucial to resolving the ambiguity of prepositional phrase attachment hindle and rooth show that a parser can enforce these preferences by comparing the statistical association of the verbpreposition with the association of the objectpreposition when attaching the prepositional phrasethis work is just a first step toward preferencebased parsing an empirically motivated alternative to traditional rational approaches such as atns unification parsers and principlebased parsershow do we decide if one language model is better than anotherin the 1940s shannon defined entropy a measure of the information content of a probabilistic source and used it to quantify such concepts as noise redundancy the capacity of a communication channel and the efficiency of a codethe standard unit of entropy is the bit or binary digitsee bell cleary and witten for a more discussion on entropy section 225 shows how to compute the entropy of a 
model and section 4 discusses how shannon and others have estimated the entropy of englishfrom the point of view of speech recognition or ocr we would like to be able to characterize the size of the search space the number of binary questions that the recognizer will have to answer on average in order to decode a messagecross entropy is a useful yardstick for measuring the ability of a language model to predict a source of dataif the language model is very good at predicting the future output of the source then the cross entropy will be smallno matter how good the language model is though the cross entropy cannot be reduced below a lower bound known as the entropy of the source the cross entropy of the source with itselfone can also think of the cross entropy between a language model and a probabilistic source as the number of bits that will be needed on average to encode a symbol from the source when it is assumed albeit mistakenly that the language model is a perfect probabilistic characterization of the sourcethus there is a close connection between a language model and a coding schemetable 5 below lists a number of coding schemes along with estimates of their cross entropies with english textthe standard ascii code requires 8 bits per characterit would be a perfect code if the source produced each of the 28 256 symbols equally often and independently of contexthowever english is not like thisfor an english source it is possible to reduce the average length of the code by assigning shorter codes to more frequent symbols and longer codes to less frequent symbols using a coding scheme such as a huffman code other codes such as lempelziv and ngram models on words achieve even better compression by taking advantage of context though none of these codes seem to perform as well as people do in predicting the next letter the cross entropy h of a code and a source is given by where pr is the joint probability of a symbol s following a history h given the sourcepr is the conditional probability of s given the history h and the codein the special case of ascii where pr 1256 we can actually carry out the indicated sum and find not surprisingly that ascii requires 8 bits per character in more difficult cases cross entropy is estimated by a sampling proceduretwo independent samples of the source are collected si and 52the first sample si is used to fit the values of the parameters of the code and second sample s2 is used to test the fitfor example to determine the value of 5 bits per character for the huffman code in table 5 we counted the number of times that each of the 256 ascii characters appeared in si a sample of ni characters selected from the wall street journal text distributed by the acldcithese counts were used to determine pr since the huffman code does not depend on hthen we collected a second sample sz of n2 characters and tested the fit with the formula where 52 i is the ith character in the second sampleit is important in this procedure to use two different samples of textif we were to use the same sample for both testing and training we would obtain an overly optimistic estimate of how well the code performsthe other codes in table 5 make better use of context and therefore they achieve better compressionfor example huffman coding on words is more than twice as compact as huffman coding on characters the unigram model is also more than twice as good as lempelziv demonstrating that compress a popular unixtm tool for compressing files could be improved by a factor of two the trigram 
model the method of choice in speech recognition achieves 176 bits per character outperforming the practical alternatives in table 5 but falling half a bit shy of shannon estimate of human performancesomeday parsers might help squeeze out some of this remaining half bit between the trigram model and shannon bound but thus far parsing has had little impactlan i and young for example conducted a number of experiments with stochastic contextfree grammars and concluded that quotmlle experiments on word recognition showed that although scfgs are effective their complex training routine prohibits them from directly replacing the simpler hmmbased recognizersquot they then proceeded to argue quite sensibly that parsers are probably more appropriate for tasks where phrase structure is more directly relevant than in word recognitionin general phrase structure is probably more important for understanding who did what to whom than recognizing what was saidsome tasks are probably more appropriate for chomsky rational approach to language and other tasks are probably more appropriate for shannon empirical approach to languagetable 6 summarizes some of the differences between the two approachesis machine translation more suitable for rationalism or empiricismboth approaches have been investigatedweaver was the first to propose an information theoretic approach to mtthe empirical approach was also practiced at georgetown during the 1950s and 1960s in a system that eventually became known as systranrecently most work in mt has tended to favor rationalism though there are some important exceptions such as examplebased mt the issue remains as controversial as ever as evidenced by the lively debate on rationalism versus empiricism at tmi92 a recent conference on mtthe paper by brown et al revives weaver information theoretic approach to mtit requires a bit more squeezing and twisting to fit machine translation into the noisy channel mold to translate for example from french to english one imagines that the native speaker of french has thought up what he or she wants to say in english and then translates mentally into french before actually saying itthe task of the translation system is to recover the original english e from the observed french f while this may seem a bit farfetched it differs little in principle from using english as an interlingua or as a meaning representation languageas before one minimizes one chance of error by choosing e according to the formula argmaxpr pr as before the parameters of the model are estimated by computing various statistics over large samples of textthe prior probability pr is estimated in exactly the same way as discussed above for the speech recognition applicationthe parameters of the channel model pr are estimated from a parallel text that has been aligned by an automatic procedure that figures out which parts of the source text correspond to which parts of the target textsee brown et al for more details on the estimation of the parametersthe information theoretic approach to mt may fail for reasons advanced by chomsky and others in the 1950sbut regardless of its ultimate success or failure there is a growing community of researchers in corpusbased linguistics who believe that it will produce a number of lexical resources that may be of great valuein particular there has been quite a bit of discussion of bilingual concordances recently including the 1990 and 1991 lexicography conferences sponsored by oxford university press and waterloo universitya bilingual 
concordance is like a monolingual concordance except that each line in the concordance is followed by a line of text in a second languagethere are also some hopes that the approach might produce tools that could be useful for human translators there are three papers in these special issues on aligning bilingual texts such as the canadian hansards that are available in both english and french brown et al gale and church and kay and rosenschein warwickarmstrong and russell have also been interested in the alignment problem except for brown et al this work is focused on the less controversial applications in lexicography and human translation rather than mtthere has been a long tradition of empiricist approaches in lexicography both bilingual and monolingual dating back to johnson and murrayas corpus data and machinereadable dictionaries become more and more available it is becoming easier to compile lexicons for computers and dictionaries for peoplethis is a particularly exciting area in computational linguistics as evidenced by the large number of contributions in these special issues biber brent hindle and rooth pustejovsky et al and smadja starting with the cobuild dictionary it is now becoming more and more common to find lexicographers working directly with corpus datasinclair makes an excellent case for the use of corpus evidence in the preface to the cobuild dictionary for the first time a dictionary has been compiled by the thorough examination of a representative group of english texts spoken and written running to many millions of wordsthis means that in addition to all the tools of the conventional dictionary makerswide reading and experience of english other dictionaries and of course eyes and earsthis dictionary is based on hard measurable evidence the experience of writing the cobuild dictionary is documented in sinclair a collection of articles from the cobuild project see boguraev for a strong positive review of this collectionat the time the corpusbased approach to lexicography was considered pioneering even somewhat controversial today quite a number of the major lexicography houses are collecting large amounts of corpus datathe traditional alternative to corpora are citation indexes boxes of interesting citations collected on index cards by large numbers of human readersunfortunately citation indexes tend to be a bit like butterfly collections full of rare and unusual specimens but severely lacking in ordinary gardenvariety mothsmurray the editor of the oxford english dictionary complained the editor or his assistants have to search for precious hours for examples of common words which readers passed bythus of abus ion we found in the slips about 50 instances of abuse not five he then went on to say quotthere was not a single quotation for imaginable a word used by chaucer sir thomas more and miltonquot from a statistical point of view citation indexes have serious sampling problems they tend to produce a sample that is heavily skewed away from the quotcentral and typicalquot facts of the language that every speaker is expected to knowlarge corpus studies such as the cobuild dictionary offer the hope that it might be possible to base a dictionary on a large and representative sample of the language as it is actually usedideally we would like to use a large and representative sample of general language but if we have to choose between large and representative which is more importantthere was a debate on a similar question between prof john sinclair and sir randolf quirk at 
the 1991 lexicography conference sponsored by oxford university press and waterloo university where the house voted perhaps surprisingly that a corpus does not need to be balancedalthough the house was probably predisposed to side with quirk position sinclair was able to point out a number of serious problems with the balancing positionit may not be possible to properly balance a corpusand moreover if we insist on throwing out idiosyncratic data we may find it very difficult to collect any data at all since all corpora have their quirksin some sense the question comes down to a tradeoff between quality and quantityamerican industrial laboratories tend to favor quantity whereas the bnc nerc and many dictionary publishers especially in europe tend to favor qualitythe paper by biber argues for quality suggesting that we ought to use the same kinds of sampling methods that statisticians use when studying the economy or predicting the results of an electionpoor sampling methods inappropriate assumptions and other statistical errors can produce misleading results quotthere are lies damn lies and statisticsquot unfortunately sampling methods can be expensive it is not clear whether we can justify the expense for the kinds of applications that we have in mindtable 7 might lend some support for the quantity position for murray example of imaginablenote that there is plenty of evidence in the larger corpora but not in the smaller onesthus it would appear that quotmore data are better dataquot at least for the purpose of finding exemplars of words like imaginablesimilar comments hold for collocation studies as illustrated in table 8 which shows mutual information values for several collocations in a number of different corporamutual information compares the probability of observing word x and word y together to the probability of observing x and y independently most of the mutual information values in table 8 are much larger than zero indicating as we would hope that the collocations appear much more often in these corpora than one would expect by chancethe probabilities pr and pr are estimated by counting the number of observations of x and y in a corpus f and f respectively and normalizing by n the size of the corpusthe joint probability pr is estimated by counting the number of times that x is immediately followed by y in the corpus f and normalizing by n unfortunately mutual information values become unstable if the counts are too smallfor this reason small counts are shown in parenthesesa dash is used when there is no evidence for the collocationlike table 7 table 8 also shows that quotmore data are better dataquot that is there is plenty of evidence in the larger corpora but not in the smaller onesquotonly a large corpus of natural language enables us to identify recurring patterns in the language and to observe collocational and lexical restrictions accuratelyquot however in order to make use of this evidence we have to find ways to compensate for the obvious problems of working with unbalanced datafor example in the canadian hansards there are a number of unwanted phrases such as quothouse of commonsquot quotfree trade agreementquot quothonour and duty to presentquot and quothearhearquot fortunately though it is extremely unlikely that these unwanted phrases will appear much more often than chance across a range of other corpora such as department of energy abstracts or the associated press newsif such a phrase were to appear relatively often across a range of such diverse corpora then it is 
probably worthy of further investigationthus it is not required that the corpora be balanced but rather that their quirks be uncorrelated across a range of different corporathis is a much weaker and more realistic requirement than the more standard practice of balancing and purging quirksstatistics can be used for many different purposestraditionally statistics such as student ttests were developed to test a particular hypothesisfor example suppose that we were concerned that strong enough should not be considered a collocationa ttest could be used to compare the hypothesis that strong enough appears too often to be a fluke against the null hypothesis that the observations can be attributed to chancethe tscore compares the two hypotheses by taking the difference of the means of the two probability distributions and normalizing appropriately by the variances so that the result can be interpreted as a number of standard deviationstheoretically if the tscore is larger than 165 standard deviations then we ought to believe that the cooccurrences are significant and we can reject the null hypothesis with 95 confidence though in practice we might look for a tscore of 2 or more standard deviations since tscores are often inflated see dunning for a critique of the assumption that the probabilities are normally distributed and an alternative parameterization of the probability distributionsin the brown corpus it happens that f 11 f 194 f 426 and n 1 181 041using these values we estimate t 33 which is larger than 165 and therefore we can confidently reject the null hypothesis and conclude that the cooccurrence is significantly larger than chancethe estimation uses the approximation a2 f n2 which can be justified under appropriate binomial assumptionsit is also assumed that a2 pr is very small and can be omittedalthough statistics are often used to test a particular hypothesis as we have just seen statistics can also be used to explore the space of possible hypotheses or to discover new hypotheses see tukey and mosteller and tukey for two textbooks on exploratory data analysis and jelinek for a very nice review paper on selforganizing statisticsboth the exploratory and selforganizing views are represented in these special issuespustejovsky et al use an eda approach to investigate certain questions in lexical semanticsbrent in contrast adopts a selforganizing approach to identify subcategorization featurestable 9 shows how the tscore can be used in an exploratory mode to extract large numbers of words from the associated press news that cooccur more often with strong than with powerful and vice versait is an interesting question whether collocations are simply idiosyncratic as halliday and many others have generally assumed hypothesized that strong is an intrinsic quality whereas powerful is an extrinsic onethus for example any worthwhile politician or because can expect strong supporters who are enthusiastic convinced vociferous etc but far more valuable are powerful supporters who will bring others with themthey are also according to the ap news much rareror at any rate much less often mentionedthis is a fascinating hypothesis that deserves further investigationsummary statistics such as mutual information and tscores may have an important role to play in helping lexicographers to discover significant patterns of collocations though the position remains somewhat controversialsome lexicographers prefer mutual information some prefer tscores and some are unconvinced that either of them is any 
goodchurch et al argued that different statistics have different strengths and weaknesses and that it requires human judgment and exploration to decide which statistic is best for a particular problemothers such as jelinek would prefer a selforganizing approach where there is no need for human judgmentthe flourishing renaissance of empiricism in computational linguistics grew out of the experience of the speech recognition community during the 1970s and 1980smany of the same statistical techniques entropy mutual information student tscore have appeared in one form or another often first in speech and then soon thereafter in languagemany of the same researchers have applied these methods to a variety of application areas ranging from language modeling for noisy channel applications to partofspeech tagging parsing translation lexicography text compression and information retrieval empiricism is of course a very old traditionback in the 1950s and 1960s long before the speech work of the 1970s and 1980s there was skinner behaviorism in psychology shannon information theory in electrical engineering and harris distributional hypothesis in american linguistics and the firthian approach in british linguistics it is possible that much of this work was actually inspired by turing codebreaking efforts during world war ii but we may never know for sure given the necessity for secrecythe recent revival in empiricism has been fueled by three developmentsfirst computers are much more powerful and more available than they were in the 1950s when empiricist ideas were first applied to problems in language or in the 1970s and 1980s when dataintensive methods were too expensive for researchers working in universitiessecond data have become much more available than ever beforeas a result of a number of data collection and related efforts such as acldci bnc clr ed edr ldc icame nerc and tei most researchers should now be able to make use of a number of very respectable machinereadable dictionaries and text corporadataintensive methods are no longer restricted to those working in affluent industrial laboratoriesthird and perhaps most importantly due to various political and economic changes around the world there is a greater emphasis these days on deliverables and evaluationdata collection efforts have been relatively successful in responding to these pressures by delivering massive quantities of datatext analysis has also prospered because of its tradition of evaluating performance with theoretically motivated numerical measures such as entropy
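As an illustration of the entropy-based evaluation just mentioned, the sketch below estimates cross entropy in the two-sample fashion described earlier in this section: a character model is fit on one sample and measured, in bits per character, on a second, independent sample. It is a minimal sketch rather than the procedure behind Table 5; the two short strings merely stand in for the two Wall Street Journal samples, and a small floor probability is assumed for characters unseen in training.

```python
# A minimal sketch (not the procedure behind Table 5) of the two-sample
# cross-entropy estimate described above: fit a character model on sample S1,
# then measure bits per character on an independent sample S2.  The strings
# below stand in for the two Wall Street Journal samples, and a small floor
# probability is assumed for unseen characters.

from collections import Counter
from math import log2

s1 = "the quick brown fox jumps over the lazy dog "   # training sample
s2 = "the lazy dog naps under the brown fox "         # independent test sample

counts = Counter(s1)
total = sum(counts.values())
prob = {c: counts[c] / total for c in counts}         # unigram character model

bits_per_char = -sum(log2(prob.get(c, 1e-6)) for c in s2) / len(s2)
print(round(bits_per_char, 2), "bits per character")  # crude: real estimates use far more data
```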
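The two collocation statistics discussed in this section, mutual information and the t-score, are likewise straightforward to compute from corpus counts. The sketch below is a minimal illustration, not the authors' code: the mutual information call uses invented placeholder counts, while the t-score call reuses the strong enough figures quoted above (f(strong enough) = 11, f(strong) = 194, f(enough) = 426, N = 1,181,041) and reproduces the value of roughly 3.3.

```python
# Minimal sketches (not the authors' code) of the two collocation statistics
# discussed above.  mutual_information uses invented placeholder counts;
# t_score reuses the Brown Corpus figures for "strong enough" quoted in the
# text, with the variance approximations stated there.

from math import log2, sqrt

def mutual_information(f_xy, f_x, f_y, n):
    """I(x, y) = log2( Pr(x, y) / (Pr(x) Pr(y)) ), probabilities estimated from counts."""
    return log2((f_xy / n) / ((f_x / n) * (f_y / n)))

def t_score(f_xy, f_x, f_y, n):
    """t = (Pr(x, y) - Pr(x) Pr(y)) / sqrt(var), with var approximated as f(x, y) / n**2."""
    return ((f_xy / n) - (f_x / n) * (f_y / n)) / sqrt(f_xy / n**2)

# Hypothetical counts for some collocation in a 1,000,000-word corpus.
print(round(mutual_information(f_xy=30, f_x=2_000, f_y=500, n=1_000_000), 2))

# "strong enough" in the Brown Corpus: f(x,y)=11, f(x)=194, f(y)=426, N=1,181,041.
print(round(t_score(f_xy=11, f_x=194, f_y=426, n=1_181_041), 1))   # about 3.3
```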
J93-1001
Introduction to the Special Issue on Computational Linguistics Using Large Corpora. A historical account of this empirical renaissance is provided in this work. Much recent research in the field of natural language processing has focused on an empirical, corpus-based approach.
generalized probabilistic lr parsing of natural language with unificationbased grammars the first issue to consider is what the analysis will be used for and what constraints this places on its form the corpus analysis literature contains a variety of proposals ranging from partofspeech tagging to assignment of a unique sophisticated syntactic analysis our eventual goal is to recover a semantically and pragmatically appropriate syntactic analysis capable of supporting semantic interpretation two stringent requirements follow immediately firstly the analyses assigned must determinately represent the syntactic relations that hold between all constituents in the input secondly they be drawn from an priori wellformed set of possible syntactic analyses otherwise semantic interpretation of the resultant analyses cannot be guaranteed to be unambiguous and the semantic operations defined cannot be guaranteed to match and yield an interpretation these requirements immediately suggest that approaches that recover only lexical tags or a syntactic analysis that is the closest fit to some previously defined set of possible analyses are inadequate pioneering approaches to corpus analysis proceeded on the assumption that computationally tractable generative grammars of sufficiently general coverage could not be developed however the development of widecoverage declarative and computationally tractable grammars makes this assumption questionable for example the anlt word and sentence grammar consists of an english lexicon of approximately 40000 lexemes and a compiled fixedarity term unification grammar containing around 700 phrase structure rules taylor grover and briscoe demonstrate that an earlier version of this grammar was capable of assigning the correct analysis to 968 of a corpus of 10000 noun phrases extracted from a variety of corpora however although taylor and show that the anlt grammar very wide coverage they abstract away from issues of lexical idiosyncrasy by formimg equivalence classes of noun phrases and parsing a single token of each class and they do not address the issues of 1 tuning a grammar to a particular corpus or sublanguage 2 selecting the correct analysis from the set licensed by the grammar and 3 providing reliable analyses of input outside the coverage of the grammar firstly it is clear that vocabulary idiom and conventionalized constructions used in say legal language and dictionary definitions will differ both in terms of the range and frequency of words and constructions deployed secondly church and patil demonstrate that for a realistic grammar parsing realistic input the set of possible analyses licensed by the grammar can be in the thousands finally it is extremely unlikely that any generative grammar will ever be capable of correctly analyzing all naturally occurring input even when tuned for a particular corpus or sublanguage this paper we describe our to the first and second problems and make some preliminary remarks concerning the third problem our apto grammar tuning is based on a semiautomatic parsing phase which additions to the grammar are made manually and statistical information concerning the frequency of use of grammar rules is acquired using this statistical information and modified grammar a breadthfirst probabilistic parser is constructed the latter is capable of ranking the possible parses identified by the grammar in a useful manner however sentences whose correct analysis is outside the coverage of the grammar reriain a problem the feasibility and 
usefulness of our approach has been investigated in a preliminary way by analyzing a small corpus of 26 ted briscoe and john carroll generalized probabilistic lr parsing definitions drawn from the dictionary of contemporary english this corpus was chosen because the vocabulary employed is restricted average definition length is about 10 words and each definition is independent allowing us to ignore phenomena such as ellipsis in addition the language of definitions represents a recognizable sublanguage allowing us to explore the task of tuning a general purpose grammar the results reported below suggest that probabilistic information concerning the frequency of occurrence of syntactic rules correlates in a useful way with the semantically and pragmatically most plausible analysis in section 2 we briefly review extant work on probabilistic approaches to corpus analysis and parsing and argue the need for a more refined probabilistic model to distinguish distinct derivations section 3 discusses work on lr parsing of natural language and presents our technique for automatic construction of lr parsers for unificationbased grammars section 4 presents the method and results for constructing a lalr parse table for the anlt grammar and discusses these in the light of both computational complexity and other empirical results concerning parse table size and construction time section 5 motivates our interactive and incremental approach to semiautomatic production of a disambiguated training corpus and describes the variant of the lr parser used for this task section 6 describes our implementation of a breadthfirst lr parser and compares its performance empirically to a highly optimized chart parser for the same grammar suggesting that lr parsing is more efficient in practice for the anlt grammar despite exponential worst case complexity results section 7 explains the technique we employ for deriving a probabilistic version of the lr parse table from the training corpus and demonstrates that this leads to a more refined and parsecontextdependent probabilistic model capable of distinguishing derivations that in a probabilistic contextfree model would be equally probable section 8 describes and presents the results of our first experiment parsing ldoce noun definitions and section 9 draws some preliminary conclusions and outlines ways in which the work described should be modified and extended 2 probabilistic approaches to parsing in the field of speech recognition statistical techniques based on hidden markov mod we describe work toward the construction of a very widecoverage probabilistic parsing system for natural language based on lr parsing techniquesthe system is intended to rank the large number of syntactic analyses produced by nl grammars according to the frequency of occurrence of the individual rules deployed in each analysiswe discuss a fully automatic procedure for constructing an lr parse table from a unificationbased grammar formalism and consider the suitability of alternative lalr parse table construction methods for large grammarsthe parse table is used as the basis for two parsers a userdriven interactive system that provides a computationally tractable and laborefficient method of supervised training of the statistical information required to drive the probabilistic parserthe latter is constructed by associating probabilities with the lr parse table directlythis technique is superior to parsers based on probabilistic lexical tagging or probabilistic contextfree grammar because it 
allows for a more contextdependent probabilistic language model as well as use of a more linguistically adequate grammar formalismwe compare the performance of an optimized variant of tomita generalized lr parsing algorithm to an chart parserwe report promising results of a pilot study training on 150 noun definitions from the longman dictionary of contemporary english and retesting on these plus a further 55 definitionsfinally we discuss limitations of the current system and possible extensions to deal with lexical frequency of occurrencethe task of syntactically analyzing substantial corpora of naturally occurring text and transcribed speech has become a focus of recent workanalyzed corpora would be of great benefit in the gathering of statistical data regarding language use for example to train speech recognition devices in more general linguistic research and as a first step toward robust widecoverage semantic interpretationthe alvey natural language tools system is a widecoverage lexical morphological and syntactic analysis system for english previous work has demonstrated that the anlt system is in principle able to assign the correct parse to a high proportion of english noun phrases drawn from a variety of corporathe goal of the work reported here is to develop a practical parser capable of returning probabilistically highly ranked analyses for material drawn from a specific corpus on the basis of minimal training and manual modificationthe first issue to consider is what the analysis will be used for and what constraints this places on its formthe corpus analysis literature contains a variety of proposals ranging from partofspeech tagging to assignment of a unique sophisticated syntactic analysisour eventual goal is to recover a semantically and pragmatically appropriate syntactic analysis capable of supporting semantic interpretationtwo stringent requirements follow immediately firstly the analyses assigned must determinately represent the syntactic relations that hold between all constituents in the input secondly they must be drawn from an a priori defined wellformed set of possible syntactic analyses otherwise semantic interpretation of the resultant analyses cannot be guaranteed to be unambiguous and the semantic operations defined cannot be guaranteed to match and yield an interpretationthese requirements immediately suggest that approaches that recover only lexical tags or a syntactic analysis that is the closest fit to some previously defined set of possible analyses are inadequate pioneering approaches to corpus analysis proceeded on the assumption that computationally tractable generative grammars of sufficiently general coverage could not be developed however the development of widecoverage declarative and computationally tractable grammars makes this assumption questionablefor example the anlt word and sentence grammar consists of an english lexicon of approximately 40000 lexemes and a compiled fixedarity term unification grammar containing around 700 phrase structure rulestaylor grover and briscoe demonstrate that an earlier version of this grammar was capable of assigning the correct analysis to 968 of a corpus of 10000 noun phrases extracted from a variety of corporahowever although taylor grover and briscoe show that the anlt grammar has very wide coverage they abstract away from issues of lexical idiosyncrasy by formimg equivalence classes of noun phrases and parsing a single token of each class and they do not address the issues of 1 tuning a grammar to a 
particular corpus or sublanguage 2 selecting the correct analysis from the set licensed by the grammar and 3 providing reliable analyses of input outside the coverage of the grammarfirstly it is clear that vocabulary idiom and conventionalized constructions used in say legal language and dictionary definitions will differ both in terms of the range and frequency of words and constructions deployedsecondly church and patil demonstrate that for a realistic grammar parsing realistic input the set of possible analyses licensed by the grammar can be in the thousandsfinally it is extremely unlikely that any generative grammar will ever be capable of correctly analyzing all naturally occurring input even when tuned for a particular corpus or sublanguage in this paper we describe our approach to the first and second problems and make some preliminary remarks concerning the third problemour approach to grammar tuning is based on a semiautomatic parsing phase during which additions to the grammar are made manually and statistical information concerning the frequency of use of grammar rules is acquiredusing this statistical information and modified grammar a breadthfirst probabilistic parser is constructedthe latter is capable of ranking the possible parses identified by the grammar in a useful mannerhowever sentences whose correct analysis is outside the coverage of the grammar reriain a problemthe feasibility and usefulness of our approach has been investigated in a preliminary way by analyzing a small corpus of noun definitions drawn from the longman dictionary of contemporary english this corpus was chosen because the vocabulary employed is restricted average definition length is about 10 words and each definition is independent allowing us to ignore phenomena such as ellipsisin addition the language of definitions represents a recognizable sublanguage allowing us to explore the task of tuning a general purpose grammarthe results reported below suggest that probabilistic information concerning the frequency of occurrence of syntactic rules correlates in a useful way with the semantically and pragmatically most plausible analysisin section 2 we briefly review extant work on probabilistic approaches to corpus analysis and parsing and argue the need for a more refined probabilistic model to distinguish distinct derivationssection 3 discusses work on lr parsing of natural language and presents our technique for automatic construction of lr parsers for unificationbased grammarssection 4 presents the method and results for constructing a lalr parse table for the anlt grammar and discusses these in the light of both computational complexity and other empirical results concerning parse table size and construction timesection 5 motivates our interactive and incremental approach to semiautomatic production of a disambiguated training corpus and describes the variant of the lr parser used for this tasksection 6 describes our implementation of a breadthfirst lr parser and compares its performance empirically to a highly optimized chart parser for the same grammar suggesting that lr parsing is more efficient in practice for the anlt grammar despite exponential worst case complexity resultssection 7 explains the technique we employ for deriving a probabilistic version of the lr parse table from the training corpus and demonstrates that this leads to a more refined and parsecontextdependent probabilistic model capable of distinguishing derivations that in a probabilistic contextfree model would be equally 
probablesection 8 describes and presents the results of our first experiment parsing ldoce noun definitions and section 9 draws some preliminary conclusions and outlines ways in which the work described should be modified and extendedin the field of speech recognition statistical techniques based on hidden markov modeling are well established the two main algorithms utilized are the viterbi algorithm and the baumwelch algorithm these algorithms provide polynomial solutions to the tasks of finding the most probable derivation for a given input and a stochastic regular grammar and of performing iterative reestimation of the parameters of a stochastic regular grammar by considering all possible derivations over a corpus of inputs respectivelybaker demonstrates that baumwelch reestimation can be extended to contextfree grammars in chomsky normal form fujisaki et al demonstrate that the viterbi algorithm can be used in conjunction with the cyk parsing algorithm and a cfg in cnf to efficiently select the most probable derivation of a given inputkupiec extends baumwelch reestimation to arbitrary cfgsbaumwelch reestimation can be used with restricted or unrestricted grammarsmodels in the sense that some of the parameters corresponding to possible productions over a given terminal category setset of states can be given an initial probability of zerounrestricted grammarsmodels quickly become impractical because the number of parameters requiring estimation becomes large and these algorithms are polynomial in the length of the input and number of free parameterstypically in applications of markov modeling in speech recognition the derivation used to analyze a given input is not of interest rather what is sought is the best model of the inputin any application of these or similar techniques to parsing though the derivation selected is of prime interestbaum proves that baumwelch reestimation will converge to a local optimum in the sense that the initial probabilities will be modified to increase the likelihood of the corpus given the grammar and tabilize within some threshold after a number of iterations over the training corpushowever there is no guarantee that the global optimum will be found and the a priori initial probabilities chosen are critical for convergence on useful probabilities the main application of these techniques to written input has been in the robust lexical tagging of corpora with partofspeech labels fujisaki et al describe a corpus analysis experiment using a probabilistic cnf cfg containing 7550 rules on a corpus of 4206 sentences the unsupervised training process involved automatically assigning probabilities to each cf rule on the basis of their frequency of occurrence in all possible analyses of each sentence of the corpusthese probabilities were iteratively reestimated using a variant of the baumwelch algorithm and the viterbi algorithm was used in conjunction with the cyk parsing algorithm to efficiently select the most probable analysis after trainingthus the model was restricted in that many of the possible parameters defined over the terminal category set were initially set to zero and training was used only to estimate new probabilities for a set of predefined rulesfujisaki et al suggest that the stable probabilities will model semantic and pragmatic constraints in the corpus but this will only be so if these correlate with the frequency of rules in correct analyses and also if the noise in the training data created by the incorrect parses is effectively factored 
outwhether this is so will depend on the number of false positive examples with only incorrect analyses the degree of heterogeneity in the training corpus and so forthfujisaki et al report some results based on testing the parser on the corpus used for trainingin 72 out of 84 sentences examined the most probable analysis was also the correct analysisof the remainder 6 were false positives and did not receive a correct parse while the other 6 did but it was not the most probablea success rate of 85 is apparently impressive but it is difficult to evaluate properly in the absence of full details concerning the nature of the corpusfor example if the corpus contains many simple and similar constructions unsupervised training is more likely to converge quickly on a useful set of probabilitiessharman jelinek and mercer conducted a similar experiment with a grammar in idlp format idlp grammars separate the two types of information encoded in cf rulesimmediate dominance and immediate precedenceinto two rule types that together define a cfgthis allows probabilities concerning dominance associated with id rules to be factored out from those concerning precedence associated with lp rulesin this experiment a supervised training regime was employeda grammar containing 100 terminals and 16 nonterminals and initial probabilities based on the frequency of id and lp relations was extracted from a manually parsed corpus of about one million words of textthe resulting probabilistic idlp grammar was used to parse 42 sentences of 30 words or less drawn from the same corpusin addition lexical syntactic probabilities were integrated with the probability of the idlp relations to rank parseseighteen of the parses were identical to the original manual analyses while a further 19 were imilar yielding a success rate of 88what is noticeable about this experiment is that the results are no better than fujisaki et al unsupervised training experiment discussed above despite the use of supervised training and a more sophisticated grammatical modelit is likely that these differences derive from the corpus material used for training and testing and that the results reported by fujisaki et al will not be achieved with all corporapereira and schabes report an experiment using baumwelch reestimation to infer a grammar and associated rule probabilities from a category set containing 15 nonterminals and 48 terminals corresponding to the penn treebank lexical tagset the training data was 770 sentences represented as tag sequences drawn from the treebankthey trained the system in an unsupervised mode and also in a emisupervised mode in which the manually parsed version of the corpus was used to constrain the set of analyses used during reestimationin supervised training analyses were accepted if they produced bracketings consistent but not necessarily identical with those assigned manuallythey demonstrate that in supervised mode training not only converges faster but also results in a grammar in which the most probable analysis is compatible with the manually assigned analysis of further test sentences drawn from the tree bank in a much greater percentage of cases78 as opposed to 35this result indicates very clearly the importance of supervised training particularly in a context where the grammar itself is being inferred in addition to the probability of individual rulesin our work we are concerned to utilize the existing widecoverage anlt grammar therefore we have concentrated initially on exploring how an adequate probabilistic 
model can be derived for a unification-based grammar and trained in a supervised mode to effectively select useful analyses from the large space of syntactically legitimate possibilities.

There are several inherent problems with probabilistic CFG-based systems. Firstly, although CFG is an adequate model of the majority of constructions occurring in natural language, it is clear that wide-coverage CFGs will need to be very large indeed, and this will lead to difficulties in the development of consistent grammars and possibly to computational intractability at parse time. Secondly, associating probabilities with CF rules means that information about the probability of a rule applying at a particular point in a parse derivation is lost. This leads to complications in distinguishing the probability of different derivations when the same rule can be applied several times in more than one way. Grammar 1, below, is an example of a probabilistic CFG in which each production is associated with a probability and the probabilities of all rules expanding a given nonterminal category sum to one. The probability of a particular parse is the product of the probabilities of each rule used in the derivation; thus the probability of parse A in Figure 1 is 0.0336. The probability of parses B and C must be identical, though, because the same rule is applied twice in each case. Similarly, the probability of D and E is also identical, for essentially the same reason. However, these rules are natural treatments of noun compounding and prepositional phrase attachment in English, and the different derivations correlate with different interpretations. For example, B would be an appropriate analysis for toy coffee grinder, while C would be appropriate for cat food tin, and each of D and E yields one of the two possible interpretations of the man in the park with the telescope. We want to keep these structural configurations probabilistically distinct in case there are structurally conditioned differences in their frequency of occurrence, as would be predicted, for example, by the theory of parsing strategies. Fujisaki et al. propose a rather inelegant solution for the noun compound case, which involves creating 5582 instances of 4 morphosyntactically identical rules for classes of word forms with distinct bracketing behavior in noun-noun compounds. However, we would like to avoid enlarging the grammar and, eventually, to integrate probabilistic lexical information with probabilistic structural information in a more modular fashion. Probabilistic CFGs also will not model the context dependence of rule use; for example, an NP is more likely to be expanded as a pronoun in subject position than elsewhere, but only one global probability can be associated with the relevant CF production. Thus the probabilistic CFG model predicts that A and F will have the same probability of occurrence.
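The point can be made concrete with a small Python sketch. The rule names and probabilities below are illustrative and are not the actual contents of Grammar 1; the sketch simply shows that under a probabilistic CFG any two derivations built from the same multiset of rules, such as the left- and right-branching analyses of a noun compound, necessarily receive the same score.

    # Illustrative rule probabilities (not the actual values of Grammar 1);
    # the probabilities of all rules expanding a given nonterminal sum to one.
    RULE_PROBS = {
        "S -> NP VP": 1.0,
        "NP -> det N1": 0.5, "NP -> pron": 0.3, "NP -> NP PP": 0.2,
        "N1 -> n": 0.6, "N1 -> N1 N1": 0.4,
        "VP -> vi": 0.5, "VP -> vt NP": 0.3, "VP -> VP PP": 0.2,
        "PP -> p NP": 1.0,
    }

    def derivation_prob(rules_used):
        """Probability of a derivation under a probabilistic CFG: the product
        of the probabilities of the rules used, so two derivations built from
        the same multiset of rules receive the same score."""
        p = 1.0
        for r in rules_used:
            p *= RULE_PROBS[r]
        return p

    # Left- and right-branching analyses of a three-noun compound use the same
    # rules the same number of times, so both score 0.6**3 * 0.4**2 (about 0.035).
    left  = ["N1 -> N1 N1", "N1 -> N1 N1", "N1 -> n", "N1 -> n", "N1 -> n"]
    right = ["N1 -> N1 N1", "N1 -> n", "N1 -> N1 N1", "N1 -> n", "N1 -> n"]
    assert derivation_prob(left) == derivation_prob(right)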
These considerations suggest that we need a technique that allows use of a more adequate grammatical formalism than CFG and a more context-dependent probabilistic model. Our approach is to use the LR parsing technique as a natural way to obtain a finite-state representation of a non-finite-state grammar, incorporating information about parse context. In the following sections we introduce the LR parser, and in Section 8 we demonstrate that LR parse tables do provide an appropriate amount of contextual information to solve the problems described above.

The heart of the LR parsing technique is the parse table construction algorithm, which is the most complex and computationally expensive aspect of LR parsing. Much of the attraction of the technique stems from the fact that the real work takes place in a precompilation phase, and the run-time behavior of the resulting parser is relatively simple and directed. An LR parser finds the rightmost derivation in reverse for a given string and CF grammar. The precompilation process results in a parser control mechanism that enables the parser to identify the handle, or appropriate substring in the input to reduce, and the appropriate rule of the grammar with which to perform the reduction. The control information is standardly encoded as a parse table, with rows representing parse states and columns representing terminal and nonterminal symbols of the grammar; this representation defines a finite-state automaton. Figure 2 gives the LALR parse table for Grammar 1; LALR is the most commonly used variant of LR, since it usually provides the best tradeoff between directed rule invocation and parse table size. If the grammar is in the appropriate LR class, the automaton will be deterministic; however, some algorithms for parse table construction are also able to build nondeterministic automata containing action conflicts for ambiguous CFGs. Parse table construction is discussed further in Section 4.

Tomita describes a system for nondeterministic LR parsing of context-free grammars consisting of atomic categories in which each CF production may be augmented with a set of tests: at parse time, whenever a sequence of constituents is about to be reduced into a higher-level constituent using a production, the augmentation associated with the production is invoked to check syntactic or semantic constraints such as agreement, pass attribute values between constituents, and construct a representation of the higher-level constituent. The parser is driven by an LR parse table; however, the table is constructed solely from the CF portion of the grammar, and so none of the extra information embodied in the augmentations is taken into account during its construction. Thus the predictive power of the parser to select the appropriate rule given a specific parse history is limited to the CF portion of the grammar, which must be defined manually by the grammar writer. This requirement places a greater load on the grammar writer and is inconsistent with most recent unification-based grammar formalisms, which represent grammatical categories entirely as feature bundles. In addition, it violates the principle that grammatical formalisms should be declarative and defined independently of parsing procedure, since different definitions of the CF portion of the grammar will at least affect the efficiency of the resulting parser and might in principle lead to nontermination on certain inputs, in a manner similar to that described by Shieber.

In what follows we will assume that the unification-based grammars we are considering are represented in the ANLT object grammar formalism. This formalism is a notational variant of Definite Clause Grammar in which rules consist of a mother category and one or more daughter categories defining possible phrase structure configurations. Categories consist of sets of feature name-value pairs, with the possibility of variable values, which may be bound within a rule, and of category-valued features. Categories are combined using fixed-arity term unification. The results and techniques we report below should generalize to many other unification-based formalisms. An example of a possible ANLT object grammar rule is one that provides an analysis of the structure of English clauses corresponding to S -> NP VP, using a feature system based loosely on that of GPSG.
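To fix intuitions, the following Python sketch shows the kind of feature-bundle unification involved. The dictionary encoding of categories and the "?"-prefixed variables are conventions of the sketch only, not the ANLT system's actual fixed-arity term representation, and variable binding is handled only in a simplified way.

    # A category is a dict of feature name -> value; values beginning with "?"
    # are variables.  unify() returns a combined category, or None if the two
    # categories clash on an atomic value.  Bindings are propagated within a
    # single call, but this is a simplification of full term unification.
    def unify(c1, c2):
        bindings, result = {}, {}
        for f in set(c1) | set(c2):
            v1, v2 = c1.get(f), c2.get(f)
            v1 = bindings.get(v1, v1)
            v2 = bindings.get(v2, v2)
            if v1 is None or str(v1).startswith("?"):
                chosen = v2 if v2 is not None else v1
                if v1 is not None and v2 is not None:
                    bindings[v1] = v2
            elif v2 is None or str(v2).startswith("?"):
                chosen = v1
                if v2 is not None:
                    bindings[v2] = v1
            elif v1 == v2:
                chosen = v1
            else:
                return None          # clash on an atomic value: not unifiable
            if chosen is not None:
                result[f] = chosen
        return result

    # e.g. a schematic NP daughter of a clause rule unifying with a lexical NP
    subject = {"N": "+", "V": "-", "BAR": "2", "PER": "?x", "PLU": "?y", "CASE": "nom"}
    kim     = {"N": "+", "V": "-", "BAR": "2", "PER": "3",  "PLU": "-",  "CASE": "nom"}
    print(unify(subject, kim))   # -> a fully instantiated category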
In Tomita's LR parsing framework each such rule must be manually converted into a rule of the following form, in which some subpart of each category has been replaced by an atomic symbol:

    VB[BAR 2, PER x, PLU y, VFORM z] -> NN[BAR 2, PER x, PLU y, CASE nom] VB[BAR 1, PER x, PLU y, VFORM z]

However, it is not obvious which features should be so replaced: why not include BAR and CASE? It will be difficult for the grammar writer to make such substitutions in a consistent way, and still more difficult to make them in an optimal way for the purposes of LR parsing, since both steps involve consideration and comparison of all the categories mentioned in each rule of the grammar.

Constructing the LR parse table directly and automatically from a unification grammar would avoid these drawbacks. In this case the LR parse table would be based on complex categories, with unification of complex categories taking the place of equality of atomic ones in the standard LR parse table construction algorithm. However, this approach is computationally prohibitively expensive: Osborne reports that his implementation takes almost 24 hours to construct the LR states for a unification grammar of just 75 productions. Our approach, described below, not only extracts unification information from complex categories, but is computationally tractable for realistic-sized grammars and also safe from inconsistency.

We start with a unification grammar and automatically construct a CF backbone of rules containing categories with atomic names and an associated residue of feature name-value pairs. Each backbone grammar rule is generally in direct one-to-one correspondence with a single unification grammar rule. The LR parse table is then constructed from the CF backbone grammar. The parser is driven by this table, but in addition, when reducing a sequence of constituents, the parser performs the unifications specified in the relevant unification grammar rule to form the category representing the higher-level constituent, and the derivation fails if one of the unifications fails. Our parser is thus similar to Tomita's, except that it performs unifications rather than invoking CF rule augmentations; however, the main difference between our approach and Tomita's is the way in which the CF grammar that drives the parser comes into being. Even though a unification grammar will be at best equivalent to a very large set of atomic-category CF productions, in practice we have obtained LR parsers that perform well from backbone grammars containing only about 30% more productions than the original unification grammar. The construction method ensures that for any given grammar the CF backbone captures at least as much information as the optimal CFG that contains the same number of rules as the unification grammar. Thus the construction method guarantees that the resulting LR parser will terminate and will be as predictive as the source grammar in principle allows.

Building the backbone grammar is a two-stage process:

1. create the disjoint category set: a set of categories, each assigned a distinct atomic name, with which the categories of the object grammar are classified;
2. create the backbone grammar corresponding to the object grammar: for each unification grammar rule, create a backbone grammar rule containing atomic categories, each atomic category being the name assigned to the category in the disjoint category set that unifies with the corresponding category in the unification grammar rule; that is, for each rule R of form C1 -> C2 ... Cn in the unification grammar, add a rule B of form B1 -> B2 ... Bn to the backbone grammar, where Bi is the name assigned to the category in the disjoint set which unifies with Ci, for i = 1 ... n.
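A much-simplified Python sketch of these two stages, reusing unify() from the earlier sketch, might look as follows. The greedy, order-dependent naming of categories is purely illustrative; the actual construction of the disjoint category set is more careful than this.

    def build_backbone(rules):
        """rules: list of (mother, daughters) pairs of feature-bundle categories.
        Returns the disjoint category set (as (atomic_name, representative)
        pairs) and the atomic-category backbone rules."""
        disjoint = []                      # list of (atomic_name, representative)
        def atomic_name(cat):
            for name, rep in disjoint:
                if unify(rep, cat) is not None:
                    return name
            name = "C%d" % len(disjoint)   # fresh atomic name for a new member
            disjoint.append((name, dict(cat)))
            return name
        backbone = [(atomic_name(m), [atomic_name(d) for d in ds]) for m, ds in rules]
        return disjoint, backbone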
For example, for the rules in Figure 3, step 1 would create the disjoint set shown in Figure 4. Figure 5 shows the backbone rules that would be built in step 2.

Algorithms for creating LR parse tables assume that the terminal vocabulary of the grammar is distinct from the nonterminal one, so the procedure described above will not deal properly with a unification grammar rule whose mother category is assumed elsewhere in the grammar to be a lexical category. The modification we make is to automatically associate two different atomic categories, one terminal and one nonterminal, with such categories, and to augment the backbone grammar with a unary rule expanding the nonterminal category to the terminal. Two other aspects of the ANLT grammar formalism require further minor elaborations to the basic algorithm. Firstly, a rule may introduce a gap by including the feature specification NULL on the gapped daughter; for each such daughter an extra rule is added to the backbone grammar expanding the gap category to the null string. Secondly, the formalism allows Kleene star and plus operators; in the ANLT grammar these operators are utilized in rules for coordination. A rule containing Kleene star daughters is treated as two rules, one omitting the daughters concerned and one with the daughters being Kleene plus. A new nonterminal category is created for each distinct Kleene plus category, and two extra rules are added to the backbone grammar to form a right-branching binary tree structure for it; a parser can easily be modified to flatten this out during processing into the intended flat sequence of categories. Figure 6 gives an example of what such a backbone tree looks like. Grammars written in other, more low-level unification grammar formalisms, such as PATR-II, commonly employ treatments of the type just described to deal with phenomena such as gapping, coordination, and compounding. However, this method both allows the grammar writer to continue to use the full facilities of the ANLT formalism and allows the algorithmic derivation of an appropriate backbone grammar to support LR parsing.

The major task of the backbone grammar is to encode sufficient information from the unification grammar to constrain the application of the latter's rules at parse time. The nearly one-to-one mapping of unification grammar rules to backbone grammar rules described above works quite well for the ANLT grammar, with only a couple of exceptions that create spurious shift-reduce conflicts during parsing, resulting in an unacceptable degradation in performance. The phenomena concerned are coordination and unbounded dependency constructions. In the ANLT grammar three very general rules are used to form nominal, adjectival, and prepositional phrases following a conjunction; the categories in these rules lead to otherwise disjoint categories for conjuncts being merged, giving rise to a set of overly general backbone grammar rules. For example, the rule in the ANLT grammar for forming a noun phrase conjunct introduced by a conjunction is

    N2[CONJ con] -> [SUBCAT con, CONJN] H2

The variable value for the CONJ feature in the mother means that all N2 categories specified for this feature are generalized to the same category. This results in the backbone rules, when parsing either Kim or Lee helps, being unable, after forming an N2[CONJ either] for either Kim, to discriminate between the alternatives of preparing to iterate this constituent by shifting the next word, or of starting a new constituent. We solve this problem by declaring CONJ to be a feature that may not have a variable value in an element of the disjoint category set. This directs the system to expand out each unification grammar rule that has a category containing this feature with a variable value into a number of rules fully specified for the feature, and to create backbone rules for each of these. There are eight possible values for CONJ in the grammar, so the general rule for forming a nominal conjunct given above, for example, ends up being represented by a set of eight specialized backbone grammar rules.
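The expansion step can be sketched as follows; the concrete value set and the dictionary encoding of categories are again illustrative rather than the grammar's actual ones. A variable shared between the mother and a daughter, as in the conjunct rule above, is instantiated consistently across the whole rule.

    from itertools import product

    # Stand-in values for CONJ; the grammar's actual eight values may differ.
    CONJ_VALUES = ["and", "or", "but", "neither", "nor", "either", "both", "null"]

    def expand_rule(rule_cats, declared=("conj",), values=CONJ_VALUES):
        """Expand a rule (a list of categories, mother first) into fully
        specified copies, one for each way of instantiating the variables that
        occur as the value of a declared feature."""
        to_fix = sorted({v for c in rule_cats for f, v in c.items()
                         if f in declared and str(v).startswith("?")})
        expanded = []
        for choice in product(values, repeat=len(to_fix)):
            binding = dict(zip(to_fix, choice))
            expanded.append([{f: binding.get(v, v) for f, v in c.items()}
                             for c in rule_cats])
        return expanded

    # The nominal-conjunct rule above, with the shared variable ?con,
    # yields eight specialized rules.
    conjunct_rule = [
        {"N": "+", "BAR": "2", "conj": "?con"},     # mother N2[CONJ con]
        {"subcat": "?con", "conjn": "+"},           # the conjunction word
        {"N": "+", "BAR": "2"},                     # head daughter (schematic H2)
    ]
    assert len(expand_rule(conjunct_rule)) == 8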
In the grammar, unbounded dependency constructions are analyzed by propagating the preposed constituent through the parse tree as the value of the SLASH feature, to link it with the gap that appears in the constituent's normal position. All nonlexical major categories contain the feature, rules in the grammar propagating it between the mother and a single daughter; other daughters are marked [SLASH NOSLASH], indicating that the daughter is not gapped. Backbone grammar construction would normally lose the information in the unification grammar about where gaps are allowed to occur, significantly degrading the performance of a parser. To carry the information over into the backbone, we declare that wherever SLASH occurs with a variable value, the value should be expanded out into two values: NOSLASH and a notional value unifying with anything except NOSLASH. We have also experimented with a smaller grammar employing gap threading, an alternative treatment of unbounded dependency constructions. We were able to use the same techniques for expanding out and inference on the values of the features used for threading the gaps to produce a backbone grammar that had the same constraining power with respect to gaps as the original grammar.

To date we have not attempted to compute CF backbones for grammars written in formalisms with minimal phrase structure components and completely general categories, such as HPSG and UCG; more extensive inference on patterns of possible unification within nested categories, and appropriate expanding-out of the categories concerned, would be necessary for an LR parser to work effectively. This and other areas of complexity in unification-based formalisms need further investigation before we can claim to have developed a system capable of producing a useful LR parse table for any unification-based grammar. In particular, declaring certain category-valued features so that they cannot take variable values may lead to nontermination in the backbone construction for some grammars. However, it should be possible to restrict the set of features that are considered in category-valued features, in a way analogous to Shieber's restrictors for Earley's algorithm, so that a parse table can still be constructed.

The backbone grammar generated from the ANLT grammar is large: it contains almost 500 distinct categories and more than 1,600 productions. When we construct the LALR parse table we therefore require an algorithm with practical time and space requirements. In the LR parsing literature there are essentially two approaches to constructing LALR parse tables. One approach is graph-based, transforming the parse table construction problem to a set of well-known directed graph problems, which in turn are solvable by efficient algorithms. Unfortunately, this approach does not work for grammars that are not LR(k) for any k, for example ambiguous grammars. We therefore broadly follow the alternative approach of Aho, Sethi, and Ullman, but with a number of optimizations. The resulting table is very large, containing many thousands of goto and shift entries and 670,000 reduce actions;
however of the goto entries only 2600 are distinct and of the shift actions only 1100 are distinct most states contain just reduce or just shift actions and in any one state very few different rules are involved in reduce actionsthe majority of states contain just reduce or just shift actions and in any one state very few different rules are involved in reduce actionstaking advantage of the characteristics of this distribution in each state we represent for the grammars we have investigated this representation achieves a similar order of space saving to the comb vector representation suggested by aho sethi and ullman for unambiguous grammars the parse table for the anlt grammar occupies approximately 360 kbytes of memory and so represents each action in an average of less than 23 bitsin contrast to conventional techniques though we maintain a faithful representation of the parse table not replacing error entries with more convenient nonerror ones in order to save extra spaceour parsers are thus able to detect failures as soon as theoretically possible an important efficiency feature when parsing nondeterministically with ambiguous grammars and a timesaving feature when parsing interactively with them table 1 compares the size of the lalr parse table for the anlt grammar with others reported in the literaturefrom these figures the anlt grammar is more than twice the size of tomita grammar for japanese the grammar itself is about one order of magnitude bigger than that of a typical programming language but the lalr parse table in terms of number of actions is two orders of magnitude biggeralthough tomita anticipates lr parsing techniques being applied to large nl grammars written in formalisms such as gpsg the sizes of parse tables for such grammars grow more rapidly than he predictshowever for large realworld nl grammars such as the anlt the table size is still quite manageable despite johnson worstcase complexity result of the number of lr states being exponential on grammar size we have therefore not found it necessary to use schabes lrlike tables as might be expected and table 2 illustrates parse table construction for large grammars is cpuintensiveas a rough guide grosch quotes lalr table construction for a grammar for modula2 taking from about 5 to 50 seconds so scaling up two orders of magnitude our timings for the anlt grammar fall in the expected regionthe major problem with attempting to employ a disambiguated training corpus is to find a way of constructing this corpus in an errorfree and resourceefficient fashioneven manual assignment of lexical categories is slow laborintensive and errorpronethe greater complexity of constructing a complete parse makes the totally manual approach very unattractive if not impracticalsampson reports that it took 2 personyears to produce the lob tree bank of 50000 wordsfurthermore in that project no attempt was made to ensure that the analyses were well formed with respect to a generative grammarattempting to manually construct analyses consistent with a grammar of any size and sophistication would place an enormous additional load on the analystleech and garside discuss the problems that arise in manual parsing of corpora concerning accuracy and consistency of analyses across time and analyst the laborintensive nature of producing detailed analyses and so forththey advocate an approach in which simple keleton parses are produced by hand from previously tagged material with checking for consistency between analyststhese skeleton analyses can then 
be augmented automatically with further information implicit in the lexical tagswhile this approach may well be the best that can be achieved with fully manual techniques it is still unsatisfactory in several respectsfirstly the analyses are crude while we would like to automatically parse with a grammar capable of assigning sophisticated semantically interpretable ones but it is not clear how to train an existing grammar with such unrelated analysessecondly the quality of any grammar obtained automatically from the parsed corpus is likely to be poor because of the lack of any rigorous checks on the form of the skeleton parsessuch a grammar might in principle be trained from the parsed corpus but there are still likely to be small mismatches between the actual analysis assigned manually and any assigned automaticallyfor these reasons we decided to attempt to produce a training corpus using the grammar that we wished ultimately to trainas long as the method employed ensured that any analysis assigned was a member of the set defined by the grammar these problems during training should not arisefollowing our experience of constructing a substantial lexicon for the anlt grammar from unreliable and indeterminate data we decided to construct the disambiguated training corpus semiautomatically restricting manual interaction to selection between alternatives defined by the anlt grammarone obvious technique would be to generate all possible parses with a conventional parser and to have the analyst select the correct parse from the set returned however this approach places a great load on the analyst who will routinely need to examine large numbers of parses for given sentencesin addition computation of all possible analyses is likely to be expensive and in the limit intractablebriscoe demonstrates that the structure of the search space in parse derivations makes a lefttoright incremental mode of parse selection most efficientfor example in noun compounds analyzed using a recursive binarybranching rule the number of analyses correlates with the catalan series so a 3word compound has 2 analyses 4 has 5 5 has 14 9 has 1430 and so forthhowever briscoe shows that with a simple bounded context parser set up to request help whenever a parse indeterminacy arises it is possible to select any of the 14 analyses of a 5word compound with a maximum of 5 interactions and any of the 1430 analyses of a 9word compound with around 13 interactionsin general resolution of the first indeterminacy in the input will rule out approximately half the potential analyses resolution of the next half of the remaining ones and so onfor worst case cf ambiguities complexity this approach to parse selection appears empirically to involve numbers of interactions that increase at little more than linear rate with respect to the length of the inputit is possible to exploit this insight in two waysone method would be to compute all possible analyses represented as a parse forest and ask the user to select between competing subanalyses that have been incorporated into a successful analysis of the inputin this way only genuine global syntactic ambiguities would need to be considered by the userhowever the disadvantage of this approach is that it relies on a prior online computation of the full set of analysesthe second method involves incremental interaction with the parser during the parse to guide it through the search space of possibilitiesthis has the advantage of being guaranteed to be computationally tractable but the potential 
disadvantage of requiring the user to resolve many local syntactic ambiguities that will not be incorporated into a successful analysisnevertheless using lr techniques this problem can be minimized and because we do not wish to develop a system that must be able to compute all possible analyses in order to return the most plausible one we have chosen the latter incremental methodthe interactive incremental parsing system that we implemented asks the user for a decision at each choice point during the parsehowever to be usable in practice such a system must avoid as far as possible presenting the user with spurious choices that could be ruled out either by using more of the left context or by looking at words yet to be parsedour approach goes some way to addressing these points since the parser is as predictive as the backbone grammar and lr technique allow and the lalr parse table allows one word lookahead to resolve some ambiguities in fact lr parsing is the most effectively predictive parsing technique for which an automatic compilation procedure is known but this is somewhat undermined by our use of features which will block some derivations so that the valid prefix property will no longer hold extensions to the lr technique for example those using lrregular grammars might be used to further cut down on interactions however computation of the parse tables to drive such extended lr parsers may prove intractable for large nl grammars an lr parser faces an indeterminacy when it enters a state in which there is more than one possible action given the current lookaheadin a particular state there cannot be more than one shift or accept action but there can be several reduce actions each specifying a reduction with a different rulewhen parsing each shift or reduce choice must lead to a different final structure and so the indeterminacy represents a point of syntactic ambiguity in the anlt grammar and lexicon lexical ambiguity is at least as pervasive as structural ambiguitya naive implementation of an interactive lr parser would ask the user the correct category for each ambiguous word as it was shifted many openclass words are assigned upwards of twenty lexical categories by the anlt lexicon with comparatively fine distinctions between them so this strategy would be completely impracticableto avoid asking the user about lexical ambiguity we use the technique of preterminal delaying in which the assignment of an atomic preterminal category to a lexical item is not made until the choice is forced by the use of a particular production in a later reduce actionafter shifting an ambiguous lexical item the parser enters a state corresponding to the union of states that would be entered on shifting the individual lexical categoriessince in general several unification grammar categories for a single word may be subsumed by a single atomic preterminal category we extend shieber technique so that it deals with a grammar containing complex categories by associating a set of alternative analyses with each state and letting the choice between them be forced by later reduce actions just as with atomic preterminal categoriesin order not to overload the user with spurious choices concerning local ambiguities the parser does not request help immediately after it reaches a parse action conflictinstead the parser pursues each option in a limited breadthfirst fashion and only requests help with analysis paths that remain activein our current system this type of lookahead is limited to up to four indeterminacies 
aheadsuch checking is cheap in terms of machine resources and very effective in cutting down both the number of choice points the user is forced to consider and also the average number of options in each onetable 3 shows the reduction in user interaction achieved by increasing the amount of lookahead in our systemcomputation of the backbone grammar generates extra rules that do not correspond directly to single unification grammar rulesat choice points reductions involving these rules are not presented to the user instead the system applies the reductions automatically proceeding until the next shift action or choice point is reached including these options in those presented to the userthe final set of measures taken to reduce the amount of interaction required with the user is to ask if the phrase being parsed contains one or more gaps or instances of coordination before presenting choices involving either of these phenomena blocking consideration of rules on the basis of the presence of particular featurevalue pairsfigure 7 shows the system parsing a phrase with a fourchoice lookaheadthe resulting parse tree is displayed with category aliases substituted for the actual complex categoriesthe requests for manual selection of the analysis path are displayed to the analyst in as terse a manner as possible and require knowledge of the anlt grammar and lexicon to be resolved effectivelyfigure 8 summarizes the amount of interaction required in the experiment reported below for parsing a set of 150 ldoce noun definitions with the anlt grammarto date the largest number of interactions we have observed for a single phrase is 55 for the ldoce definition for youth hostel achieving the correct analysis interactively took the first author about 40 minutes definitions of this length will often have many hundreds or even thousands of parses computing just the parse forest for this definition takes of the order of two hours of cpu time since in a more general corpus of written material the average sentence length is likely to be 3040 words this example illustrates clearly the problems with any approach based on post hoc online selection of the correct parsehowever using numbers of definitions requiring particular amounts of interaction the incremental approach to semiautomatic parsing we have been able to demonstrate that the correct analysis is among this setfurthermore a probabilistic parser such as the one described later may well be able to compute this analysis in a tractable fashion by extracting it from the parse forestthe parse histories resulting from semiautomatic parsing are automatically stored and can be used to derive the probabilistic information that will guide the parser after trainingwe return to a discussion of the manner in which this information is utilized in section 7as well as building an interactive parsing system incorporating the anlt grammar we have implemented a breadthfirst nondeterministic lr parser for unification grammarsthis parser is integrated with the grammar development environment in the anlt system and provided as an alternative parser for use with stable grammars for batch parsing of large bodies of textthe existing chart parser although slower has been retained since it is more suited to grammar development because of the speed with which modifications to the grammar can be compiled and its better debugging facilities our nondeterministic lr parser is based on kipps reformulation of tomita parsing algorithm and uses a graphstructured stack in the same wayour 
parser is driven by the lalr state table computed from the backbone grammar but in addition on each reduction the parser performs the unifications appropriate to the unification grammar version of the backbone rule involvedthe analysis being pursued fails if one of the unifications failsthe parser performs subanalysis sharing and local ambiguity packing however we generalize the technique of atomic category packing described by tomita driven by atomic category names to complex featurebased categories following alshawi the packing of subanalyses is driven by the subsumption relationship between the feature values in their top nodesan analysis is only packed into one that has already been found if its top node is subsumed by or is equal to that of the one already foundan analysis once packed will thus never need to be unpacked during parsing since the value of each feature will always be uniquely determinedour use of local ambiguity packing does not in practice seem to result in exponentially bad performance with respect to sentence length since we have been able to generate packed parse forests for sentences of over 30 words having many thousands of parseswe have implemented a unification version of schabes chartbased lrlike parser but experiments with the anlt grammar suggest that it offers no practical advantages over our tomitastyle parser and schabes table construction algorithm yields less finegrained and therefore less predictive parse tablesnevertheless searching the parse forest exhaustively to recover each distinct analysis proved computationally intractable for sentences over about 22 words in lengthwright wrigley and sharman describe a viterbilike algorithm for unpacking parse forests containing probabilities of analyses to find the nbest analyses but this approach does not generalize to our approach in which unification failure on the different extensions of packed nodes cannot be computed locallyin subsequent work we have developed such a heuristic technique for bestfirst search of the parse forest which in practice makes the recovery of the most probable analyses much more efficient we noticed during preliminary experiments with our unification lr parser that it was often the case that the same unifications were being performed repeatedly even during the course of a single reduce actionthe duplication was happening in cases where two or more pairs of states in the graphstructured stack had identical complex categories between them during a reduction with a given rule the categories between each pair of states in a backwards traversal of the stack are collected and unified with the appropriate daughters of the ruleidentical categories appearing here between traversed pairs of states leads to duplication of unificationsby caching unification results we eliminated this wasted effort and improved the initially poor performance of the parser by a factor of about threeas for actual parse times table 4 compares those for the gde chart parser the semiautomatic userdirected lr parser and the nondeterministic lr parserour general experience is that although the nondeterministic lr parser is only around 3050 faster than the chart parser it often generates as little as a third the amount of garbageefficient use of space is obviously an important factor for practical parsing of long and ambiguous texts7lr parsing with probabilistic disambiguation several researchers have proposed using lr parsers as a practical method of parsing with a probabilistic contextfree grammarthis approach assumes 
that probabilities are already associated with a cfg and describes techniques for distributing those probabilities around the lr parse table in such a way that a probabilistic ranking of alternative analyses can be computed quickly at parse time and probabilities assigned to analyses will be identical to those defined by the original probabilistic cfghowever our method of constructing the training corpus allows us to associate probabilities with an lr parse table directly rather than simply with rules of the grammaran lr parse state encodes information about the left and right context of the current parsederiving probabilities relative to the parse context will allow the probabilistic parser to distinguish situations in which identical rules reapply in different ways across different derivations or apply with differing probabilities in different contextssemiautomatic parsing of the training corpus yields a set of lr parse histories that are used to construct the probabilistic version of the lalr parse tablethe parse table is a nondeterministic finitestate automaton so it is possible to apply markov modeling techniques to the parse table each row of the parse table corresponds to the possible transitions out of the state represented by that row and each transition is associated with a particular lookahead item and a parse actionnondeterminism arises when more than one action and hence transition is possible given a particular lookahead itemthe most straightforward technique for associating probabilities with the parse table is to assign a probability to each action in the action part of the table 5 if probabilities are associated directly with the parse table rather than derived from a probabilistic cfg or equivalent global pairing of probabilities to rules then the resulting probabilistic model will be more sensitive to parse contextfor example in a derivation for the sentence he loves her using grammar 1 the distinction between reducing the first pronoun and second pronoun to npusing rule 5 can be maintained in terms of the different lookahead items paired with the reduce actions relating to this rule in the first case the lookahead item will be vi and in the second however this approach does not make maximal use of the context encoded into a transition in the parse table and it is possible to devise situations in which the reduction of a pronoun in subject position and elsewhere would be indistinguishable in terms of lookahead alone for example if we added appropriate rules for adverbs to grammar 1 then this reduction would be possible with lookahead adv in sentences such as he passionately loves her and he loves her passionatelya slightly less obvious approach is to further subdivide reduce actions according to the state reached after the reduce action has appliedthis state is used together with the resultant nonterminal to define the state transition in the goto part of the parse tablethus this move corresponds to associating probabilities with transitions in the automaton rather than with actions in the action part of the tablefor example a reduction of pronoun to np in subject position in the parse table for grammar 1 in figure 2 always results in the parser returning to state 0 reduction to np of a pronoun in object position always results in the parser returning to state 11thus training on a corpus with more subject than nonsubject pronominal nps will now result in a probabilistic preference for reductions that return to presubject states with postsubject lookaheadsof course this 
does not mean that it will be impossible to devise grammars in which reductions that might in principle have different frequencies of occurrence cannot be kept distinct. However, this approach appears to be the natural stochastic probabilistic model that emerges when using an LALR table; any further sensitivity to context would require sensitivity to patterns in larger sections of a parse derivation than can be defined in terms of such a table.

The probabilities required to create the probabilistic version of the parse table can be derived from the set of parse histories resulting from the training phase described in Section 5, by computing the frequency with which each transition from a particular state has been taken and converting these to probabilities, such that the probabilities assigned to each transition from a given state sum to one. In Figure 9 we show a probabilistic LALR parse table for Grammar 1 derived from a simple partial training phase. In this version of the table a probability is associated with each shift action in the standard way, but separate probabilities are associated with reduce actions depending on the state reached after the action; for example, in state 4 with lookahead n, the probability of reducing with rule 10 is 0.17 if the state reached is 3 and 0.22 if the state reached is 5. The actions that have no associated probabilities are ones that have not been utilized during the training phase; each is assigned a smoothed probability that is the reciprocal of the result of adding one to the total number of observations of actions actually taken in that state. Differential probabilities are thus assigned to unseen events in a manner analogous to the Good-Turing technique. For this reason, the explicit probabilities for each row add up to less than one. The goto part of the table is not shown, because it is always deterministic and therefore we do not associate probabilities with goto transitions.

The difference between our approach and one based on probabilistic CFG can be brought out by considering various probabilistic derivations using the probabilistic parse table for Grammar 1. Assuming that we are using probabilities simply to rank parses, we can compute the total probability of an analysis by multiplying together the probabilities of each transition we take during its derivation. In Figure 10 we give the two possible complete derivations for a sentence such as the winter holiday camp closed, consisting of a determiner, three nouns, and an intransitive verb. The ambiguity concerns whether the noun compound is left- or right-branching, and, as we saw in Section 2, a probabilistic CFG cannot distinguish these two derivations. The probability of each step can be read off the action table and is shown after the lookahead item in the figure. In step 8 a shift-reduce conflict occurs, so the stack splits while the left- and right-branching analyses of the noun compound are constructed. The A branch corresponds to the right-branching derivation, and the product of the probabilities is 4.6 x 10^-8, while the product for the left-branching B derivation is 5.1 x 10^-7. Since the table was constructed from parse histories with a preponderance of left-branching structures, this is the desired result.
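A minimal Python sketch of the probability estimation just described might look as follows. The (state, lookahead, action) encoding of parse-history transitions is the sketch's own, and treating seen transitions as count/(total + 1), so that each row's explicit probabilities sum to less than one, is one consistent reading of the smoothing described above.

    from collections import Counter, defaultdict

    def estimate_action_probs(parse_histories):
        """parse_histories: one list of transitions per training derivation,
        each transition a (state, lookahead, action) triple; for reduce actions
        the `action` element also records the state reached after the
        reduction, as described in the text."""
        counts = defaultdict(Counter)
        for history in parse_histories:
            for state, lookahead, action in history:
                counts[state][(lookahead, action)] += 1
        probs, unseen = {}, {}
        for state, c in counts.items():
            total = sum(c.values())
            # seen transitions: count / (total + 1), so a row sums to < 1
            probs[state] = {t: n / (total + 1) for t, n in c.items()}
            # any transition unseen in training gets 1 / (total + 1)
            unseen[state] = 1.0 / (total + 1)
        return probs, unseen

    def transition_prob(probs, unseen, state, lookahead, action):
        # states never visited in training fall back to 1 / (0 + 1) = 1
        return probs.get(state, {}).get((lookahead, action), unseen.get(state, 1.0))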
In practice this technique is able to distinguish and train accurately on 3 of the 5 possible structures for a 4-word noun-noun compound, but it inaccurately prefers a completely left-branching analysis over certain other bracketings, and once we move to 5-word noun-noun compounds performance degrades further. However, this level of performance on such structural configurations is likely to be adequate, because correct resolution of most ambiguity in such constructions is likely to be dominated by the actual lexical items that occur in individual texts. Nevertheless, if there are systematic structural tendencies evident in corpora, then the probabilistic model is sensitive enough to discriminate them.

In practice we take the geometric mean of the probabilities, rather than their product, to rank parse derivations; otherwise it would be difficult to prevent the system from always developing a bias in favor of analyses involving fewer rules or, equivalently, smaller trees, almost regardless of the training material. Of course, the need for this step reflects the fact that, although the model is more context-dependent than probabilistic CFG, it is by no means a perfect probabilistic model of NL. For example, the stochastic nature of the model, and the fact that the entire left context of a parse derivation is not encoded in LR state information, means that the probabilistic model cannot take account of, say, the pattern of resolution of earlier conflicts in the current derivation. Another respect in which the model is approximate is that we are associating probabilities with the context-free backbone of the unification grammar. Successful unification of features at parse time does not affect the probability of an analysis, while unification failure in effect sets the probability of any such analysis to zero. As long as we only use the probabilistic model to rank successful analyses, this is not particularly problematic. However, parser control regimes that attempt some form of best-first search using probabilistic information associated with transitions might not yield the desired result, given this property. For example, it is not possible to use Viterbi-style optimization of the search for the maximally probable parse, because this derivation may contain a subanalysis that will be pruned locally before a subsequent unification failure renders the current most probable analysis impossible.

In general, the current breadth-first probabilistic parser is more efficient than its nonprobabilistic counterpart described in the previous section. In contrast to the parser described by Ng and Tomita, our probabilistic parser is able to merge configurations and in all cases still maintain a full record of all the probabilities computed up to that point, since it associates probabilities with partial analyses of the input so far rather than with nodes in the graph-structured stack. We are currently experimenting with techniques for probabilistically unpacking the packed parse forest to recover the first few most probable derivations without the need for exhaustive search or full expansion.
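The geometric-mean ranking mentioned above can be sketched in a few lines of Python, reusing transition_prob() from the previous sketch; computing the mean of log probabilities is simply one standard way of doing this, not a description of the system's actual implementation.

    import math

    def rank_derivations(derivations, probs, unseen):
        """Rank candidate derivations (lists of (state, lookahead, action)
        triples) by the geometric mean of their transition probabilities --
        equivalently the mean log probability -- so that derivations using
        fewer transitions are not automatically favored, as they would be
        under a raw product."""
        def score(history):
            logs = [math.log(transition_prob(probs, unseen, s, la, a))
                    for s, la, a in history]
            return math.exp(sum(logs) / len(logs))
        return sorted(derivations, key=score, reverse=True)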
In order to test the techniques and ideas described in previous sections, we undertook a preliminary experiment using a subset of LDOCE noun definitions as our test corpus. A corpus of approximately 32,000 noun definitions was created from LDOCE by extracting the definition fields and normalizing the definitions to remove punctuation, font control information, and so forth. A lexicon was created for this corpus by extracting the appropriate lemmas and matching these against entries in the ANLT lexicon. The 10,600 resultant entries were loaded into the ANLT morphological system, and this sublexicon and the full ANLT grammar formed the starting point for the training process.

A total of 246 definitions, selected without regard for their syntactic form, were parsed semi-automatically using the parser described in Section 5. During this process further rules and lexical entries were created for some definitions that failed to parse. Of the total number, 150 were successfully parsed, and 63 lexical entries and 14 rules were added. Some of the rules required reflected general inadequacies in the ANLT grammar; for example, we added rules to deal with new partitives and prepositional phrase and verb complementation. However, 7 of these rules cover relatively idiosyncratic properties of the definition sublanguage: for example, the postmodification of pronouns by relative clause and prepositional phrase in definitions beginning something that ... or that of ..., parenthetical phrases headed by adverbs such as the period (esp the period ...), coordinations without explicit conjunctions ending with etc, and so forth. Further special rules will be required to deal with brackets in definitions, to cover conventions such as a man (or woman) who lives in a monastery, which we ignored for this test. Nevertheless, the number of new rules required is not great, and the need for most was identified very early in the training process. Lexical entries are more problematic, since there is little sign that the number of new entries required will tail off. However, many of the entries required reflect systematic inadequacies in the ANLT lexicon rather than idiosyncrasies of the corpus. It took approximately one person-month to produce this training corpus. As a rough guide, it takes an average of 15 seconds to resolve a single interaction with the parser; however, the time a parse takes can often be lengthened by incorrect choices and by the process of adding lexical entries and occasional rules.

The resultant parse histories were used to construct the probabilistic parser; this parser was then used to reparse the training corpus, and the most highly ranked analyses were automatically compared with the original parse histories. We have been able to reparse in a breadth-first fashion all but 3 of the 150 definitions that were parsed manually, the exceptions being over 25 words in length (measures based on bigram and trigram word models and an estimate of an infinite model were PP 104, PP 41, and PP 8). There are 22 definitions one word in length; all of these trivially receive correct analyses. There are 89 definitions between two and ten words in length inclusive; of these, in 68 cases the correct analysis is also the most highly ranked, and in 13 of the 21 remaining cases the correct analysis is the second or third most highly ranked analysis. Looking at these 21 cases in more detail, in 8 there is an inappropriate structural preference for low or local attachment, in 4 an inappropriate preference for compounds, and in 6 of the remaining 9 cases the highest-ranked result contains a misanalysis of a single constituent two or three words in length. If these results are interpreted in terms of a goodness-of-fit measure such as that of Sampson, Haigh, and Atwell, the measure would be better than 96%. If we take correct parses per sentence as our measure, then the result is 76%. For definitions longer than 10 words this latter figure tails off, mainly due to misapplication of such statistically induced, but nevertheless structural, attachment preferences. Figure 11 summarizes these results. We also parsed a further 55 LDOCE noun definitions not drawn from the training corpus, each containing up to 10 words; of these, in 41 cases the correct parse is the most highly ranked, in 6 cases it is the second or third most highly
ranked and in the remaining 8 cases it is not in the first three analysesthis yields a correct parsesentence measure of 75examination of the failures again reveals that a preference for local attachment of postmodifiers accounts for 5 cases a preference for compounds for 1 and the misanalysis of a single constituent for 2the others are mostly caused by the lack of lexical entries with appropriate subcat featuresin figure 12 we show the analysis for the unseen definition of affectation which has 20 parses of which the most highly ranked is correctparse tree for a person or thing that supports or helpsfigure 13 shows the highestranked analysis assigned to one definition of aidthis is an example of a false positive which in this case is caused by the lack of a lexical entry for support as an intransitive verbconsequently the parser finds and ranks highest an analysis in which supports and helps are treated as transitive verbs forming verb phrases with object np gaps and that supports or helps as a zero relative clause with that analyzed as a prenominal subjectcompare a person or thing that that supports or helpsit is difficult to fault this analysis and the same is true for the other false positives we have looked atsuch false positives present the biggest challenge to the type of system we are attempting to developone hopeful sign is that the analyses assigned such examples appear to have low probabilities relative to most probable correct analyses of other exampleshowever considerably more data will be required before we can decide whether this trend is robust enough to provide the basis for automatic identification of false positivesusing a manually disambiguated training corpus and manually tuned grammar appears feasible with the definitions sublanguageresults comparable to those obtained by fujisaki et al and sharman jelinek and mercer are possible on the basis of a quite modest amount of manual effort and a very much smaller training corpus because the parse histories contain little noise and usefully reflect the semantically and pragmatically appropriate analysis in the training corpus and because the number of failures of coverage were reduced to some extent by adding the rules specifically motivated by the training corpusunlike fujisalci et al or sharman jelinek and mercer we did not integrate information about lexemes into the rule probabilities or make use of lexical syntactic probabilityit seems likely that the structural preference for local attachment might be overruled in appropriate contexts if lexeme information were taken into accountthe slightly worse results obtained for the unseen data appear to be caused more by the nonexistence of a correct analysis in a number of cases rather than by a marked decline in the usefulness of the rule probabilitiesthis again highlights the need to deal effectively with examples outside the coverage of the grammarthe system that we have developed offers partial and practical solutions to two of the three problems of corpus analysis we identified in the introductionthe problem of tuning an existing grammar to a particular corpus or sublanguage is addressed partly by manual extensions to the grammar and lexicon during the semiautomatic training phase and partly by use of statistical information regarding frequency of rule use gathered during this phasethe results of the experiment reported in the last section suggest that syntactic peculiarities of a sublanguage or corpus surface quite rapidly so that manual additions to the grammar during the 
training phase are practicalhowever lexical idiosyncrasies are far less likely to be exhausted during the training phase suggesting that it will be necessary to develop an automatic method of dealing with themin addition the current system does not take account of differing frequencies of occurrence of lexical entries for example in the lob corpus the verb believe occurs with a finite sentential complement in 90 of citations although it is grammatical with at least five further patterns of complementationthis type of lexical information which will very likely vary between sublanguages should be integrated into the probabilistic modelthis will be straightforward in terms of the model since it merely involves associating probabilities with each distinct lexical entry for a lexeme and carrying these forward in the computation of the likelihood of each parsehowever the acquisition of the statistical information from which these probabilities can be derived is more problematicexisting lexical taggers are unable to assign tags that reliably encode subcategorization informationit seems likely that automatic acquisition of such information must await successful techniques for robust parsing of at least phrases in corpora the task of selecting the correct analysis from the set licensed by the grammar is also partially solved by the systemit is clear from the results of the preliminary experiment reported in the previous section that it is possible to make the semantically and pragmatically correct analysis highly ranked and even most highly ranked in many cases just by exploiting the frequency of occurrence of the syntactic rules in the training datahowever it is also clear that this approach will not succeed in all cases for example in the experiment the system appears to have developed a preference for local attachment of prepositional phrases which is inappropriate in a significant number of casesit is not surprising that probabilities based solely on the frequency of syntactic rules are not capable of resolving this type of ambiguity in an example such as john saw the man on monday again it is the temporal interpretation of monday that favors the adverbial interpretation such examples are syntactically identical to ones such as john saw the man on the bus again in which the possibility of a locative interpretation creates a mild preference for the adjectival reading and local attachmentto select the correct analysis in such cases it will be necessary to integrate information concerning word sense collocations into the probabilistic analysisin this case we are interested in collocations between the head of a pp complement a preposition and the head of the phrase being postmodifiedin general these words will not be adjacent in the text so it will not be possible to use existing approaches unmodified because these apply to adjacent words in unanalyzed texthindle and rooth report good results using a mutual information measure of collocation applied within such a structurally defined context and their approach should carry over to our framework straightforwardlyone way of integrating tructural collocational information into the system presented above would be to make use of the semantic component of the grammarthis component pairs logical forms with each distinct syntactic analysis that represent among other things the predicateargument structure of the inputin the resolution of pp attachment and similar ambiguities it is collocation at this level of representation that appears to be most 
relevantintegrating a probabilistic ranking of the resultant logical forms with the probabilistic ranking of the distinct syntactic analyses presents no problems in principlehowever once again the acquisition of the relevant statistical information will be difficult because it will require considerable quantities of analyzed text as training materialone way to ameliorate the problem might be to reduce the size of the vocabulary for which statistics need to be gathered by replacing lexical items with their superordinate terms copestake describes a program capable of extracting the genus term of a definition from an ldoce definition resolving the sense of such terms and constructing hierarchical taxonomies of the resulting word sensestaxonomies of this form might be used to replace pp complement heads and postmodified heads in corpus data with a smaller number of superordinate conceptsthis would make the statistical data concerning trigrams of headprepositionhead less sparse and easier to gather from a corpusnevertheless it will only be possible to gather such data from determinately syntactically analyzed materialthe third problem of dealing usefully with examples outside the coverage of the grammar even after training is not addressed by the system we have developednevertheless the results of the preliminary experiment for unseen examples indicate that it is a significant problem at least with respect to lexical entriesa large part of the problem with such examples is identifying them automaticallysome such examples will not receive any parse and will therefore be easy to spotmany though will receive incorrect parses and can therefore only be identified manually jensen et al describe an approach to parsing such examples based on parse fitting or rule relaxation to deal with illformed inputan approach of this type might work with input that receives no parse but cannot help with the identification of those that only receive an incorrect onein addition it involves annotating each grammar rule about what should be relaxed and requires that semantic interpretation can be extended to fitted or partial parses sampson haigh and atwell propose a more thoroughgoing probabilistic approach in which the parser uses a statistically defined measure of closest fit to the set of analyses contained in a tree bank of training data to assign an analysisthis approach attempts to ensure that analyses of new data will conform as closely as possible to existing ones but does not require that analyses assigned are well formed with respect to any given generative grammar implicit in the tree bank analysessampson haigh and atwell report some preliminary results for a parser of this type that uses the technique of simulated annealing to assign the closest fitting analysis on the basis of initial training on the lob treebank and automatic updating of its statistical data on the basis of further parsed examplessampson haigh and atwell give their results in terms of a similarity measure with respect to correct analyses assigned by handfor a 13sentence sample the mean similarity measure was 80 and only one example received a fully correct analysisthese results suggest that the technique is not reliable enough for practical corpus analysis to datein addition the analyses assigned on the basis of the lob treebank scheme are not syntactically determinate a more promising approach with similar potential robustness would be to infer a probabilistic grammar using baumwelch reestimation from a given training corpus and 
predefined category set following lari and young and pereira and schabes this approach has the advantage that the resulting grammar defines a welldefined set of analyses for which rules of compositional interpretation might be developedhowever the technique is limited in several ways firstly such grammars are restricted to small cnf cfgs because of the computational cost of iterative reestimation with an algorithm polynomial in sentence length and nonterminal category size and secondly because some form of supervised training will be essential if the analyses assigned by the grammar are to be linguistically motivatedimmediate prospects for applying such techniques to realistic nl grammars do not seem promisingthe anlt backbone grammar discussed in section 4 contains almost 500 categorieshowever briscoe and waegner describe an experiment in which firstly baumwelch reestimation was used in conjunction with other more linguistically motivated constraints on the class of grammars that could be inferred such as headedness and secondly initial probabilities were heavily biased in favor of manually coded linguistically highly plausible rulesthis approach resulted in a simple tag sequence grammar often able to assign coherent and semanticallypragmatically plausible analyses to tag sequences drawn from the spoken english corpusby combining such techniques and relaxing the cnf constraint for example by adopting the trellis algorithm version of baumwelch reestimation it might be possible to create a computationally tractable system operating with a realistic nl grammar that would only infer a new rule from a finite space of linguistically motivated possibilities in the face of parse failure or improbabilityin the shorter term such techniques combined with simple tag sequence grammars might yield robust phraselevel skeleton parsers that could be used as corpus analysis toolsthe utility of the system reported here would be considerably improved by a more tractable approach to probabilistically unpacking the packed parse forest than exhaustive searchfinding the nbest analyses would allow us to recover analyses for longer sentences where a parse forest is constructed and would make the approach generally more efficientcarroll and briscoe present a heuristic algorithm for parse forest unpacking that interleaves normalization of competing subanalyses with bestfirst extraction of the n most probable analysesnormalization of competing subanalyses with respect to the longest derivation both allows us to prune the search probabilistically and to treat the probability of analyses as the product of the probability of their subanalyses without biasing the system in favor of shorter derivationsthis modified version of the system presented here is able to return analyses for sentences over 31 words in length yields slightly better results on a replication of the experiment reported in section 8 and the resultant parser is approximately three times faster at returning the three highestranked parses than that presented herein conclusion the main positive points of the paper are that 1 lr parse tables can be used to define a more contextdependent and adequate probabilistic model of nl 2 predictive lr parse tables can be constructed automatically from unificationbased grammars in standard notation 3 effective parse table construction and representation techniques can be defined for realistically sized ambiguous nl grammars 4 semiautomatic lr based parse techniques can be used to efficiently construct training corpora
and 5 the lr parser and anlt grammar jointly define a useful probabilistic model into which probabilities concerning lexical subcategorization and structurally defined word sense collocations could be integratedthis research is supported by sercdtiied project 411261 extensions to the alvey natural language tools and by esprit bra 3030 acquisition of lexical information from machinereadable dictionarieswe would like to thank longman group ltd for allowing us access to the ldoce mrd and ann copestake and antonio sanfilippo for considerable help in the analysis of the ldoce noun definition corpusrichard sharman kindly calculated the perplexity measures for this corpusin addition hiyan alshawi david weir and steve young have helped clarify our thinking and made several suggestions that have influenced the way this research has developedalex lascarides and four anonymous reviewers comments on earlier drafts were very helpful to us in preparing the final versionall errors and mistakes remain our responsibility
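To make the integration of structurally defined word sense collocations discussed above more concrete, the following Python sketch scores a prepositional phrase attachment from counts of head-preposition co-occurrences, backing off to superordinate taxonomy classes when a lexical count is unseen. It is a minimal illustration under invented assumptions, not the system described in the paper: the counts, the class names, the smoothing constant alpha and the unseen-head floor are all hypothetical, and the score is a simple smoothed log ratio rather than the exact association measure of hindle and rooth.

import math
from collections import defaultdict

# hypothetical counts of (head, preposition) co-occurrences from analysed text
pair_counts = defaultdict(float, {
    ("saw", "on"): 12.0, ("man", "on"): 3.0,
    ("VERB_PERCEIVE", "on"): 40.0, ("NOUN_PERSON", "on"): 25.0,
})
head_counts = defaultdict(float, {
    "saw": 200.0, "man": 150.0, "VERB_PERCEIVE": 900.0, "NOUN_PERSON": 2000.0,
})
# hypothetical taxonomy mapping lexical heads to superordinate classes
superordinate = {"saw": "VERB_PERCEIVE", "man": "NOUN_PERSON"}

def p_prep_given_head(head, prep, alpha=0.5):
    """Smoothed estimate of P(prep | head), backing off to the head's class."""
    if head_counts[head] > 0 and pair_counts[(head, prep)] > 0:
        return (pair_counts[(head, prep)] + alpha) / (head_counts[head] + alpha)
    cls = superordinate.get(head)
    if cls is not None and head_counts[cls] > 0:
        return (pair_counts[(cls, prep)] + alpha) / (head_counts[cls] + alpha)
    return 1e-3  # ad hoc floor for heads unseen even at the class level

def attachment_score(verb, noun, prep):
    """Log ratio: positive favours verb (adverbial) attachment, negative the noun."""
    return math.log(p_prep_given_head(verb, prep) / p_prep_given_head(noun, prep))

# 'john saw the man on monday': compare attaching 'on' to 'saw' versus 'man'
print(round(attachment_score("saw", "man", "on"), 3))

The back-off step is where taxonomies extracted from dictionary definitions, as suggested above, could be substituted for the toy superordinate mapping.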
J93-1002
generalized probabilistic lr parsing of natural language with unificationbased grammarswe describe work toward the construction of a very widecoverage probabilistic parsing system for natural language based on lr parsing techniquesthe system is intended to rank the large number of syntactic analyses produced by nl grammars according to the frequency of occurrence of the individual rules deployed in each analysiswe discuss a fully automatic procedure for constructing an lr parse table from a unificationbased grammar formalism and consider the suitability of alternative lalr parse table construction methods for large grammarsthe parse table is used as the basis for two parsers a userdriven interactive system that provides a computationally tractable and laborefficient method of supervised training of the statistical information required to drive the probabilistic parserthe latter is constructed by associating probabilities with the lr parse table directlythis technique is superior to parsers based on probabilistic lexical tagging or probabilistic contextfree grammar because it allows for a more contextdependent probabilistic language model as well as use of a more linguistically adequate grammar formalismwe compare the performance of an optimized variant of tomita generalized lr parsing algorithm to a chart parserwe report promising results of a pilot study training on 150 noun definitions from the longman dictionary of contemporary english and retesting on these plus a further 55 definitionsfinally we discuss limitations of the current system and possible extensions to deal with lexical frequency of occurrenceour work on statistical parsing uses an adapted version of the system which is able to process tagged input ignoring the words in order to parse sequences of tagsour statistical parser is an extension of the anlt grammar development system
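The ranking idea summarised above, scoring each analysis by the frequency-derived probabilities of the rules used in its derivation and normalising so that longer derivations are not automatically penalised, can be sketched in a few lines of Python. This is a deliberately simplified illustration: the rule probabilities and the two toy derivations are invented, bare context-free rules stand in for the LR parse table actions of the full model, and a per-rule geometric mean stands in for the normalisation with respect to the longest derivation used by carroll and briscoe.

import math

# hypothetical rule probabilities, as would be estimated from supervised training
rule_prob = {
    "S -> NP VP": 0.9, "VP -> V NP": 0.5, "VP -> VP PP": 0.2,
    "NP -> NP PP": 0.15, "NP -> det N": 0.6, "PP -> P NP": 0.95,
}

def derivation_logprob(rules):
    """Sum of log rule probabilities over one derivation."""
    return sum(math.log(rule_prob[r]) for r in rules)

def normalised_score(rules):
    """Average log probability per rule, so longer derivations are not penalised."""
    return derivation_logprob(rules) / len(rules)

def rank(analyses):
    """Return the competing analyses sorted best-first."""
    return sorted(analyses, key=normalised_score, reverse=True)

# two competing derivations for a pp-attachment ambiguity
noun_attach = ["S -> NP VP", "VP -> V NP", "NP -> NP PP",
               "NP -> det N", "PP -> P NP", "NP -> det N"]
verb_attach = ["S -> NP VP", "VP -> VP PP", "VP -> V NP",
               "NP -> det N", "PP -> P NP", "NP -> det N"]
for analysis in rank([noun_attach, verb_attach]):
    print(round(normalised_score(analysis), 3), analysis[1])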
accurate methods for the statistics of surprise and coincidence much work has been done on the statistical analysis of text in some cases reported in the literature inappropriate statistical methods have been used and statistical significance of results have not been addressed in particular asymptotic normality assumptions have often been used unjustifiably leading to flawed results this assumption of normal distribution limits the ability to analyze rare events unfortunately rare events do make up a large fraction of real text however more applicable methods based on likelihood ratio tests are available that yield good results with relatively small samples these tests can be implemented efficiently and have been used for the detection of composite terms and for the determination of domainspecific terms in some cases these measures perform much better than the methods previously used in cases where traditional contingency table methods work well the likelihood ratio tests described here are nearly identical much work has been done on the statistical analysis of textin some cases reported in the literature inappropriate statistical methods have been used and statistical significance of results have not been addressedin particular asymptotic normality assumptions have often been used unjustifiably leading to flawed resultsthis assumption of normal distribution limits the ability to analyze rare eventsunfortunately rare events do make up a large fraction of real texthowever more applicable methods based on likelihood ratio tests are available that yield good results with relatively small samplesthese tests can be implemented efficiently and have been used for the detection of composite terms and for the determination of domainspecific termsin some cases these measures perform much better than the methods previously usedin cases where traditional contingency table methods work well the likelihood ratio tests described here are nearly identicalthis paper describes the basis of a measure based on likelihood ratios that can be applied to the analysis of textthere has been a recent trend back towards the statistical analysis of textthis trend has resulted in a number of researchers doing good work in information retrieval and natural language processing in generalunfortunately much of their work has been characterized by a cavalier approach to the statistical issues raised by the resultsthe approaches taken by such researchers can be divided into three rough categoriesthe first approach is the one taken by the ibm group researching statistical approaches to machine translation they have collected nearly one billion words of english text from such diverse sources as internal memos technical manuals and romance novels and have aligned most of the electronically available portion of the record of debate in the canadian parliament their efforts have been augean and they have been well rewarded by interesting resultsthe statistical significance of most of their work is above reproach but the required volumes of text are simply impractical in many settingsthe second approach is typified by much of the work of gale and church many of the results from their work are entirely usable and the measures they use work well for the examples given in their papersin general though their methods lead to problemsfor example mutual information estimates based directly on counts are subject to overestimation when the counts involved are small and zscores substantially overestimate the significance of rare eventsthe 
third approach is typified by virtually all of the informationretrieval literatureeven recent and very innovative work such as that using latent semantic indexing and pathfinder networks has not addressed the statistical reliability of the internal processingthey do however use good statistical methods to analyze the overall effectiveness of their approacheven such wellaccepted techniques as inverse document frequency weighting of terms in text retrieval is generally only justified on very sketchy groundsthe goal of this paper is to present a practical measure that is motivated by statistical considerations and that can be used in a number of settingsthis measure works reasonably well with both large and small text samples and allows direct comparison of the significance of rare and common phenomenathis comparison is possible because the measure described in this paper has better asymptotic behavior than more traditional measuresin the following some sections are composed largely of background material or mathematical details and can probably be skipped by the reader familiar with statistics or by the reader in a hurrythe sections that should not be skipped are marked with those with substantial background with and detailed derivations are unmarkedthis good parts convention should make this paper more useful to the implementer or reader only wishing to skim the paperthe assumption that simple functions of the random variables being sampled are distributed normally or approximately normally underlies many common statistical teststhis particularly includes pearson x2 test and zscore teststhis assumption is absolutely valid in many casesdue to the simplification of the methods involved it is entirely justifiable even in marginal caseswhen comparing the rates of occurrence of rare events the assumptions on which these tests are based break down because texts are composed largely of such rare eventsfor example simple word counts made on a moderatesized corpus show that words that have a frequency of less than one in 50000 words make up about 2030 of typical english language newswire reportsthis rare quarter of english includes many of the contentbearing words and nearly all the technical jargonas an illustration the following is a random selection of approximately 02 of the words found at least once but fewer than five times in a sample of a half million words of reuters reportsthe only word in this list that is in the least obscure is poi if we were to sample 50000 words instead of the half million used to create the list above then the expected number of occurrences of any of the words in this list would be less than one half well below the point where commonly used tests should be usedif such ordinary words are rare any statistical work with texts must deal with the reality of rare eventsit is interesting that while most of the words in running text are common ones most of the words in the total vocabulary are rareunfortunately the foundational assumption of most common statistical analyses used in computational linguistics is that the events being analyzed are relatively commonfor a sample of 50000 words from the reuters corpus mentioned previously none of the words in the table above is common enough to expect such analyses to work wellin text analysis the statistically based measures that have been used have usually been based on test statistics that are useful because given certain assumptions they have a known distributionthis distribution is most commonly either the normal or x2 
distributionthese measures are very useful and can be used to accurately assess significance in a number of different settingsthey are based however on several assumptions that do not hold for most textual analysesthe details of how and why the assumptions behind these measures do not hold is of interest primarily to the statistician but the result is of interest to the statistical consumer more applicable techniques are important in textual analysisthe next section describes one such technique implementation of this technique is described in later sectionsbinomial distributions arise commonly in statistical analysis when the data to be analyzed are derived by counting the number of positive outcomes of repeated identical and independent experimentsflipping a coin is the prototypical experiment of this sortthe task of counting words can be cast into the form of a repeated sequence of such binary trials comparing each word in a text with the word being countedthese comparisons can be viewed as a sequence of binary experiments similar to coin flippingin text each comparison is clearly not independent of all others but the dependency falls off rapidly with distanceanother assumption that works relatively well in practice is that the probability of seeing a particular word does not varyof course this is not really true since changes in topic may cause this frequency to varyindeed it is the mild failure of this assumption that makes shallow information retrieval techniques possibleto the extent that these assumptions of independence and stationarity are valid we can switch to an abstract discourse concerning bernoulli trials instead of words in text and a number of standard results can be useda bernoulli trial is the statistical idealization of a coin flip in which there is a fixed probability of a successful outcome that does not vary from flip to flipin particular if the actual probability that the next word matches a prototype is p then the number of matches generated in the next n words is a random variable with binomial distribution n k whose mean is np and whose variance is npif np 5 then the distribution of this variable will be approximately normal and as np increases beyond that point the distribution becomes more and more like a normal distributionthis can be seen in figure 1 above where the binomial distribution is plotted along with the approximating normal distributions for np set to 5 10 and 20 with n fixed at 100larger values of n with np held constant give curves that are not visibly different from those shownfor these cases npr npthis agreement between the binomial and normal distributions is exactly what makes test statistics based on assumptions of normality so useful in the analysis of experiments based on countingin the case of the binomial distribution normality assumptions are generally considered to hold well enough when np 5the situation is different when np is less than 5 and is dramatically different when np is less than 1first it makes much less sense to approximate a discrete distribution such as the binomial with a continuous distribution such as the normalsecond the probabilities computed using the normal approximation are less and less accuratetable 1 shows the probability that one or more matches are found in 100 words of text as computed using the binomial and normal distributions for np 0001 np 001 np 01 and np 1 where n 100most words are sufficiently rare so that even for samples of text where n is as large as several thousand np will be at the bottom of this 
rangeshort phrases are so numerous that np 1 is a common problemthere is another class of tests that do not depend so critically on assumptions of normalityinstead they use the asymptotic distribution of the generalized likelihood ratiofor text analysis and similar problems the use of likelihood ratios leads to very much improved statistical resultsthe practical effect of this improvement is that statistical textual analysis can be done effectively with very much smaller volumes of text than is necessary for conventional tests based on assumed normal distributions and it allows comparisons to be made between the significance of the occurrences of both rare and common phenomenonlikelihood ratio tests are based on the idea that statistical hypotheses can be said to specify subspaces of the space described by the unknown parameters of the statistical model being usedthese tests assume that the model is known but that the parameters of the model are unknownsuch a test is called parametricother tests are available that make no assumptions about the underlying model at all they are called distributionfreeonly one particular parametric test is described heremore information on parametric and distributionfree tests is available in bradley and mood graybill and boes the probability that a given experimental outcome described by k1kn will be observed for a given model described by a number of parameters p1132 is called the likelihood function for the model and is written as where all arguments of h left of the semicolon are model parameters and all arguments right of the semicolon are observed valuesin the continuous case the probability is replaced by a probability densitywith binomial and multinomials we only deal with the discrete casefor repeated bernoulli trials m 2 because we observe both the number of trials and the number of positive outcomes and there is only one p the explicit form for the likelihood function is the parameter space is the set of all values for p and the hypothesis that p po is a single pointfor notational brevity the model parameters can be collected into a single parameter as can the observed valuesthen the likelihood function is written as where w is considered to be a point in the parameter space q and k a point in the space of observations k particular hypotheses or observations are represented by subscripting or k respectivelymore information about likelihood ratio tests can be found in texts on theoretical statistics the likelihood ratio for a hypothesis is the ratio of the maximum value of the likelihood function over the subspace represented by the hypothesis to the maximum value of the likelihood function over the entire parameter spacethat is where q is the entire parameter space and q0 is the particular hypothesis being testedthe particularly important feature of likelihood ratios is that the quantity 2 log a is asymptotically x2 distributed with degrees of freedom equal to the difference in dimension between q and q0importantly this asymptote is approached very quickly in the case of binomial and multinomial distributionsthe comparison of two binomial or multinomial processes can be done rather easily using likelihood ratiosin the case of two binomial distributions the hypothesis that the two distributions have the same underlying parameter is represented by the set i pi p2the likelihood ratio for this test is where taking the logarithm of the likelihood ratio gives 21og a 2 log l log up2 k2 n2 log l log l for the multinomial case it is convenient to use the 
double subscripts and the abbreviations this expression implicitly involves n because e119 n maximizing and taking the logarithm 21og a 2 log l log l log l log l 1 where if the null hypothesis holds then the loglikelihood ratio is asymptotically x2 distributed with k2 1 degrees of freedomwhen j is 2 2 log a will be x2 distributed with one degree of freedomif we had initially approximated the binomial distribution with a normal distribution with mean np and variance np then we would have arrived at another form that is a good approximation of 2 log a when np is more than roughly 5this form is where 21og a 2 as in the multinomial case above and interestingly this expression is exactly the test statistic for pearson x2 test although the form shown is not quite the customary onefigure 2 shows the reasonably good agreement between this expression and the exact binomial loglikelihood ratio derived earlier where p 01 and n1 n2 1000 for various values of ki and k2figure 3 on the other hand shows the divergence between pearson statistic and the loglikelihood ratio when p 001 n1 100 and n2 10000note the large change of scale on the vertical axisthe pronounced disparity occurs when ki is larger than the value expected based on the observed value of k2the case where n1 1 2 n 2 is exactly the case of most interest in many text analysestile convergence of the log of the likelihood ratio to the asymptotic distribution is demonstrated dramatically in figure 4in this figure the straighter line was computed using a symbolic algebra package and represents the idealized one degree of freedom cumulative x2 distributionthe rougher curve was computed by a numerical experiment in which p 001 n1 100 and n2 10000 which corresponds to the situation in figure 3the close agreement shows that the likelihood ratio measure produces accurate results over six decades of significance even in the range where the normal x2 measure diverges radically from the idealto test the efficacy of the likelihood methods an analysis was made of a 30000word sample of text obtained from the union bank of switzerland with the intention of finding pairs of words that occurred next to each other with a significantly higher frequency than would be expected based on the word frequencies alonethe text was 31777 words of financial text largely describing market conditions for 1986 and 1987the results of such a bigram analysis should highlight collocations common in english as well as collocations peculiar to the financial nature of the analyzed textas will be seen the ranking based on likelihood ratio tests does exactly thissimilar comparisons made between a large corpus of general text and a domainspecific text can be used to produce lists consisting only of words and bigrams characteristic of the domainspecific textsthis comparison was done by creating a contingency table that contained the following counts of each bigram that appeared in the text where the a b represents the bigram in which the first word is not word a and the second is word bif the words a and b occur independently then we would expect p pp where p is the probability of a and b occurring in sequence p is the probability of a appearing in the first position and p is the probability of b appearing in the second positionwe can cast this into the mold of our earlier binomial analysis by phrasing the null hypothesis that a and b are independent as p p pthis means that testing for the independence of a and b can be done by testing to see if the distribution of a given that b is 
present is the same as the distribution of a given that b is not present in fact of course we are not really doing a statistical test to see if a and b are independent we know that they are generally not independent in textinstead we just want to use the test statistic as a measure that will help highlight particular as and bs that are highly associated in textthese counts were analyzed using the test for binomials described earlier and the 50 most significant are tabulated in table 2this table contains the most significant 200 bigrams and is reverse sorted by the first column which contains the quantity 2 log aother columns contain the four counts from the contingency table described above and the bigram itselfexamination of the table shows that there is good correlation with intuitive feelings about how natural the bigrams in the table actually arethis is in distinct contrast with table 3 which contains the same data except that the first column is computed using pearson x2 test statisticthe overestimate of the significance of items that occur only a few times is dramaticin fact the entire first portion of the table is dominated by bigrams rare enough to occur only once in the current sample of textthe misspelling in the bigram ees posibilities is in the original textout of 2693 bigrams analyzed 2682 of them fall outside the scope of applicability of the normal x2 testthe 11 bigrams that were suitable for analysis with the x2 test are listed in table 4it is notable that all of these bigrams contain the word the which is the most common word in englishstatistics based on the assumption of normal distribution are invalid in most cases of statistical text analysis unless either enormous corpora are used or the analysis is restricted to only the very most common words this fact is typically ignored in much of the work in this fieldusing such invalid methods may seriously overestimate the significance of relatively rare eventsparametric statistical analysis based on the binomial or multinomial distribution extends the applicability of statistical methods to much smaller texts than models using normal distributions and shows good promise in early applications of the methodfurther work is needed to develop software tools to allow the straightforward analysis of texts using these methodssome of these tools have been developed and will be distributed by the consortium for lexical researchfor further information on this software contact the author or the consortium via email at tednmsuedu or lexicalnmsueduin addition there are a wide variety of distribution free methods that may avoid even the assumption that text can be modeled by multinomial distributionsmeasures based on fischer exact method may prove even more satisfactory than the likelihood ratio measures described in this paperalso using the poisson distribution instead of the multinomial as the limiting distribution for the distribution of counts may provide some benefitsall of these possibilities should be testedfor the binomial case the log likelihood statistic is given by 21og a 2 log l log l log l log l where for the multinomial case this statistic becomes 2 log 2 log l log l log l where kji ei kii ei kii eii kii ki log pi
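For reference, the binomial form of the log-likelihood-ratio statistic developed above can be written out as follows. This is an editorial restatement in LaTeX assembled from the definitions given in the text (the likelihood function, the maximum-likelihood estimates p1 and p2, and the pooled estimate p), not a verbatim copy of the original typeset equations.

% binomial likelihood and maximum-likelihood estimates (notation as in the text)
L(p; k, n) = p^{k}(1 - p)^{n - k}, \qquad
p_1 = \frac{k_1}{n_1}, \quad p_2 = \frac{k_2}{n_2}, \quad
p = \frac{k_1 + k_2}{n_1 + n_2}

% log-likelihood-ratio statistic for the hypothesis p_1 = p_2
-2 \log \lambda = 2 \bigl[ \log L(p_1; k_1, n_1) + \log L(p_2; k_2, n_2)
                - \log L(p; k_1, n_1) - \log L(p; k_2, n_2) \bigr]

Under the null hypothesis this quantity is asymptotically chi-square distributed with one degree of freedom, and the underlying binomial count has mean np and variance np(1 - p).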
J93-1003
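The breakdown of the normal approximation for small np that is illustrated above can also be reproduced numerically. The short Python sketch below compares the exact binomial value of P(at least one match in n = 100 words) with a normal approximation using mean np and variance np(1 - p); the np values are taken from the range discussed in the text, and everything else is an illustrative assumption.

import math

def binomial_tail_ge1(n, p):
    """Exact P(X >= 1) = 1 - (1 - p)**n for a binomial(n, p) count."""
    return 1.0 - (1.0 - p) ** n

def normal_tail_ge1(n, p):
    """Normal approximation to P(X >= 1) using mean np and variance np(1 - p)."""
    mu = n * p
    sigma = math.sqrt(n * p * (1.0 - p))
    z = (1.0 - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # upper-tail probability

n = 100
for np_ in (0.001, 0.01, 0.1, 1.0):
    p = np_ / n
    print(np_, round(binomial_tail_ge1(n, p), 6), round(normal_tail_ge1(n, p), 6))

When np is well below one the normal model makes even a single occurrence look essentially impossible, which is exactly the overestimation of the significance of rare events that the passage warns about.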
accurate methods for the statistics of surprise and coincidencemuch work has been done on the statistical analysis of text in some cases reported in the literature inappropriate statistical methods have been used and statistical significance of results have not been addressedin particular asymptotic normality assumptions have often been used unjustifiably leading to flawed resultsthis assumption of normal distribution limits the ability to analyze rare eventsunfortunately rare events do make up a large fraction of real texthowever more applicable methods based on likelihood ratio tests are available that yield good results with relatively small samples these tests can be implemented efficiently and have been used for the detection of composite terms and for the determination of domainspecific termsin some cases these measures perform much better than the methods previously usedin cases where traditional contingency table methods work well the likelihood ratio tests described here are nearly identicalthis paper describes the basis of a measure based on likelihood ratios that can be applied to the analysis of textsince it was first introduced to the nlp community by us the g loglikelihoodratio statistic has been widely used in statistical nlp as a measure of strength of association particularly lexical associations
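A minimal Python sketch of the bigram analysis described above: for a word pair (a, b) the four contingency counts are collected and compared with the binomial log-likelihood-ratio statistic. The helper names, the small clamp inside log_l that keeps estimates away from 0 and 1, and the example counts are assumptions made for the illustration; only the form of the statistic follows the text.

import math

def log_l(p, k, n):
    """Log of the binomial likelihood p**k * (1 - p)**(n - k), with p clamped
    away from 0 and 1 so that k = 0 or k = n does not produce log(0)."""
    p = min(max(p, 1e-12), 1.0 - 1e-12)
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def llr_bigram(k11, k12, k21, k22):
    """-2 log lambda for a 2x2 bigram table: k11 = c(a b), k12 = c(a ~b),
    k21 = c(~a b), k22 = c(~a ~b)."""
    k1, n1 = k11, k11 + k12          # b following a
    k2, n2 = k21, k21 + k22          # b following something other than a
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    return 2.0 * (log_l(p1, k1, n1) + log_l(p2, k2, n2)
                  - log_l(p, k1, n1) - log_l(p, k2, n2))

# invented counts for one bigram in a 30,000-word sample
print(round(llr_bigram(20, 180, 40, 29760), 2))

Because the statistic is asymptotically chi-square distributed with one degree of freedom, a single threshold can be used to compare the strength of association of rare and common bigrams, which is the property emphasised above.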
a program for aligning sentences in bilingual corpora researchers in both machine translation and bilingual lexicography have recently become interested in studying bilingual corpora bodies of text such as the canadian hansards which are available in multiple languages one useful step is to align the sentences that is to identify correspondences between sentences in one language and sentences in the other language this paper will describe a method and a program for aligning sentences based on a simple statistical model of character lengths the program uses the fact that longer sentences in one language tend to be translated into longer sentences in the other language and that shorter sentences tend to be translated into shorter sentences a probabilistic score is assigned to each proposed correspondence of sentences based on the scaled difference of lengths of the two sentences and the variance of this difference this probabilistic score is used in a dynamic programming framework to find the maximum likelihood alignment of sentences it is remarkable that such a simple approach works as well as it does an evaluation was performed based on a trilingual corpus of economic reports issued by the union bank of switzerland in english french and german the method correctly aligned all but 4 of the sentences moreover it is possible to extract a large subcorpus that has a much smaller error rate by selecting the bestscoring 80 of the alignments the error rate is reduced from 4 to 07 there were more errors on the englishfrench subcorpus than on the englishgerman subcorpus showing that error rates will depend on the corpus considered however both were small enough to hope that the method will be useful for many language pairs to further research on bilingual corpora a much larger sample of canadian hansards has been aligned with the and will be available through the data collection initiative of the association for computational linguistics in addition in order to facilitate replication of the an appendix is provided with detailed ccode of the more difficult core of the program researchers in both machine translation and bilingual lexicography have recently become interested in studying bilingual corpora bodies of text such as the canadian hansards which are available in multiple languages one useful step is to align the sentences that is to identify correspondences between sentences in one language and sentences in the other languagethis paper will describe a method and a program for aligning sentences based on a simple statistical model of character lengthsthe program uses the fact that longer sentences in one language tend to be translated into longer sentences in the other language and that shorter sentences tend to be translated into shorter sentencesa probabilistic score is assigned to each proposed correspondence of sentences based on the scaled difference of lengths of the two sentences and the variance of this differencethis probabilistic score is used in a dynamic programming framework to find the maximum likelihood alignment of sentencesit is remarkable that such a simple approach works as well as it doesan evaluation was performed based on a trilingual corpus of economic reports issued by the union bank of switzerland in english french and germanthe method correctly aligned all but 4 of the sentencesmoreover it is possible to extract a large subcorpus that has a much smaller error rateby selecting the bestscoring 80 of the alignments the error rate is reduced from 4 to 07there were more 
errors on the englishfrench subcorpus than on the englishgerman subcorpus showing that error rates will depend on the corpus considered however both were small enough to hope that the method will be useful for many language pairsto further research on bilingual corpora a much larger sample of canadian hansards has been aligned with the align program and will be available through the data collection initiative of the association for computational linguistics in addition in order to facilitate replication of the align program an appendix is provided with detailed ccode of the more difficult core of the align programresearchers in both machine translation and bilingual lexicography have recently become interested in studying bilingual corpora bodies of text such as the canadian hansards which are available in multiple languages the sentence alignment task is to identify correspondences between sentences in one input to alignment programenglish french according to our survey 1988 sales of mineral water and soft drinks were much higher than in 1987 reflecting the growing popularity of these productscola drink manufacturers in particular achieved aboveaverage growth ratesthe higher turnover was largely due to an increase in the sales volumeemployment and investment levels also climbedfollowing a twoyear transitional period the new foodstuffs ordinance for mineral water came into effect on april 1 1988specifically it contains more stringent requirements regarding quality consistency and purity guaranteesquant aux eaux minerales et aux limonades elles rencontrent toujours plus dadeptesen effet notre sondage fait ressortir des ventes nettement superieures a celles de 1987 pour les boissons a base de cola notammentla progression des chiffres daffaires resulte en grande partie de laccroissement du volume des venteslemploi et les investissements ont egalement augment la nouvelle ordonnance federale sur les denrees alimentaires concernant entre autres les eaux minerales entrée en vigueur le ler avril 1988 apres une periode transitoire de deux ans exige surtout une plus grande constance dans la qualite et une garantie de la purete language and sentences in the other languagethis task is a first step toward the more ambitious task finding correspondences among words1 the input is a pair of texts such as table 1the output identifies the alignment between sentencesmost english sentences match exactly one french sentence but it is possible for an english sentence to match two or more french sentencesthe first two english sentences in table 2 illustrate a particularly hard case where two english sentences align to two french sentencesno smaller alignments are possible because the clause quot sales were higher quot in the first english sentence corresponds to the second french sentencethe next two alignments below illustrate the more typical case where one english sentence aligns with exactly one french sentencethe final alignment matches two english sentences to a single french sentencethese alignments agreed with the results produced by a human judgealigning sentences is just a first step toward constructing a probabilistic dictionary for use in aligning words in machine translation or for constructing a bilingual concordance for use in lexicography although there has been some previous work on the sentence alignment for example was written more than two years ago and is still unpublishedsimilarly the ibm work is also several years old but not the le 0610 the la 0178 the l 0083 the les 0023 the ce 0013 the 
il 0012 the de 0009 the a 0007 the que 0007 very well documented in the published literature consequently there has been a lot of unnecessary subsequent work at issco and elsewhere2 the method we describe has the same sentencelength basis as does that of brown lai and mercer while the two differ considerably from the lexical approaches tried by kay and roscheisen and by catizone russell and warwickthe feasibility of other methods has varied greatlykay approach is apparently quite slowat least with the currently inefficient implementation it might take hours 2 after we finished most of this work it came to our attention that the ibm mt group has at least four papers that mention sentence alignment start from a set of aligned sentences suggesting that they had a solution to the sentence alignment problem back in 1988brown et al mention that sentence lengths formed the basis of their methodthe draft by brown lai and mercer describes their process without giving equationsaccording to our survey 1988 sales of mineral water and soft drinks were much higher than in 1987 reflecting the growing popularity of these productscola drink manufacturers in particular achieved aboveaverage growth ratesthe higher turnover was largely due to an increase in the sales volumequant aux eaux minerales et aux limonades elles rencontrent toujours plus dadeptesen effet notre sondage fait ressortir des ventes nettement superieures a celles de 1987 pour les boissons a base de cola notammentla progression des chiffres daffaires resulte en grande partie de laccroissement du volume des ventesemployment and investment levels also lemploi et les investissements ont egaleclimbed ment augmentefollowing a twoyear transitional period the new foodstuffs ordinance for mineral water came into effect on april 1 1988specifically it contains more stringent requirements regarding quality consistency and purity guaranteesla nouvelle ordonnance federale sur les denrees alimentaires concernant entre autres les eaux minerales entrée en vigueur le ler avril 1988 apres une periode transitoire de deux ans exige surtout une plus grande constance dans la qualite et une garantie de la puretea bilingual concordance bankbanque it could also be a place where we would have a ftre le lieu oii se retrouverait une espece de f finance and the governor of the es finances et le gouverneur de la reduced by over 800 per cent in one week through us de 800 p 100 en une semaine a because dune bank of expertssent i know several people who a banque d expertssent je connais plusieurs pers bank of canada have frequently on behalf of the ca banque du canada ont frequemment utilise au co bank actionsent there was a haberdasher who wou banquesent voila un chemisier qui aurait appr bankbanc h a forumsent such was the case in the georges entre les etatsunis et le canada a propos du han i didsent he said the nose and tail of the gouvernement avait cede les extremites du he fishing privileges on the nose and tail of the les privileges de peche aux extremites du bank issue which was settled between canada and th banc de georgesent cest dans le but de re bank were surrendered by this governmentsent th bancsent en fait lors des negociations de 1 bank went down the tube before we even negotiated banc ont ete liquid avant meme qu on ai to align a single scientific american article it ought to be possible to achieve fairly reasonable results with much less computationthe ibm algorithm is much more efficient since they were able to extract nearly 3 million pairs of sentences 
from hansard materials in 10 days of running time on an ibm model 3090 mainframe computer with access to 16 megabytes of virtual memory the evaluation of results has been absent or rudimentarykay gives positive examples of the alignment process but no counts of error ratesbrown lai and mercer report that they achieve a 06 error rate when the algorithm suggests aligning one sentence with one sentencehowever they do not characterize its performance overall or on the more difficult casessince the research community has not had access to a practical sentence alignment program we thought that it would be helpful to describe such a program and to evaluate its resultsin addition a large sample of canadian hansards has been aligned with the align program and has been made available to the general research community through the data collection initiative of the association for computational linguistics in order to facilitate replication of the align program an appendix is provided with detailed ccode of the more difficult core of the align programthe align program is based on a very simple statistical model of character lengthsthe model makes use of the fact that longer sentences in one language tend to be translated into longer sentences in the other language and that shorter sentences tend to be translated into shorter sentencesa probabilistic score is assigned to each pair of proposed sentence pairs based on the ratio of lengths of the two sentences and the variance of this ratiothis probabilistic score is used in a dynamic programming framework in order to find the maximum likelihood alignment of sentencesit is remarkable that such a simple approach can work as well as it doesan evaluation was performed based on a trilingual corpus of 15 economic reports issued by the union bank of switzerland in english french and german the method correctly aligned all but 4 of the sentencesmoreover it is possible to extract a large subcorpus that has a much smaller error rateby selecting the bestscoring 80 of the alignments the error rate is reduced from 4 to 07there were more errors on the englishfrench subcorpus than on the englishgerman subcorpus showing that error rates will depend on the corpus considered however both were small enough for us to hope that the method will be useful for many language pairswe believe that the error rate is considerably lower in the canadian hansards because the translations are more literalthe sentence alignment program is a twostep processfirst paragraphs are aligned and then sentences within a paragraph are alignedit is fairly easy to align paragraphs in our trilingual corpus of swiss banking reports since the boundaries are usually clearly markedhowever there are some short headings and signatures that can be confused with paragraphsmoreover these short quotpseudoparagraphsquot are not always translated into all languageson a corpus this small the paragraphs could have been aligned by handit turns out that quotpseudoparagraphsquot usually have fewer than 50 characters and that real paragraphs usually have more than 100 characterswe used this fact to align the paragraphs automatically checking the result by handthe procedure correctly aligned all of the english and german paragraphshowever one of the french documents was badly translated and could not be aligned because of the omission of one long paragraph and the duplication of a short onethis document was excluded for the purposes of the remainder of this experimentwe will show below that paragraph alignment is an 
important step so it is fortunate that it is not particularly difficultin aligning the hansards we found that paragraphs were often already alignedfor robustness we decided to align paragraphs within certain fairly reliable regions using the same method as that described below for aligning sentences within each paragraphnow let us consider how sentences can be aligned within a paragraphthe program makes use of the fact that longer sentences in one language tend to be translated into longer sentences in the other language and that shorter sentences tend to be translated into shorter sentences3 a probabilistic score is assigned to each proposed pair of sentences based on the ratio of lengths of the two sentences and the variance of this ratiothis probabilistic score is used in a dynamic programming framework in order to find the maximum likelihood alignment of sentencesthe fol3 we will have little to say about how sentence boundaries are identifiedidentifying sentence boundaries is not always as easy as it might appear for reasons described in liberman and church it would be much easier if periods were always used to mark sentence boundaries but unfortunately many periods have other purposesin the brown corpus for example only 90 of the periods are used to mark sentence boundaries the remaining 10 appear in numerical expressions abbreviations and so forthin the wall street journal there is even more discussion of dollar amounts and percentages as well as more use of abbreviated titles such as mr consequently only 53 of the periods in the wall street journal are used to identify sentence boundariesfor the ubs data a simple set of heuristics were used to identify sentences boundariesthe dataset was sufficiently small that it was possible to correct the remaining mistakes by handfor a larger dataset such as the canadian hansards it was not possible to check the results by handwe used the same procedure that is used in church this procedure was developed by kathryn baker paragraph lengths are highly correlatedthe horizontal axis shows the length of english paragraphs while the vertical scale shows the lengths of the corresponding german paragraphsnote that the correlation is quite large lowing striking figure could easily lead one to this approachfigure 1 shows that the lengths of english and german paragraphs are highly correlated dynamic programming is often used to align two sequences of symbols in a variety of settings such as genetic code sequences from different species speech sequences from different speakers gas chromatograph sequences from different compounds and geologic sequences from different locations we could expect these matching techniques to be useful as long as the order of the sentences does not differ too radically between the two languagesdetails of the alignment techniques differ considerably from one application to another but all use a distance measure to compare two individual elements within the sequences and a dynamic programming algorithm to minimize the total distances between aligned elements within two sequenceswe have found that the sentence alignment problem fits fairly well into this framework though it is necessary to introduce a fairly interesting innovation into the structure of the distance measurekruskal and liberman describe distance measures as belonging to one of two classes trace and timewarpthe difference becomes important when a single element of one sequence is being matched with multiple elements from the otherin trace applications such as genetic 
code matching the single element is matched with just one of the multiple elements and all of the others will be ignoredin contrast in timewarp applications such as speech template matching the single element is matched with each of the multiple elements and the single element will be used in multiple matchesinterestingly enough our application does not fit into either of kruskal and liberman classes because our distance measure needs to compare the single element with an aggregate of the multiple elementsit is convenient for the distance measure to be based on a probabilistic model so that information can be combined in a consistent wayour distance measure is an estimate of log prob where 6 depends on h and 2 the lengths of the two portions of text under considerationthe log is introduced here so that adding distances will produce desirable resultsthis distance measure is based on the assumption that each character in one language l1 gives rise to a random number of characters in the other language l2we assume these random variables are independent and identically distributed with a normal distributionthe model is then specified by the mean c and variance s2 of this distribution c is the expected number of characters in l2 per character in l1 and s2 is the variance of the number of characters in l2 per character in l1we define 6 to be vi s2 so that it has a normal distribution with mean zero and variance one figure 2 is a check on the assumption that 6 is normally distributedthe figure is constructed using the parameters c and s2 estimated for the programvariance is modeled proportional to lengththe horizontal axis plots the length of english paragraphs while the vertical axis shows the square of the difference of english and german lengths an estimate of variancethe plot indicates that variance increases with length as predicted by the modelthe line shows the result of a robust regression analysisfive extreme points lying above the top of this figure have been suppressed since they did not contribute to the robust regressionthe parameters c and s2 are determined empirically from the ubs datawe could estimate c by counting the number of characters in german paragraphs then dividing by the number of characters in corresponding english paragraphswe obtain 8110573481 11the same calculation on french and english paragraphs yields c 7230268450 106 as the expected number of french characters per english characteras will be explained later performance does not seem to be very sensitive to these precise languagedependent quantities and therefore we simply assume the languageindependent value c 1 which simplifies the program considerablythis value would clearly be inappropriate for englishchinese alignment but it seems likely to be useful for most pairs of european languages s2 is estimated from figure 3the model assumes that s2 is proportional to lengththe constant of proportionality is determined by the slope of the robust regression line shown in the figurethe result for englishgerman is s2 73 and for english french is s2 56again we will see that the difference in the two slopes is not too importanttherefore we can combine the data across languages and adopt the simpler languageindependent estimate s2 68 which is what is actually used in the programwe now appeal to bayes theorem to estimate prob as a constant times prob probthe constant can be ignored since it will be the same for where prob is the probability that a random variable z with a standardized normal distribution has magnitude at 
least as large as 161that is the program computes 6 directly from the lengths of the two portions of text 1 and 2 and the two parameters c and s2that is 6 vlis2then prob is computed by integrating a standard normal distribution many statistics textbooks include a table for computing thisthe code in the appendix uses the pnorm function which is based on an approximation described by abramowitz and stegun the prior probability of a match prob is fit with the values in table 5 which were determined from the handmarked ubs datawe have found that a sentence in one language normally matches exactly one sentence in the other language three additional possibilities are also considered 10 21 and 22table 5 shows all four possibilitiesthis completes the discussion of the distance measureprob is computed as an constant times probprobprob is computed using the values in table 5prob is computed by assuming that prob 2 where prob has a standard normal distributionwe first calculate 6 as vis2 and then prob is computed by integrating a standard normal distributionsee the cfunction two_side_distance in the appendix for an example of a ccode implementation of these calculationsthe distance function d represented in the program as two_side_clistance is defined in a general way to allow for insertions deletion substitution etcthe function takes four arguments xl yi x2 y2the algorithm is summarized in the following recursion equationlet si i 1 i be the sentences of one language and ti j 1 j be the translations of those sentences in the other languagelet d be the distance function described in the previous section and let d be the minimum distance between sentences s1 si and their translations ti under the maximum likelihood alignmentd is computed by minimizing over six cases which in effect impose a set of slope constraintsthat is d is defined by the following recurrence with the initial condition d 0to evaluate align its results were compared with a human alignmentall of the ubs sentences were aligned by a primary judge a native speaker of english with a reading knowledge of french and germantwo additional judges a native speaker of french and a native speaker of german respectively were used to check the primary judge on 43 of the more difficult paragraphs having 230 sentences both of the additional judges were also fluent in english having spent the last few years living and working in the united states though they were both more comfortable with their native language than with englishthe materials were prepared in order to make the task somewhat less tedious for the judgeseach paragraph was printed in three columns one for each of the three languages english french and germanblank lines were inserted between sentencesthe judges were asked to draw lines between matching sentencesthe judges were also permitted to draw a line between a sentence and quotnullquot if they thought that the sentence was not translatedfor the purposes of this evaluation two sentences were defined to quotmatchquot if they shared a common clauseafter checking the primary judge with the other two judges it was decided that the primary judge results were sufficiently reliable that they could be used as a standard for evaluating the programthe primary judge made only two mistakes on the 43 hard paragraphs whereas the program made 44 errors on the same materialssince the primary judge error rate is so much lower than that of the program it was decided that we need not be concerned with the primary judge error rateif the program and the 
judge disagree we can assume that the program is probably wrongthe 43 quothardquot paragraphs were selected by looking for sentences that mapped to something other than themselves after going through both german and frenchspecifically for each english sentence we attempted to find the corresponding german sentences and then for each of them we attempted to find the corresponding french sentences and then we attempted to find the corresponding english sentences which should hopefully get us back to where we startedthe 43 paragraphs included all sentences in which this process could not be completed around the loopthis relatively small group of paragraphs contained a relatively large fraction of the program errors thus there seems to be some verification that this trilingual criterion does in fact succeed in distinguishing more difficult paragraphs from less difficult onesthere are three pairs of languages englishgerman englishfrench and frenchgermanwe will report on just the first twoerrors are reported with respect to the judge responsesthat is for each of the quotmatchesquot that the primary judge found we report the program as correct if it found the quotmatchquot and incorrect if it did notthis procedure is better than comparing on the basis of alignments proposed by the algorithm for two reasonsfirst it makes the trial quotblindquot that is the judge does not know the algorithm result when judgingsecond it allows comparison of results for different algorithms on a common basisthe program made 36 errors out of 621 total alignments for englishfrench and 19 errors out of 695 alignments for englishgermanoverall there were 55 errors out of a total of 1316 alignments the higher error rate for englishfrench alignments may result from the german being the original so that the english and german differ by one translation while the english and french differ by two translationstable 6 breaks down the errors by category illustrating that complex matches are more difficult11 alignments are by far the easiestthe 21 alignments which come next have four times the error rate for 11the 22 alignments are harder still but a majority of the alignments are foundthe 31 and 32 alignments are not even considered by the algorithm so naturally all three instances of these are counted as errorsthe most embarrassing category is 10 which was never handled correctlyin addition when the algorithm assigns a sentence to the 10 category it is also always wrongclearly more work is needed to deal with the 10 categoryit may be necessary to consider languagespecific methods in order to deal adequately with this casesince the algorithm achieves substantially better performance on the 11 regions one interpretation of these results is that the overall low error rate is due to the high frequency of 11 alignments in englishfrench and englishgerman translationstranslations to linguistically more different languages such as hebrew or japanese might encounter a higher proportion of hard matcheswe investigated the possible dependence of the error rate on four variables we used logistic regression to see how well each of the four variables predicted the errorsthe coefficients and their standard deviations are shown in table 7apparently the distance measure is the most useful predictor as indicated by the last columnin fact none of the other three factors was found to contribute significantly beyond the effect of the distance measure indicating that the distance measure is already doing an excellent job and we should not expect much 
improvement if we were to try to augment the measure to take these additional factors into accountthe fact that the score is such a good predictor of performance can be used to extract a large subcorpus that has a much smaller error rateby selecting the best scoring 80 of the alignments the error rate can be reduced from 4 to 07in general we can trade off the size of the subcorpus and the accuracy by setting a threshold and rejecting alignments with a score above this thresholdfigure 4 examines this tradeoff in more detailless formal tests of the error rate in the hansards suggest that the overall error rate is about 2 while the error rate for the easy 80 of the sentences is about 04apparently the hansard translations are more literal than the ubs reportsit took 20 hours of real time on a sun 4 to align 367 days of hansards or 33 minutes per hansarddaythe 367 days of hansards contained about 890000 sentences or about 37 million quotwordsquot about half of the computer time is spent identifying tokens sentences and paragraphs and about half of the time is spent in the align program itselfthe overall error 42 that we get on the ubs corpus is considerably higher than the 06 error reported by brown lai and mercer however a direct comparison is misleading because of the differences in corpora and the differences in samplingwe have observed that the hansards are much easier than the ubsour error rate drops by about 50 in that casealigning the ubs french and english texts is more difficult than aligning the english and german because the french and english extracting a subcorpus with lower error ratethe fact that the score is such a good predictor of performance can be used to extract a large subcorpus that has a much smaller error ratein general we can trade off the size of the subcorpus and the accuracy by setting a threshold and rejecting alignments with a score above this thresholdthe horizontal axis shows the size of the subcorpus and the vertical axis shows the corresponding error ratean error rate of about 23 can be obtained by selecting a threshold that would retain approximately 80 of the corpus versions are separated by two translations both being translations of the german originalin addition ibm samples only the 11 alignments which are much easier than any other category as one can see from table 6given these differences in testing methodology as well as the differences in the algorithms we find the methods giving broadly similar resultsboth methods give results with sufficient accuracy to use the resulting alignments or selected portions thereof for acquisition of lexical informationand neither method achieves human accuracy on the taskwe conclude that a sentence alignment method that achieves human accuracy will need to have lexical information available to itit is interesting to consider what happens if we change our definition of length to count words rather than charactersit might seem that a word is a more natural linguistic unit than a characterhowever we have found that words do not perform as well as charactersin fact the quotwordsquot variation increases the number of errors dramatically the total errors were thereby increased from 55 to 85 or from 42 to 65we believe that characters are better because there are more of them and therefore there is less uncertaintyon the average there are 117 characters per sentence and only 17 words per sentencerecall that we have modeled variance as proportional to sentence length v s2using the character data we found previously that s2 
65the same argument applied to words yields s2 19for comparison sake it is useful to consider the ratio of vvm where m is the mean sentence lengthwe obtain vvm ratios of 022 for characters and 033 for words indicating that characters are less noisy than words and are therefore more suitable for use in alignalthough brown lai and mercer used lengths measured in words comparisons of error rates between our work and theirs will not test whether characters or words are more usefulas set out in the previous section there are numerous differences in testing methodology and materialsfurthermore there are apparently many differences between the ibm algorithm and ours other than the units of measurement which could also account for any difference on performanceappropriate methodology is to compare methods with only one factor varying as we do hererecall that align is a twostep processfirst paragraph boundaries are identified and then sentences are aligned within paragraphswe considered eliminating the first step and found a threefold degradation in performancethe englishfrench errors were increased from 36 to 84 and the englishgerman errors from 19 to 86the overall errors were increased from 55 to 170thus the twostep approach reduces errors by a factor of threeit is possible that performance might be improved further still by introducing additional alignment steps at the clause andor phrase levels but testing this hypothesis would require access to robust parsing technologythe original version of the program did not consider the category of 22 alignmentstable 6 shows that the program was right on 10 of 15 actual 22 alignmentsthis was achieved at the cost of introducing 2 spurious 22 alignmentsthus in 12 tries the program was right 10 times wrong 2 timesthis is significantly better than chance since there is less than 1 chance of getting 10 or more heads out of 12 flips of a fair cointhus it is worthwhile to include the 22 alignment possibilitywhen we discussed the estimation of the model parameters c and s2 we mentioned that it is possible to fit the parameters more accurately if we estimate different values for each language pair but that doing so did not seem to increase performance by very muchin fact we found exactly the same total number of errors although the errors are slightly differentchanging the parameters resulted in four changes to the output for englishfrench and two changes to the output for english german since it is more convenient to use languageindependent parameter values and doing so does not seem to hurt performance very much we have decided to adopt the languageindependent values751 hard and soft boundariesrecall that we rejected one of the french documents because one paragraph was omitted and two paragraphs were duplicatedwe could have handled this case if we had employed a more powerful paragraph alignment algorithmin fact in aligning the canadian hansards we found that it was necessary to do something more elaborate than we did for the ubs datawe decided to use more or less the same procedure for aligning paragraphs within a document as the procedure that we used for aligning sentences within a paragraphlet us introduce the distinction between hard and soft delimitersthe alignment program is defined to move soft delimiters as necessary within the constraints of the hard delimitershard delimiters cannot be modified and there must be equal numbers of themwhen aligning sentences within a paragraph the program considers paragraph boundaries to be quothardquot and sentence 
boundaries to be quotsoftquot when aligning paragraphs within a document the program considers document boundaries to be quothardquot and paragraph boundaries to be quotsoftquot this entension has been incorporated into the implementation presented in the appendix alignment procedures such as kay and roscheisen make use of wordsit ought to help to know that the english string quothousequot and the french string quotmaisonquot are likely to corresponddates and numbers are perhaps an even more extreme exampleit really ought to help to know that the english string quot1988quot and the french string quot1988quot are likely to correspondwe are currently exploring ways to integrate these kinds of clues into the framework described abovehowever at present the algorithm does not have access to lexical constraints which are clearly very importantwe expect that once these clues are properly integrated the program will achieve performance comparable to that of the primary judgehowever we are still not convinced that it is necessary to process these lexical clues since the current performance is sufficient for many applications such as building a probabilistic dictionaryit is remarkable just how well we can do without lexical constraintsadding lexical constraints might slow down the program and make it less useful as a first passthis paper has proposed a method for aligning sentences in a bilingual corpus based on a simple probabilistic model described in section 3the model was motivated by the observation that longer regions of text tend to have longer translations and that shorter regions of text tend to have shorter translationsin particular we found that the correlation between the length of a paragraph in characters and the length of its translation was extremely high this high correlation suggests that length might be a strong clue for sentence alignmentalthough this method is extremely simple it is also quite accurateoverall there was a 42 error rate on 1316 alignments averaged over both englishfrench and englishgerman datain addition we find that the probability score is a good predictor of accuracy and consequently it is possible to select a subset of 80 of the alignments with a much smaller error rate of only 07the method is also fairly languageindependentboth englishfrench and english german data were processed using the same parametersif necessary it is possible to fit the six parameters in the model with languagespecific values though thus far we have not found it necessary to do sowe have examined a number of variationsin particular we found that it is better to use characters rather than words in counting sentence lengthapparently the performance is better with characters because there is less variability in the differences of sentence lengths so measuredusing words as units increases the error rate by half from 42 to 65in the future we would hope to extend the method to make use of lexical constraintshowever it is remarkable just how well we can do without such constraintswe might advocate our simple character alignment procedure as a first pass even to those who advocate the use of lexical constraintsour procedure would complement a lexical approach quite wellour method is quick but makes a few percent errors a lexical approach is probably slower though possibly more accurateone might go with our approach when the scores are small and back off to a lexicalbased approach as necessarywe thank susanne wolff and evelyne tzoukermann for their pains in aligning sentencessusan warwick 
provided us with the UBS trilingual corpus and convinced us to work on the sentence alignment problem.

"Deriving translation data from bilingual texts." In Lexical Acquisition: Using On-line Resources to Build a Lexicon, edited by Zernik. Lawrence Erlbaum.
Church, K. "A stochastic parts program and noun phrase parser for unrestricted text." In Proceedings, Second Conference on Applied Natural Language Processing.
Kruskal, J., and Liberman, M. "The symmetric time-warping problem: from continuous to discrete." In Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison, edited by D. Sankoff and J. Kruskal. Addison-Wesley.
Liberman, M., and Church, K. "Text analysis and word pronunciation in text-to-speech synthesis." In Advances in Speech Signal Processing, edited by S. Furui and M. Sondhi.
Sankoff, D., and Kruskal, J. Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison. Addison-Wesley.

With Michael D. Riley. The following code is the core of align. It is a C-language program that inputs two text files, with one token per line. The text files contain a number of delimiter tokens. There are two types of delimiter tokens: "hard" and "soft". The hard regions (e.g., paragraphs) may not be changed, and there must be equal numbers of them in the two input files. The soft regions (e.g., sentences) may be deleted, substituted, contracted, or expanded as necessary so that the output ends up with the same number of soft regions. The program generates two output files. The two output files contain an equal number of soft regions, each on a line. If the -v command-line option is included, each soft region is preceded by its probability score.

char *hard_delimiter = NULL;   /* set from a command-line argument */
char *soft_delimiter = NULL;   /* set from a command-line argument */
int verbose = 0;               /* set by the -v command-line argument */

/* utility functions */

/* align is the alignment; zero in align[i].x1 and align[i].y1 correspond to insertion and deletion, respectively; non-zero in align[i].x2 and align[i].y2 correspond to contraction and expansion, respectively; align[i].d gives the distance for that pairing.  The function returns the length of the alignment. */

/* Return -100 * log probability that an English sentence of length len1 is a translation of a foreign sentence of length len2.  The probability is based on two parameters, the mean and variance of the number of foreign characters per English character. */
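To make the distance measure concrete, here is a minimal C sketch of the length-based match cost, written from the description above rather than copied from the appendix: pnorm uses an Abramowitz-and-Stegun-style polynomial approximation to the standard normal distribution function, the parameters c and s2 are passed in rather than fixed, the prior argument stands in for the category probabilities of Table 5, and the constants used in main are illustrative placeholders only.

#include <math.h>
#include <stdio.h>

#define BIG_DISTANCE 2500   /* cost cap used when the probability underflows */

/* P(Z <= z) for a standard normal Z and z >= 0, via a polynomial
   approximation in the style of Abramowitz and Stegun. */
static double pnorm(double z)
{
    double t = 1.0 / (1.0 + 0.2316419 * z);
    return 1.0 - 0.3989423 * exp(-z * z / 2.0) *
        ((((1.330274429 * t - 1.821255978) * t + 1.781477937) * t
          - 0.356563782) * t + 0.319381530) * t;
}

/* -100 * log probability that two text portions of lengths len1 and len2
   (in characters) match, given c (expected characters in language 2 per
   character in language 1), s2 (variance per character), and the prior
   probability of the match category (1-1, 1-0/0-1, 2-1/1-2, 2-2). */
static int match_cost(int len1, int len2, double c, double s2, double prior)
{
    double delta, prob_delta;

    if (len1 == 0 && len2 == 0)
        return 0;
    /* delta = (len2 - len1 * c) / sqrt(len1 * s2), as described above */
    delta = (len2 - len1 * c) / sqrt((len1 > 0 ? len1 : 1) * s2);
    if (delta < 0.0)
        delta = -delta;
    /* two-sided tail probability of a standard normal */
    prob_delta = 2.0 * (1.0 - pnorm(delta));
    if (prob_delta > 0.0 && prior > 0.0)
        return (int)(-100.0 * log(prob_delta * prior));
    return BIG_DISTANCE;
}

int main(void)
{
    /* illustrative parameter values only, not the fitted estimates */
    printf("cost(100,118) = %d\n", match_cost(100, 118, 1.0, 6.8, 0.89));
    printf("cost(100, 30) = %d\n", match_cost(100, 30, 1.0, 6.8, 0.89));
    return 0;
}

Compiled with the math library, this should print a much smaller cost for the well-matched pair of lengths than for the badly mismatched one, which is exactly the behaviour the dynamic program relies on.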
J93-1004
A program for aligning sentences in bilingual corpora. Researchers in both machine translation and bilingual lexicography have recently become interested in studying bilingual corpora, bodies of text such as the Canadian Hansards which are available in multiple languages. One useful step is to align the sentences, that is, to identify correspondences between sentences in one language and sentences in the other language. This paper describes a method and a program for aligning sentences based on a simple statistical model of character lengths. The program uses the fact that longer sentences in one language tend to be translated into longer sentences in the other language, and that shorter sentences tend to be translated into shorter sentences. A probabilistic score is assigned to each proposed correspondence of sentences, based on the scaled difference of lengths of the two sentences and the variance of this difference. This probabilistic score is used in a dynamic programming framework to find the maximum likelihood alignment of sentences. It is remarkable that such a simple approach works as well as it does. An evaluation was performed based on a trilingual corpus of economic reports issued by the Union Bank of Switzerland (UBS) in English, French, and German. The method correctly aligned all but 4% of the sentences. Moreover, it is possible to extract a large subcorpus that has a much smaller error rate: by selecting the best-scoring 80% of the alignments, the error rate is reduced from 4% to 0.7%. There were more errors on the English-French subcorpus than on the English-German subcorpus, showing that error rates will depend on the corpus considered; however, both were small enough to hope that the method will be useful for many language pairs. To further research on bilingual corpora, a much larger sample of Canadian Hansards has been aligned with the align program and will be available through the Data Collection Initiative of the Association for Computational Linguistics. In addition, in order to facilitate replication of the align program, an appendix is provided with detailed C code of the more difficult core of the align program. We present a hybrid approach whose basic hypothesis is that longer sentences in one language tend to be translated into longer sentences in the other language, and shorter sentences tend to be translated into shorter sentences. We propose a dynamic programming algorithm for the sentence-level alignment of translations that exploits two facts: the length of translated sentences roughly corresponds to the length of the original sentences, and the sequence of sentences in translated text largely corresponds to the original order of sentences.
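The dynamic-programming step can be sketched as follows. This is a simplified illustration, not the authors' implementation: two_side_distance is stubbed here with a toy cost so that the fragment is self-contained (in the real program it would be the probabilistic, length-based measure sketched earlier), only the six categories named in the recurrence are considered, and only the minimum total distance is returned, without the backtrace that recovers the alignment itself.

#include <limits.h>
#include <stdio.h>

#define MAXSENT 100

/* Assumed length-based cost of pairing x1(+x2) sentences against
   y1(+y2) sentences; a zero length marks insertion or deletion.
   Stubbed with a toy cost so the recurrence compiles on its own. */
static int two_side_distance(int x1, int y1, int x2, int y2)
{
    int dx = x1 + x2, dy = y1 + y2;
    int d = dx - dy;
    return (d < 0 ? -d : d) + (x2 ? 10 : 0) + (y2 ? 10 : 0);
}

/* D[i][j] = minimum distance aligning the first i sentences of one text
   (lengths x[0..i-1]) with the first j of the other (lengths y[0..j-1]). */
static int seq_align(const int *x, int nx, const int *y, int ny)
{
    static int D[MAXSENT + 1][MAXSENT + 1];
    int i, j;

    for (i = 0; i <= nx; i++)
        for (j = 0; j <= ny; j++) {
            int best = INT_MAX, d;
            if (i == 0 && j == 0) { D[i][j] = 0; continue; }
            /* 1-1 substitution */
            if (i > 0 && j > 0 &&
                (d = D[i-1][j-1] + two_side_distance(x[i-1], y[j-1], 0, 0)) < best) best = d;
            /* 1-0 deletion */
            if (i > 0 &&
                (d = D[i-1][j] + two_side_distance(x[i-1], 0, 0, 0)) < best) best = d;
            /* 0-1 insertion */
            if (j > 0 &&
                (d = D[i][j-1] + two_side_distance(0, y[j-1], 0, 0)) < best) best = d;
            /* 2-1 contraction */
            if (i > 1 && j > 0 &&
                (d = D[i-2][j-1] + two_side_distance(x[i-2], y[j-1], x[i-1], 0)) < best) best = d;
            /* 1-2 expansion */
            if (i > 0 && j > 1 &&
                (d = D[i-1][j-2] + two_side_distance(x[i-1], y[j-2], 0, y[j-1])) < best) best = d;
            /* 2-2 merger */
            if (i > 1 && j > 1 &&
                (d = D[i-2][j-2] + two_side_distance(x[i-2], y[j-2], x[i-1], y[j-1])) < best) best = d;
            D[i][j] = best;
        }
    return D[nx][ny];
}

int main(void)
{
    int x[] = { 120, 80, 60 };   /* sentence lengths (characters), text 1 */
    int y[] = { 118, 145 };      /* sentence lengths (characters), text 2 */
    printf("min distance = %d\n", seq_align(x, 3, y, 2));
    return 0;
}

The slope constraints mentioned above fall out of the fact that each cell looks back at most two sentences in either text.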
structural ambiguity and lexical relations we propose that many ambiguous prepositional phrase attachments can be resolved on the basis of the relative strength of association of the preposition with verbal and nominal heads estimated on the basis of distribution in an automatically parsed corpus this suggests that a distributional approach can provide an approximate solution to parsing problems that in the worst case call for complex reasoning we propose that many ambiguous prepositional phrase attachments can be resolved on the basis of the relative strength of association of the preposition with verbal and nominal heads estimated on the basis of distribution in an automatically parsed corpusthis suggests that a distributional approach can provide an approximate solution to parsing problems that in the worst case call for complex reasoningprepositional phrase attachment is the canonical case of structural ambiguity as in the timeworn example example 1 i saw the man with the telescopean analysis where the prepositional phrase pp with the telescope is part of the object noun phrase has the semantics quotthe man who had the telescopequot an analysis where the pp has a higher attachment is associated with a semantics where the seeing is achieved by means of a telescopethe existence of such ambiguity raises problems for language modelsit looks like it might require extremely complex computation to determine what attaches to whatindeed one recent proposal suggests that resolving attachment ambiguity requires the construction of a discourse model in which the entities referred to in a text are represented and reasoned about we take this argument to show that reasoning essentially involving reference in a discourse model is implicated in resolving attachment ambiguities in a certain class of casesif this phenomenon is typical there is little hope in the near term for building computational models capable of resolving such ambiguities in unrestricted textthere have been several structurebased proposals about ambiguity resolution in the literature they are particularly attractive because they are simple and do not demand calculations in the semantic or discourse domainsthe two main ones are as followsfor the particular case we are concerned with attachment of a prepositional phrase in a verb object context as in example 1 these two principlesat least given the version of syntax that frazier assumesmake opposite predictions right association predicts noun attachment while minimal attachment predicts verb attachmentpsycholinguistic work on structurebased strategies is primarily concerned with modeling the time course of parsing and disambiguation and acknowledges that other information enters into determining a final parsestill one can ask what information is relevant to determining a final parse and it seems that in this domain structurebased disambiguation is not a very good predictora recent study of attachment of prepositional phrases in a sample of written responses to a quotwizard of ozquot travel information experiment shows that neither right association nor minimal attachment accounts for more than 55 of the cases and experiments by taraban and mcclelland show that the structural models are not in fact good predictors of people behavior in resolving ambiguitywhittemore ferrara and brunner found lexical preferences to be the key to resolving attachment ambiguitysimilarly taraban and mcclelland found that lexical content was key in explaining people behaviorvarious previous proposals for 
guiding attachment disambiguation by the lexical content of specific words have appeared unfortunately it is not clear where the necessary information about lexical preferences is to be foundjenson and binot describe the use of dictionary definitions for disambiguation but dictionaries are typically rather uneven in their coveragein the whittemore ferrara and brunner study the judgment of attachment preferences had to be made by hand for the cases that their study covered no precompiled list of lexical preferences was availablethus we are posed with the problem of how we can get a good list of lexical preferencesour proposal is to use cooccurrence of verbs and nouns with prepositions in a large body of text as an indicator of lexical preferencethus for example the preposition to occurs frequently in the context send np_ that is after the object of the verb sendthis is evidence of a lexical association of the verb send with tosimilarly from occurs frequently in the context withdrawal_ and this is evidence of a lexical association of the noun withdrawal with the preposition fromthis kind of association is a symmetric notion it provides no indication of whether the preposition is selecting the verbal or nominal head or vice versawe will treat the association as a property of the pair of wordsit is a separate issue which we will not be concerned with in the initial part of this paper to assign the association to a particular linguistic licensing relationthe suggestion that we want to explore is that the association revealed by textual distributionwhether its source is a complementation relation a modification relation or something elsegives us information needed to resolve prepositional attachment in the majority of casesa 13 millionword sample of associated press news stories from 1989 were automatically parsed by the fidditch parser using church a sample of np heads preceding verbs and following prepositions derived from the parsed corpus partofspeech analyzer as a preprocessor a combination that we will call simply quotthe parserquot the parser produces a single partial syntactic description of a sentenceconsider example 2 and its parsed representation in example 3the information in the tree representation is partial in the sense that some attachment information is missing the nodes dominated by quotquot have not been integrated into the syntactic representationnote in particular that many pps have not been attachedthis is a symptom of the fact that the parser does not have the kind of lexical information that we have just claimed is required in resolving pp attachmentexample 2 the radical changes in export and customs regulations evidently are aimed at remedying an extreme shortage of consumer goods in the soviet union and assuaging citizens angry over the scarcity of such basic items as soap and windshield wipersfrom the syntactic analysis provided by the parser we extracted a table containing the heads of all noun phrasesfor each noun phrase head we recorded the following preposition if any occurred and the preceding verb if the noun phrase was the object of that verbthe entries in table 1 are those generated from the text aboveeach noun phrase in example 3 is associated with an entry in the noun column of the tableusually this is simply the root of the head of the noun phrase good is the root of the head of consumer goodsnoun phrases with no head or where the head is not a common noun are coded in a special way dartpnp represents a noun phrase beginning with a definite article and 
headed by a proper noun and ving represents a gerundive noun phrasepro represents the empty category which in the syntactic theory underlying the parser is assumed to be the object of the passive verb aimedin cases where a prepositional phrase follows the noun phrase the head preposition appears in the prep column attached and unattached prepositional phrases generate the same kinds of entriesif the noun phrase is an object the root of the governing verb appears in the verb column aim is the root of aimed the verb governing the empty category the last column in the table labeled syntax marks with the symbol v all cases where there is no preceding verb that might license the preposition the initial subject of example 2 is such a casein the 13 millionword sample 2661872 noun phrases were identifiedof these 467920 were recognized as the object of a verb and 753843 were followed by a prepositionof the object noun phrases identified 223666 were ambiguous verb nounpreposition triplesthe table of verbs nouns and prepositions is in several respects an imperfect source of information about lexical associationsfirst the parser gives us incorrect analyses in some casesfor instance in the analysis partially described in example 4a the parser incorrectly classified probes as a verb resulting in a table entry probe lightning insimilarly in example 4b the infinitival marker to has been misidentified as a preposition athe space v probes detected lightning in jupiter upper atmosphere and observed auroral emissions like earth northern lights in the jovian polar regions bthe bush administration told congress on tuesday it wants to v preserve the right to control entry to the united states of anyone who was ever a communistsecond a preposition in an entry might be structurally related to neither the noun of the entry nor the verb even if the entry is derived from a correct parsefor instance the phrase headed by the preposition might have a higher locus of attachment athe supreme court today agreed to consider reinstating the murder conviction of a new york city man who confessed to ving killing his former girlfriend after police illegally arrested him at his homethe temporal phrase headed by after modifies confess but given the procedure described above example 5a results in a tuple kill girlfriend afterin the second example a tuple legalize abortion under is extracted although the pp headed by under modifies the higher verb shotfinally entries of the form verb noun preposition do not tell us whether to induce a lexical association between verb and preposition or between noun and prepositionwe will view the first two problems as noise that we do not have the means to eliminate 1 for present purposes we can consider a parse correct if it contains no incorrect information in the relevant areaprovided the pps in example 5 are unattached the parses would be correct in this sensethe incorrect information is added by our table construction step which assumes that a preposition following an object np modifies either the np or its governing verb and partially address the third problem in a procedure we will now describewe want to use the verbnounpreposition table to derive a table of bigrams counts where a bigram is a pair consisting of a noun or verb and an associated preposition to do this we need to try to assign each preposition that occurs either to the noun or to the verb that it occurs within some cases it is fairly certain whether the preposition attaches to the noun or the verb in other cases this is far 
less certainour approach is to assign the clear cases first then to use these to decide the unclear cases that can be decided and finally to divide the data in the remaining unresolved cases between the two hypotheses the procedure for assigning prepositions is as follows this procedure gives us bigram counts representing the frequency with which a given noun occurs associated with an immediately following preposition or a given verb occurs in a transitive use and is associated with a preposition immediately following the object of the verbwe use the following notation f is the frequency count for the pair consisting of the verb or noun w and the preposition p the unigram frequency count for the word w can be viewed as a sum of bigram frequencies and is written f for instance if p is a preposition f ew f our object is to develop a procedure to guess whether a preposition is attached to the verb or its object when a verb and its object are followed by a prepositionwe assume that in each case of attachment ambiguity there is a forced choice between two outcomes the preposition attaches either to the verb or to the nounfor example in example 6 we want to choose between two possibilities either into is attached to the verb send or it is attached to the noun soldiermoscow sent more than 100000 soldiers into afghanistan in particular we want to choose between two structures for the verb_attach case we require not only that the preposition attach to the verb send but also that the noun soldier have no following prepositional phrase attached since into directly follows the head of the object noun phrase there is no room for any postmodifier of the noun soldierwe use the notation null to emphasize that in order for a preposition licensed by the verb to be in the immediately postnominal position the noun must have no following complements for the case of noun attachment the verb may or may not have additional prepositional complements following the prepositional phrase associated with the nounsince we have a forced choice between two outcomes it is appropriate to use a likelihood ratio to compare the attachment probabilities 3 in particular we look at the log of the ratio of the probability of verb_attach to the probability of noun_attachwe will call this log likelihood ratio the la score and again the probability of noun attachment does not involve a term indicating that the verb sponsors no complement when we observe a prepositional phrase that is in fact attached to the object np the verb might or might not have a complement or adjunct following the object phrase2 thus we are ignoring the fact that the preposition may in fact be licensed by neither the verb nor the noun as in example 53 in earlier versions of this paper we used a ttest for deciding attachment and a different procedure for estimating the probabilitiesthe current procedure has several advantagesunlike the ttest used previously it is sensitive to the magnitude of the difference between the two probabilities not to our confidence in our ability to estimate those probabilities accuratelyand our estimation procedure has the property that it defaults to the average behavior for nouns or verbs for instance reflecting a default preference with of for noun attachmentwe can estimate these probabilities from the table of cooccurrence counts as4 the la score has several useful propertiesthe sign indicates which possibility verb attachment or noun attachment is more likely an la score of zero means they are equally likelythe magnitude of the 
score indicates how much more probable one outcome is than the otherfor example if the la score is 20 then the probability of verb attachment is four times greater than noun attachmentdepending on the task we can require a certain threshold of la score magnitude before making a decisionas usual in dealing with counts from corpora we must confront the problem of how to estimate probabilities when counts are smallthe maximum likelihood estimate described above is not very good when frequencies are small and when frequencies are zero the formula will not work at allwe use a crude adjustment to observed frequencies that has the right general properties though it is not likely to be a very good estimate when frequencies are smallfor our purposes howeverexploring in general the relation of distribution in a corpus to attachment disambiguationwe believe it is sufficientother approaches to adjusting small frequencies are discussed in church et al and gale church yarowsky the idea is to use the typical association rates of nouns and verbs to interpolate our probabilitieswhere f en f f e f f en f and 4 the nonintegral count for send is a consequence of the datasplitting step ambiguous attach 2 and the definition of unigram frequencies as a sum of bigram frequencies5 an advantage of the likelihood ratio approach is that we can use it in a bayesian discrimination framework to take into account other factors that might influence our decision about attachment we know of course that other information has a bearing on the attachment decisionfor example we have observed that if the noun phrase object includes a superlative adjective as a premodifier then noun attachment is certain we could easily take this into account by setting the prior odds ratio to heavily favor noun attachment let us suppose that if there is a superlative in the object noun phrase then noun attachment is say 1000 times more probable than verb attachment otherwise they are equally probablethen following mosteller and wallace we assume that final attachment odds log lain case there is no superlative in the object the initial log odds will be zero and the final odds will equal our la scoreif there is a superlative final attachment odds log 2 la and similarly for verbswhen f is zero the estimate used is proportional to this averageif we have seen only one case of a noun and it occurred with a preposition p 1 and f 1 then our estimate is nearly cut in halfthis is the kind of effect we want since under these circumstances we are not very confident in 1 as an estimate of pwhen f is large the adjustment factor does not make much differencein general this interpolation procedure adjusts small counts in the right direction and has little effect when counts are largefor our current example this estimation procedure changes the la score little the la score of 587 for this example is positive and therefore indicates verb attachment the magnitude is large enough to suggest a strong preference for verb attachmentthis method of calculating the la score was used both to decide unsure cases in building the bigram tables as described in ambiguous attach 1 and to make the attachment decisions in novel ambiguous cases as discussed in the sections followingto evaluate the performance of the procedure 1000 test sentences in which the parser identified an ambiguous verbnounpreposition triple were randomly selected from ap news storiesthese sentences were selected from stories included in the 13 million word sample but the particular sentences were excluded 
from the calculation of lexical associationsthe two authors first guessed attachments on the verbnounpreposition triples making a judgment on the basis of the three headwords alonethe judges were required to make a choice in each instancethis task is in essence the one that we will give the computerto judge the attachment without any more information than the preposition and the heads of the two possible attachment sitesthis initial step provides a rough indication of what we might expect to be achievable based on the information our procedure is usingwe also wanted a standard of correctness for the test sentenceswe again judged the attachment for the 1000 triples this time using the fullsentence context first grading the test sentences separately and then discussing examples on which there was disagreementdisambiguating the test sample turned out to be a surprisingly difficult taskwhile many decisions were straightforward more than 10 of the sentences seemed problematic to at least one authorthere are several kinds of constructions where the attachment decision is not clear theoreticallythese include idioms as in examples 8 and 9 light verb constructions and small clauses example 8 but over time misery has given way to mendingexample 9 the meeting will take place in quanticoexample 10 bush has said he would not make cuts in social securityexample 11 sides said francke kept a 38caliber revolver in his car glove compartmentin the case of idioms we made the assignment on the basis of a guess about the syntactic structure of the idiom though this was sometimes difficult to judgewe chose always to assign light verb constructions to noun attachment based on the fact that the noun supplies the lexical information about what prepositions are possible and small clauses to verb attachment based on the fact that this is a predicative construction lexically licensed by the verbanother difficulty arose with cases where there seemed to be a systematic semantically based indeterminacy about the attachmentin the situation described by example 12a the bar and the described event or events are presumably in the same location and so there is no semantic reason to decide on one attachmentexample 12b shows a systematic benefactive indeterminacy if you arrange something for someone then the thing arranged is also for themthe problem in example 12c is that signing an agreement usually involves two participants who are also parties to the agreementexample 13 gives some further examples drawn from another test sampleexample 12 a known to frequent the same bars in one neighborhoodin general we can say that an attachment is semantically indeterminate if situations that verify the meaning associated with one attachment also make the meaning associated with the other attachment trueeven a substantial overlap between the classes of situations verifying the two meanings makes an attachment choice difficultthe problems in determining attachments are heterogeneousthe idiom light verb and small clause constructions represent cases where the simple distinction between noun attachment and verb attachment perhaps does not make sense or is very theorydependentit seems to us that the phenomenon of semantically based indeterminacy deserves further explorationif it is often difficult to decide what licenses a prepositional phrase we need to develop language models that appropriately capture thisfor our present purpose we decided to make an attachment choice in all cases in some cases relying on controversial theoretical 
considerations or relatively unanalyzed intuitionsin addition to the problematic cases 120 of the 1000 triples identified automatically as instances of the verbobjectpreposition configuration turned out in fact to be other constructions often as the result of parsing errorsexamples of this kind were given above in the context of our description of the construction of the verbnoun preposition tablesome further misidentifications that showed up in the test sample are identifying the subject of the complement clause of say as its object as in example 10 which was identified as and misparsing two constituents as a singleobject noun phrase as in example 11 which was identified as first consider how the simple structural attachment preference schemas perform at predicting the outcome in our test setright association predicts noun attachment and does better since in our sample there are more noun attachments but it still has an error rate of 33minimal attachment interpreted as entailing verb attachment has the complementary error rate of 67obviously neither of these procedures is particularly impressiveperformance on the test sentences for two human judges and the lexical association procedure la actual n actual v precision recall n guess 496 89 n 848 846 v guess 90 205 v 695 697 neither 0 0 combined 797 797 judge 1 actual n actual v precision recall n guess 527 48 n 917 899 v guess 59 246 v 807 837 neither 0 0 combined 878 878 judge 2 actual n actual v precision recall n guess 482 29 n 943 823 v guess 104 265 v 718 901 neither 0 0 combined 849 849 now consider the performance of our lexical association procedure for the 880 standard test sentencestable 2 shows the performance for the two human judges and for the lexical association attachment procedurefirst we note that the task of judging attachment on the basis of verb noun and preposition alone is not easythe figures in the entry labeled quotcombined precisionquot indicate that the human judges had overall error rates of 12156 the lexical association procedure is somewhat worse than the human judges with an error rate of 20 but this is an improvement over the structural strategiesthe table also gives results broken down according to n vs v attachmentthe precision figures indicate the proportion of test items assigned to a given category that actually belong to the categoryfor instance n precision is the fraction of cases that the procedure identified as n attachments that actually were n attachmentsthe recall figures indicate the proportion of test items actually belonging to a given category that were assigned to that category n precision is the fraction of actual n attachments that were identified as n attachmentsthe la procedure recognized about 85 of the 586 actual noun attachment examples as noun attachments and about 70 of the actual verb attachments as verb attachmentsif we restrict the lexical association procedure to choose attachment only in cases where the absolute value of the la score is greater than 20 we get attachment judgments on 621 of the 880 test sentences with overall precision of about 89on these same examples the judges also showed improvement as evident in table 37 the fact that an la score threshold improves precision indicates that the la score gives information about how confident we can be about an attachment choicein some applications this information is usefulfor instance suppose that we wanted to incorporate the pp attachment procedure in a parser such as fidditchit might be preferable to achieve increased 
precision in pp attachment in return for leaving some pps unattachedfor this purpose a threshold could be usedtable 4 shows the combined precision and recall levels at various la thresholdsit is clear that the la score can be used effectively to trade off precision and recall with a floor for the forced choice at about 80a comparison of table 3 with table 2 indicates however that the decline in recall is severe for v attachmentand in general the performance of the la procedure is worse on v attachment examples than on n attachments according to both precision and recall criteriathe next section is concerned with a classification of the test examples which gives insight into why performance on v attachments is worseour model takes frequency of cooccurrence as evidence of an underlying relationship but makes no attempt to determine what sort of relationship is involvedit is interesting to see what kinds of relationships are responsible for the associations the model is identifyingto investigate this we categorized the 880 triples according to the nature of the relationship underlying the attachmentin many cases the decision was difficultthe argumentadjunct distinction showed many gray cases between clear participants in an action and clear adjuncts such as temporal modifierswe made rough best guesses to partition the cases into the following categories argument adjunct idiom small clause systematic locative indeterminacy other systematic indeterminacy and light verbwith this set of categories 78 of the 880 cases remained so problematic that we assigned them to the category othertable 5 shows the proportion of items in a given category that were assigned the correct attachment by the lexical association procedureeven granting the roughness of the categorization some clear patterns emergeour approach is most successful at attaching arguments correctlynotice that the 378 noun arguments constitute 65 of the total 586 noun attachments while the 104 verb arguments amount to only 35 of the 294 verb attachmentsfurthermore performance with verb adjuncts is worse than with noun adjunctsthus much of the problem with v attachments noted in the previous section appears to be attributable to a problem with adjuncts particularly verbal onesperformance on verbal arguments remains worse than performance on nominal ones howeverthe remaining cases are all complex in some way and the performance is poor on these classes showing clearly the need for a more elaborated model of the syntactic structure that is being identifiedthe idea that lexical preference is a key factor in resolving structural ambiguity leads us naturally to ask whether existing dictionaries can provide information relevant to disambiguationthe collins cobuild english language dictionary is useful for a comparison with the ap sample for several reasons it was compiled on the basis of a large text corpus and thus may be less subject to idiosyncrasy than other works and it provides in a separate field a direct indication of prepositions typically associated with many nouns and verbsfrom a machinereadable version of the dictionary we extracted a list of 1942 nouns associated with a particular preposition and of 2291 verbs associated with a particular preposition after an object noun phrasethese 4233 pairs are many fewer than the number of associations in the ap sample even if we ignore the most infrequent pairsof the total 76597 pairs 20005 have a frequency greater than 3 and 7822 have a frequency that is greater than 3 and more than 4 times what 
one would predict on the basis of the unigram frequencies of the noun or verb and the prepositionwe can use the fixed lexicon of nounpreposition and verbpreposition associations derived from cobuild to choose attachment in our test setthe cobuild dictionary has information on 257 of the 880 test verbnounpreposition triplesin 241 of those cases there is information only on noun or only on verb associationin these cases we can use the dictionary to choose the attachment according to the association indicatedin the remaining 16 cases associations between the preposition and both the noun and the verb are recorded in the dictionaryfor these we select noun attachment since it is the more probable outcome in generalfor the remaining cases we assume that the dictionary makes no decisiontable 7 gives the results obtained where you is e f the total number of token bigramsit is equivalent tow and p having a wp mutual information 3 and i 2 contains categorical information about associationsusing it for disambiguation in the way the cobuild dictionary was used gives the results indicated in table 7the precision is similar to that which was achieved with the la procedure with a threshold of 2 although the recall is lowerthis suggests that while overall coverage of association pairs is important the information about the relative strengths of associations contributing to the la score is also significantit must be noted that the dictionary information we derived from cobuild was composed for people to use in printed formit seems likely that associations were left out because they did not serve this purpose in one way or anotherfor instance listing many infrequent or semantically predictable associations might be confusingfurthermore our procedure undoubtedly gained advantage from the fact that the test items are drawn from the same body of text as the training corpusnevertheless the results of this comparison suggest that for the purpose of this paper a partially parsed corpus is a better source of information than a dictionarythis conclusion should not be overstated howevertable 6 showed that most of the associations in each lexicon are not found in the otherstable 8 is a sample of a verbpreposition association dictionary obtained by merging information from the ap sample and from cobuild illustrating both the common ground and the differences between the two lexiconseach source of information provides intuitively important associations that are missing from the otherin our judgment the results of the lexical association procedure are good enough to make it useful for some purposes in particular for inclusion in a parser such as fidditchthe fact that the la score provides a measure of confidence increases this usefulness since in some applications preposition associations in the cobuild dictionary and in the ap sample 3 and i 20ap sample cobuild approach about as at with corpora it is advantageous to be able to achieve increased precision in exchange for discarding a proportion of the datafrom another perspective our results are less good than what might be demandedthe performance of the human judges with access just to the verbnounpreposition triple is a standard of what is possible based on this information and the lexical association procedure falls somewhat short of this standardthe analysis of underlying relations indicated some particular areas in which the procedure did not do well and where there is therefore room for improvementin particular performance on adjuncts was poora number of classes 
of adjuncts such as temporal ones are fairly easy to identify once information about the object of the preposition is taken into accountbeginning with such an identification step might yield a lexical association procedure that would do better with adjunctsbut it is also possible that a procedure that evaluates associations with individual nouns and verbs is simply inappropriate for adjunctsthis is an area for further investigationthis experiment was deliberately limited to one kind of attachment ambiguityhowever we expect that the method will be extendable to other instances of pp attachment ambiguity such as the ambiguity that arises when several prepositional phrases follow a subject np and to ambiguities involving other phrases especially phrases such as infinitives that have syntactic markers analogous to a prepositionwe began this paper by alluding to several approaches to pp attachment specifically work assuming the construction of discourse models approaches based on structural attachment preferences and work indicating a dominant role for lexical preferenceour results tend to confirm the importance of lexical preferencehowever we can draw no firm conclusions about the other approachessince our method yielded incorrect results on roughly 20 of the cases its coverage is far from completethis leaves a lot of work to be done within both psycholinguistic and computational approachesfurthermore as we noted above contemporary psycholinguistic work is concerned with modeling the time course of parsingour experiment gives no information about how lexical preference information is exploited at this level of detail or the importance of such information compared with other factors such as structural preferences at a given temporal stage of the human parsing processhowever the numerical estimates of lexical association we have obtained may be relevant to a psycholinguistic investigation of this issuewe thank bill gale ken church and david yarowsky for many helpful discussions of this work and are grateful to four reviewers and christian rohrer for their comments on an earlier version
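Returning to the lexical association procedure described above, the following C sketch shows how an LA score might be computed from bigram counts. It is only a schematic reading of the description in the text: the interpolation toward an average preposition rate, the factor for the noun taking no complement of its own, and all of the counts in main are illustrative placeholders rather than the authors' estimates; the threshold of 2.0 corresponds to the restricted procedure discussed in the evaluation.

#include <math.h>
#include <stdio.h>

/* Schematic bigram statistics for one word (verb or noun) and one
   preposition.  All numbers used below are invented placeholders. */
struct wstats {
    double f_w_p;     /* f(w, p): count of word w with preposition p */
    double f_w;       /* f(w): unigram count of w (sum over p)       */
    double avg_rate;  /* average rate of p for the word class, used to smooth */
};

/* Interpolated estimate of P(p | w): pulls small counts toward the
   class-average rate; with large counts it approaches f(w,p)/f(w). */
static double est(const struct wstats *s)
{
    return (s->f_w_p + s->avg_rate) / (s->f_w + 1.0);
}

/* LA score: log2 of the ratio of verb attachment to noun attachment.
   For verb attachment the noun must have no complement of its own, so
   the verb term is multiplied by the noun's probability of taking no
   following preposition (p_noun_null). */
static double la_score(const struct wstats *verb, const struct wstats *noun,
                       double p_noun_null)
{
    double p_verb = est(verb) * p_noun_null;
    double p_noun = est(noun);
    return log(p_verb / p_noun) / log(2.0);
}

int main(void)
{
    /* e.g. "send ... soldiers into": counts below are invented */
    struct wstats send_into    = { 84.0, 1742.5, 0.05 };
    struct wstats soldier_into = {  1.0,  234.0, 0.05 };
    double score = la_score(&send_into, &soldier_into, 0.8);

    if (score > 2.0)
        printf("verb attachment (LA = %.2f)\n", score);
    else if (score < -2.0)
        printf("noun attachment (LA = %.2f)\n", score);
    else
        printf("no confident decision (LA = %.2f)\n", score);
    return 0;
}

With these invented counts the procedure prefers verb attachment, mirroring the send/soldier/into example in the text; raising the threshold trades recall for precision, as in the tables above.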
J93-1005
Structural ambiguity and lexical relations. We propose that many ambiguous prepositional phrase attachments can be resolved on the basis of the relative strength of association of the preposition with verbal and nominal heads, estimated on the basis of distribution in an automatically parsed corpus. This suggests that a distributional approach can provide an approximate solution to parsing problems that in the worst case call for complex reasoning. We are the first to show that a corpus-based approach to PP attachment ambiguity resolution can lead to good results. We propose one of the earliest corpus-based approaches to prepositional phrase attachment, using lexical preference computed from co-occurrence frequencies of verbs and nouns with prepositions. We used a partial parser to extract (v, n, p) tuples from a corpus, where p is the preposition whose attachment is ambiguous between the verb v and the noun n.
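The construction of the bigram table from extracted (v, n, p) tuples might be sketched as follows. This is a rough illustration under stated assumptions: the sure-case heuristics shown (no licensing verb means the preposition can only attach to the noun; an empty-category or pronominal object is credited to the verb) are plausible stand-ins, since the full list of assignment rules is not reproduced in this copy of the text, and the intermediate step that uses the sure-case counts to decide some of the unclear cases before splitting the remainder is omitted.

#include <stdio.h>
#include <string.h>

/* One (verb, noun, preposition) entry extracted from the parsed corpus.
   An empty verb string means no verb could license the preposition. */
struct triple { const char *verb, *noun, *prep; };

/* Toy bigram store: a real implementation would use a hash table keyed
   on (word, preposition); a small fixed array keeps the sketch short. */
struct bigram { char key[64]; double count; };
static struct bigram table[256];
static int ntable;

static void add_count(const char *word, const char *prep, double c)
{
    char key[64];
    int i;
    snprintf(key, sizeof key, "%s+%s", word, prep);
    for (i = 0; i < ntable; i++)
        if (strcmp(table[i].key, key) == 0) { table[i].count += c; return; }
    if (ntable >= 256) return;                 /* sketch-only capacity guard */
    snprintf(table[ntable].key, sizeof table[ntable].key, "%s", key);
    table[ntable++].count = c;
}

/* Credit the sure cases to a single attachment site and split the
   genuinely ambiguous mass evenly between the two hypotheses. */
static void build_bigrams(const struct triple *t, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        if (t[i].prep[0] == '\0')
            continue;                              /* no following preposition */
        else if (t[i].verb[0] == '\0')
            add_count(t[i].noun, t[i].prep, 1.0);  /* no verb: noun attachment */
        else if (strcmp(t[i].noun, "PRO") == 0)
            add_count(t[i].verb, t[i].prep, 1.0);  /* assumed: empty-category object -> verb */
        else {                                     /* ambiguous: split the count */
            add_count(t[i].verb, t[i].prep, 0.5);
            add_count(t[i].noun, t[i].prep, 0.5);
        }
    }
}

int main(void)
{
    struct triple sample[] = {
        { "",     "change",  "in"   },
        { "aim",  "PRO",     "at"   },
        { "send", "soldier", "into" },
    };
    int i;
    build_bigrams(sample, 3);
    for (i = 0; i < ntable; i++)
        printf("%-16s %.1f\n", table[i].key, table[i].count);
    return 0;
}

The even split of unresolved cases is what gives rise to non-integral counts such as the one noted for send in the text above.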
figure 4 the sat after pass 3
table 3 correctness of sentence alignment in the various passes of the algorithm
pass 1: correctness in sat 100%, coverage of sat 12%, constraint of sat by ast 4%
pass 2: correctness in sat 100%, coverage of sat 47%, constraint of sat by ast 17%
pass 3: correctness in sat 100%, coverage of sat 89%, constraint of sat by ast 38%
pass 4: correctness in sat 99.7%, coverage of sat 96%, constraint of sat by ast 41%
figure 5 alignment of the first 50 sentences of the test texts true alignment and hypothesis of the sat after the first pass and after the second pass
figure 6 lengths of aligned paragraphs are correlated robust regression between lengths of aligned paragraphs left length measured in words right length measured in characters
we present an algorithm for aligning texts with their translations that is based only on internal evidencethe relaxation process rests on a notion of which word in one text corresponds to which word in the other text that is essentially based on the similarity of their distributionsit exploits a partial alignment of the word level to induce a maximum likelihood alignment of the sentence level which is in turn used in the next iteration to refine the word level estimatethe algorithm appears to converge to the correct sentence alignment in only a few iterationsto align a text with a translation of it in another language is in the terminology of this paper to show which of its parts are translated by what parts of the second textthe result takes the form of a list of pairs of itemswords sentences paragraphs or whateverfrom the two textsa pair (a b) is on the list if a is translated in whole or in part by b if (a b) and (a c) are on the list it is because a is translated partly by b and partly by c we say that the alignment is partial if only some of the items of the chosen kind from one or other of the texts are represented in the pairsotherwise it is completeit is notoriously difficult to align good translations on the basis of words because it is often difficult to decide just which words in an original are responsible for a given one in a translation and in any case some words apparently translate morphological or syntactic phenomena rather than other wordshowever it is relatively easy to establish correspondences
between such words as proper nouns and technical terms so that partial alignment on the word level is often possibleon the other hand it is also easy to align texts and translations on the sentence or paragraph levels for there is rarely much doubt as to which sentences in a translation contain the material contributed by a given one in the originalthe growing interest in the possibility of automatically aligning large texts is attested to by independent work that has been done on it since the first description of our methods was made available in recent years it has been possible for the first time to obtain machinereadable versions of large corpora of text with accompanying translationsthe most striking example is the canadian quothansardquot the transcript of the proceedings of the canadian parliamentsuch bilingual corpora make it possible to undertake statistical and other kinds of empirical studies of translation on a scale that was previously unthinkablealignment makes possible approaches to partially or completely automatic translation based on a large corpus of previous translations that have been deemed acceptableperhaps the bestknown example of this approach is to be found in sato and nagao which takes a large corpus of text with aligned translations as its point of departureit is widely recognized that one of the most important sources of information to which a translator can have access is a large body of previous translationsno dictionary or terminology bank can provide information of comparable value on topical matters of possibly intense though only transitory interest or on recently coined terms in the target language or on matters relating to house stylebut such a body of data is useful only if once a relevant example has been found in the source language the corresponding passage can be quickly located in the translationthis is simple only if the texts have been previously alignedclearly what is true of the translator is equally true of others for whom translations are a source of primary data such as students of translation the designers of translations systems and lexicographersalignment would also facilitate the job of checking for consistency in technical and legal texts where consistency constitutes a large part of accuracyin this paper we provide a method for aligning texts and translations based only on internal evidencein other words the method depends on no information about the languages involved beyond what can be derived from the texts themselvesfurthermore the computations on which it is based are straightforward and robustthe plan rests on a relationship between word and sentence alignments arising from the observation that a pair of sentences containing an aligned pair of words must themselves be alignedit follows that a partial alignment on the word level could induce a much more complete alignment on the sentence levela solution to the alignment problem consists of a subset of the cartesian product of the sets of source and target sentencesthe process starts from an initial subset excluding pairs whose relative positions in their respective texts is so different that the chance of their being aligned is extremely lowthis potentially alignable set of sentences forms the basis for a relaxation process that proceeds as followsan initial set of candidate word alignments is produced by choosing pairs of words that tend to occur in possibly aligned sentencesthe idea is to propose a pair of words for alignment if they have similar distributions in their 
respective textsthe distributions of a pair of words are similar if most of the sentences in which the first word occurs are alignable with sentences in which the second occurs and vice versathe most apparently reliable of these word alignments are then used to induce a set of sentence alignments that will be a subset of the eventual resulta new estimate is now made of what sentences are alignable based on the fact that we are now committed to aligning certain pairs because sentence pairs are never removed from the set of alignments the process converges to the point when no new ones can be found then it stopsin the next section we describe the algorithm in section 3 we describe additions to the basic technique required to provide for morphology that is relatively superficial variations in the forms of wordsin section 4 we show the results of applying a program that embodies these techniques to an article from scientific american and its german translation in spektrurn der wissenschaftin section 5 we discuss other approaches to the alignment problem that were subsequently undertaken by other researchers finally in section 6 we consider ways in which our present methods might be extended and improvedthe principal data structures used in the algorithm are the following wordsentence index one of these is prepared for each of the textsit is a table with an entry for each different word in the text showing the sentences in which that word occursfor the moment we may take a word as being simply a distinct sequence of lettersif a word occurs more than once in a sentence that sentence occurs on the list once for each occurrencealignable sentence table this is a table of pairs of sentences one from each texta pair is included in the table at the beginning of a pass if that pair is a candidate for association by the algorithm in that password alignment table this is a list of pairs of words together with similarities and frequencies in their respective texts that have been aligned by comparing their distributions in the textssentence alignment table this is a table that records for each pair of sentences how many times the two sentences were set in correspondence by the algorithmsome additional data structures were used to improve performance in our implementation of the algorithm but they are not essential to an understanding of the method as a wholeat the beginning of each cycle an ast is produced that is expected to contain the eventual set of alignments generally amongst othersit pairs the first and last sentences of the two texts with a small number of sentences from the beginning and end of the other textgenerally speaking the closer a sentence is to the middle of the text the larger the set of sentences in the other text that are possible correspondents for itthe next step is to hypothesize a set of pairs of words that are assumed to correspond based on similarities between their distributions in the two textsfor this purpose a word in the first text is deemed to occur at a position corresponding to a word in the second text if they occur in a pair of sentences that is a member of the astsimilarity of distribution is a function of the number of corresponding sentences in which they occur and the total number of occurrences of eachpairs of words are entered in the wat if the association between them is so close that it is not likely to be the result of a random eventin our algorithm the closeness of the association is estimated on the basis of the similarity of their distributions and the 
total number of occurrencesthe next step is to construct the sat which in the last pass will essentially become the output of the program as a wholethe idea here is to associate sentences that contain words paired in the wat giving preference to those word pairs that appear to be more reliablemultiple associations are recordedif there are to be further passes of the main body of the algorithm a new ast is then constructed in light of the associations in the satassociations that are supported some minimum number of times are treated just as the first and last sentences of the texts were initially that is as places at which there is known to be a correspondencepossible correspondences are provided for the intervening sentences by the same interpolation method initially used for all sentences in the middle of the textsin preparation for the next pass a new set of corresponding words is now hypothesized using distributions based on the new ast and the cycle repeatsthe main algorithm is a relaxation process that leaves at the end of each pass a new wat and sat each presumably more refined than the one left at the end of the preceding passthe input to the whole process consists only of the wsis of the two textsbefore the first pass of the relaxation process an initial ast is computed simply from the lengths of the two texts construct initial ast if the texts contain m and n sentences respectively then the table can be thought of as an m x n array of ones and zerosthe average number of sentences in the second text corresponding to a given one in the first text is n/m and the average position of the sentence in the second text corresponding to the ith sentence in the first text is therefore i·n/m in other words the expectation is that the true correspondences will lie close to the diagonalempirically sentences typically correspond one for one correspondences of one sentence to two are much rarer and correspondences of one to three or more though they doubtless occur are very rare and were unattested in our datathe maximum deviation can be stochastically modeled as O(√n) the factor by which the standard deviation of a sum of n independent and identically distributed random variables multiplies we construct the initial ast using a function that pairs single sentences near the middle of the text with as many as O(√n) sentences in the other text it is generously designed to admit all but the most improbable associationsexperience shows that because of this policy the results are highly insensitive to the particular function used to build this initial tablethe main body of the relaxation process consists of the following steps build the wat for all sentences sa in the first text each word in sa is compared with each word in those sentences sb of the second text that are considered as candidates for correspondence ie for which (sa sb) is in the ast a pair of words is entered into the wat if the distributions of the two words in their texts are sufficiently similar and if the total number of occurrences indicates that this pair is unlikely to be the result of a spurious matchnote that the number of comparisons of the words in two sentences is quadratic only in the number of words in a sentence which can be assumed to be not a function of the length of the textbecause of the constraint on the maximum deviation from the diagonal as outlined above the computational complexity of the algorithm is bounded by O(n√n) in each pass
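As a rough illustration of the band-shaped initial AST just described, the sketch below pairs sentence i of the first text with the sentences of the second text that lie within a window around its expected position i·n/m, a window that widens roughly with the square root of the distance from the anchored ends of the text. The function name build_initial_ast and the width constant c are invented for illustration; the paper only says that the band is generous and that the results are insensitive to its exact shape.

```python
from math import sqrt

def build_initial_ast(m, n, c=3.0):
    """Illustrative initial alignable-sentence table (AST) for texts of
    m and n sentences: a band of candidate pairs around the diagonal."""
    ast = set()
    for i in range(m):
        center = i * n / m                 # expected partner position i*n/m
        d = min(i, m - 1 - i)              # distance to the nearer anchored end
        width = c * sqrt(d + 1)            # deviation grows like sqrt(distance)
        lo = max(0, int(center - width))
        hi = min(n - 1, int(center + width))
        for j in range(lo, hi + 1):
            ast.add((i, j))
    ast.add((0, 0))                        # first sentences always correspond
    ast.add((m - 1, n - 1))                # last sentences always correspond
    return ast

# e.g. the cosmic-ray article pair: 255 English and 300 German sentences
ast = build_initial_ast(255, 300)
```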
our definition of the similarity between a pair of words is complicated by the fact that the two texts have unequal lengths and that the ast allows more than one correspondence which means that we cannot simply take the inner product of the vector representations of the word occurrencesinstead we use as a measure of similarity sim(x1 x2) = 2c / (n1(x1) + n2(x2)) where c is the number of corresponding positions and nt(x) is the number of occurrences of the word x in the text t this is essentially dice's coefficient technically the value of c is the cardinality of the largest set of pairs of corresponding positions such that each pair is in the ast and no position belongs to more than one pair suppose that the word "dog" occurs in sentences 50 52 75 and 200 of the english text and "hund" in sentences 40 and 180 of the german and that the ast contains the pairs (50 40) (52 40) and (75 180) among others but not (200 180) there are two sets that meet the requirements namely {(50 40) (75 180)} and {(52 40) (75 180)} the set {(50 40) (52 40) (75 180)} is excluded on the grounds that (50 40) and (52 40) overlap in the above sensethe first occurrence of "hund" is represented twicein the example the similarity would be computed as 2·2/(4 + 2) = 2/3 regardless of the ambiguity between (50 40) and (52 40) the result of the comparisons of the words in all of the sentences of one text with those in the other text is that the word pairs with the highest similarity are locatedcomparing the words in a sentence of one text with those in a sentence of the other text carries with it an amortized cost of constant computational complexity if the usual memoryprocessing tradeoff on serial machines is exploited by maintaining redundant data structures such as multiple hash tables and ordered indexed treesthe next task is to determine for each word pair whether it will actually be entered into the wat the wat is a sorted table where the more reliable pairs are put before less reliable onesfor this purpose each entry contains as well as the pair of words themselves the frequencies of those words in their respective texts and the similarity between themthe closeness of the association between two words and thus their rank in the wat is evaluated with respect to their similarity and the total number of their occurrencesto understand why similarity cannot be used alone note that there are far more onefrequency words than words of higher frequencythus a pair of words with a similarity of 1 each of them occurring only once may well be the result of a random eventif such a pair was proposed for entry into the wat it should only be added with a low prioritythe exact stochastic relation is depicted in figure 1 where the probability is shown that a word of a frequency k that was aligned with a word in the other text with a certain similarity s is just the result of a random processnote that for a highfrequency word that has a high similarity with some other word it is very unlikely that this association has to be attributed to chanceon the other hand low similarities can easily be attained by just associating arbitrary wordslowfrequency wordsbecause there are so many of them in a textcan also achieve a high similarity with some other words without having to be related in an interesting waythis can be intuitively explained by the fact that the similarity of a highfrequency word is based on a pattern made up of a large number of instancesit is therefore a pattern that is unlikely to be replicated by chancefurthermore since there are relatively few highfrequency words and they can only contract high similarities with other highfrequency words the number of possible correspondents for them is lower and the chance of spurious associations is therefore less on these grounds alsonote that lowfrequency words with low similarity also have a low probability of being spuriously associated to some other word
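A minimal sketch of the similarity measure defined above. The AST is assumed to be a set of sentence pairs, and the count c (the largest set of non-overlapping, non-crossing corresponding occurrences) is computed with an LCS-style dynamic program; that algorithmic choice, and the reading of the dog/hund pairs in the usage comment, are ours rather than the paper's.

```python
def similarity(occ_v, occ_w, ast):
    """Dice-like similarity 2c / (n_v + n_w) between two words, given the
    sentence positions at which they occur and the current AST."""
    n, m = len(occ_v), len(occ_w)
    # best[i][j]: largest set of non-crossing, non-overlapping pairs using
    # the first i occurrences of v and the first j occurrences of w
    best = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best[i][j] = max(best[i - 1][j], best[i][j - 1])
            if (occ_v[i - 1], occ_w[j - 1]) in ast:
                best[i][j] = max(best[i][j], best[i - 1][j - 1] + 1)
    c = best[n][m]
    return 2.0 * c / (n + m)

# the "dog"/"hund" example from the text, under our reading of the pairs:
# "dog" in English sentences 50, 52, 75, 200; "hund" in German 40, 180;
# the AST contains (50,40), (52,40), (75,180) but not (200,180)
ast = {(50, 40), (52, 40), (75, 180)}
print(similarity([50, 52, 75, 200], [40, 180], ast))   # 2*2/(4+2) = 2/3
```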
this is because lowfrequency words can achieve a low similarity only with words of a high frequency which in turn are rare in a text and are therefore unlikely to be associated spuriouslyour algorithm does not use all the detail in figure 1 but only a simple discrete heuristic a word pair whose similarity exceeds some threshold is assigned to one of two or three segments of the wat depending on the word frequencya segment with words of higher frequency is preferred to lowerfrequency segmentswithin each segment the entries are sorted in order of decreasing similarity and in case of equal similarities in order of decreasing frequencyin terms of figure 1 we take a rectangle from the right frontwe place the left boundary as far to the left as possible because this is where most of the words are build the sat in this step the correspondences in the wat are used to establish a mapping between sentences of the two textsin general these new associations are added to the ones inherited from the preceding pass the basis for figure 1 is an analytic derivation of the probability that a word with a certain frequency in a 300sentence text matches some random pattern with a particular similaritythe analytic formula relies on wordfrequency data derived from a large corpus instead of on a stochastic model for word frequency distribution clearly the figure is dependent on the state of the ast but the thresholds relevant to our algorithm can be precomputed at compile timethe figure shown would be appropriate to pass 3 in our experimentin the formula used there are a few reasonable simplifications concerning the nature of the ast however a montecarlo simulation that is exactly in accordance with our algorithm confirmed the depicted figure in every essential detail this discussion could also be cast in an information theoretic framework using the notion of "mutual information" estimating the variance of the degree of match in order to find a frequency threshold figure 1 shows the likelihood that a word pair is a spurious match as a function of word frequency and its similarity with a word in the other text it is an obvious requirement of the mapping that lines of association should not crossat the beginning of the relaxation process the sat is initialized such that the first sentences of the two texts and the last sentences are set in correspondence with one another regardless of any words they may containthe process that adds the remaining associations scans the wat in order and applies a threepart process to each pair build a new ast if there is to be another pass of the relaxation algorithm a new ast must be constructed as input to itthis is based on the current sat and is derived from it by supplying associations for sentences for which it provides nonethe idea is to fill gaps between associated pairs of sentences in the same manner that the gap between the first and the last sentence was filled before the first passhowever only sentence associations that are represented more than some minimum number of times in the sat are transferred to the astin what follows we will refer to these sentence pairs as anchorsas before it is convenient to think of the ast as a rectangular array even though it is represented more economically in the programconsider a maximal sequence of empty ast entries that is a sequence of sentences in one text for which there are no associated sentences in the other but which is bounded above and below by an anchorthe new associations that are added lie on and adjacent to the diagonal
joining the two anchorsthe distance from the diagonal is a function of the distance of the current candidate sentence pair and the nearest anchorthe function is the same one used in the construction of the initial astas we said earlier the basic alignment algorithm treats words as atoms that is it treats strings as instances of the same word if they consist of identical sequences of letters and otherwise as totally differentthe effect of this is that morphological variants of a word are not seen as related to one anotherthis might not be seen as a disadvantage in all circumstancesfor example nouns and verbs in one text might be expected to map onto nouns with the same number and verbs with the same tense much of the timebut this is not always the case and more importantly some languages make morphological distinctions that are absent in the othergerman for example makes a number of case distinctions especially in adjectives that are not reflected in the morphology of englishfor these reasons it seems desirable to allow words to contract associations with other words both in the form in which they actually occur and in a more normalized form that will throw them together with morphologically related other words in the textthe strategy we adopted was to make entries in the wsi not only for maximal strings of alphabetic characters occurring in the texts but also for other strings that could usefully be regarded as normalized forms of theseclearly one way to obtain normalized forms of words is to employ a fully fledged morphological analyzer for each of the languageshowever we were concerned that our methods should be as independent as possible of any specific facts about the languages being treated since this would make them more readily usablefurthermore since our methods attend only to very gross features of the texts it seemed unreasonable that their success should turn on a very fine analysis at any levelwe argue that by adding a guess as to how a word should be normalized to the wsi we remove no associations that could have been formed on the basis of the original word but only introduce the possibility of some additional associationsalso it is unlikely that an incorrect normalization will contract any associations at all especially in view of the fact that these forms because they normalize several original forms tend to occur more oftenthey will therefore rarely be misleadingfor us a normalized form of a word is always an initial or a final substring of that wordno attention is paid to morphographemic or wordinternal changesa word is broken into two parts one of which becomes the normalized form if there is evidence that the resulting prefix and suffix belong to a paradigmin particular both must occur as prefixes and suffixes of other formsthe algorithm proceeds in two stagesfirst a data structure called the trie is constructed in which information about the occurrences of potential prefixes and suffixes in the text is storedsecond words are split where the trie provides evidence for doing so and one of the resulting parts is chosen as the normalization information with strings of charactersit is particularly economical in situations where many of the strings of interest are substrings of others in the seta trie is in fact a tree with a branch at the root node for every character that begins a string in the setto look up a string one starts at the root and follows the branch corresponding to its first character to another nodefrom there the branch for the second character is followed 
to a third node and so on until either the whole string has been matched or it has been discovered not to be in the setif it is in the set then the node reached after matching its last character contains whatever information the structure contains for itthe economy of the scheme lies in the fact that a node containing information about a string also serves as a point on the way to longer strings of which the given one is a prefixin this application two items of information are stored with a string namely the number of textual words in which it occurs as a prefix and as a suffixthere is a function from potential break points in words to numbers whose value is maximized to choose the best point at which to breakif p and s are the potential prefix and suffix respectively and P and S are the numbers of words in the text in which they occur as such the value of the function is k·P·S the quantity k is introduced to enable us to prefer certain kinds of breaks over othersfor the english and german texts used in our experiments k = length(p) so as to favor long prefixes on the grounds that both languages are primarily suffixingif the function has the same value for more than one potential break point the one farthest to the right is preferred also for the reason that we prefer to maximize the lengths of prefixesonce it has been decided to divide a word and at what place one of the two parts is selected as the putative canonical form of the word namely whichever is longer and the prefix if both are of equal lengthfinally any other words in the same text that share the chosen prefix are split at the corresponding place and so assigned to the same canonical formthe morphological algorithm treats words that appear hyphenated in the text speciallythe hyphenated word is treated as a unit just as it appears and so are the strings that result from breaking the word at the hyphensin addition the analysis procedure described above is applied to these components and any putative normal forms found are also usedit is worth pointing out that we received more help from hyphens than one might normally expect in our analysis of the german texts because of a tendency on the part of the spektrum der wissenschaft translators following standard practice for technical writing of hyphenating compoundsin this section we show some of the results of our experiments with these algorithms and also data produced at some of the intermediate stageswe applied the methods described here to two pairs of articles from scientific american and their german translations in spektrum der wissenschaft the english and german articles about humanpowered flight had 214 and 162 sentences respectively the ones about cosmic rays contained 255 and 300 sentences respectivelythe first pair was primarily used to develop the algorithm and to determine the various parameters of the programthe performance of the algorithm was finally tested on the latter pair of articleswe chose these journals because of a general impression that the translations were of very high quality and were sufficiently "free" to be a substantial challenge for the algorithmfurthermore we expected technical translators to adhere to a narrow view of semantic accuracy in their work and to rate the importance of this above stylistic considerationslater we also give results for another application of our algorithm to a larger text of 1257 sentences that was put together from two days of the frenchenglish hansard corpus
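The suffix/prefix splitting heuristic described above (score each break point of a word into prefix p and suffix s as k·P·S with k = length(p), prefer the rightmost maximum, and take the longer part as the normal form) might be sketched as follows. Plain counters stand in for the trie, and the vocabulary in the usage line is invented; this is an illustration of the scoring rule under those assumptions, not the authors' program.

```python
from collections import Counter

def best_split(word, vocab):
    """Score every break point of `word` as k * P * S (k = prefix length,
    P / S = how many other words share that prefix / suffix) and return
    the chosen split together with the normalized form."""
    prefix_count, suffix_count = Counter(), Counter()
    for w in vocab:
        if w == word:                       # evidence must come from other forms
            continue
        for i in range(1, len(w)):
            prefix_count[w[:i]] += 1
            suffix_count[w[i:]] += 1

    best_score, best_i = 0, None
    for i in range(1, len(word)):
        p, s = word[:i], word[i:]
        score = len(p) * prefix_count[p] * suffix_count[s]
        if score > 0 and score >= best_score:   # ">=" lets the rightmost break win ties
            best_score, best_i = score, i

    if best_i is None:
        return None, word                   # no evidence: the word stays whole
    p, s = word[:best_i], word[best_i:]
    normal = p if len(p) >= len(s) else s   # longer part (prefix on ties) is the normal form
    return (p, s), normal

# toy vocabulary: "protons" comes out as ("proto", "ns") with normal form
# "proto", much like the "proto-" entry discussed in the text
vocab = {"proton", "protons", "photon", "photons", "pion", "pions"}
print(best_split("protons", vocab))
```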
table 1 shows the first 50 entries of the wat after pass 1 of the algorithmit shows part of the first section of the wat and the beginning of the second the first segment contains words or normalized forms with more than 7 occurrences and a similarity not less than 08strings shown with a following hyphen are prefixes arising from the morphological procedure strings with an initial hyphen are suffixesnaturally some of the word divisions are made in places that do not accurately reflect linguistic factsfor example english "proto-" comes from "proton" and "protons" german "-eilchen" is the normalization for words ending in "teilchen" and in the same way "-eistung" comes from "leistung" of these 50 word pairs 42 have essentially the same meaningswe take it that "erg" and "joule" in line 4 mean the same modulo a change in unitsalso it is not unreasonable to associate pairs like "primary"-"sekundären" and "electric"-"feld" on the grounds that they tend to be used togetherthe pair "rapid"-"pulsare" is made because a pulsar is a rapidly spinning neutron star and some such phrase occurs with it five out of six timesnotice however that the association "pulsar"-"pulsar" is also in the table furthermore the german strings "pulsar" and "pulsare" are both given correct associations in the next pass the table shows two interesting effects of the morphological analysis procedurethe word "shower" is wrongly associated with the word "gammaquant" with a frequency of 6 but the prefix "shower-" is correctly associated with "luftschauer" with a frequency of 20on the other hand the incorrect association of "element" with "-usammensetzung" is on the basis of a normalized form whereas "zusammensetzung" unnormalized is correctly associated with "composition" totally unrelated words are associated in a few instances as in "observatory"-"diesem" "detectors"-"primäre" and "bright"-"astronom-" of these only the second remains at the end of the third passthe english "observatory" is then properly associated with the german word "observatorium" at that stage "bright" has no associationfigure 2 shows part of the sat at the end of pass 1 of the relaxation cyclesentences in the english text and in the german text are identified by numbers on the abscissa and the ordinate respectivelyentries in the array indicate that the sentences are considered to correspondthe numbers show how often a particular association is supported which is essentially equivalent to how many word pairs in the wat support such an associationif there are no such numbers then no associations have been found for it at this stagefor example the association of english sentence 148 with german sentence 170 is supported by three different word pairsit is already very striking how strongly occupied entries in this table constrain the possible entries in the unoccupied slotsfigure 3 shows part of the ast before pass 2this is derived directly from the material illustrated in figure 2the abscissa gives the english sentence number and in the direction of the ordinate the associated german sentences are shown those sentence pairs in figure 2 supported by at least three word pairs namely those shown on lines 148 192 194 and 196 are assumed to be reliable and they are the only associations shown for these sentences in figure 3candidate associations have been provided for the intervening sentences by the
interpolation method described abovenotice that the greatest number of candidates are shown against sentences occurring midway between a pair assumed to have been reliably connected table 2 shows the first 100 entries of the wat after pass 3 where the threshold for the similarity was lowered to 05as we pointed out earlier most of the incorrect associations in table 1 have been eliminatedgerman quotmilchstrasequot is not a translation of the english quotgalaxyquot but the milky way is indeed a galaxy and quotthe galaxyquot is sometimes used in place of quotmilky wayquot where the reference is clearthe association between quotperiodquot and quotstundenquot is of a similar kindthe words are strongly associated because of recurring phrases of the form quotin a 48hour periodquot figure 4 gives the sat after pass 3it is immediately apparent first that the majority of the sentences have been associated with probable translations and second that many of these associations are very strongly supportedfor example note that the correspondence between english sentence 190 and german sentence 219 is supported 21 timesusing this table it is in fact possible to locate the translation of a given english sentence to within two or three sentences in the german text and usually more closely than thathowever some ambiguities remainsome of the apparent anomalies come the ast before pass 2 from stylistic differences in the way the texts were presented in the two journalsthe practice of scientific american is to collect sequences of paragraphs into a logical unit by beginning the first of them with an oversized letterthis is not done in spektrum der wissenschaf t which instead provides a subheading at these pointsthis therefore appears as an insertion in the translationtwo such are sentences number 179 and 233 but our procedure has not created incorrect associations for themrecall that the alignment problem derives its interest from the fact that single sentences are sometimes translated as sequences of sentences and converselythese cases generally stand out strongly in the output that our method deliversfor example the english sentence pair yet whereas many of the most exciting advances in astronomy have come from the detailed analysis of xray and radio sources until recently the source of cosmic rays was largely a matter of speculationthey seem to come from everywhere raining down on the earth from all directions at a uniform rate is rendered in german by the single sentence dennoch blieben die quellen der kosmischen strahlung die aus alien richtungen gleichmasig auf die erde zu treffen scheint bis vor kurzem reine spekulation wahrend einige der aufregendsten fortschritte in der astronomic aus dem detaillierten studium von röntgen und radiowellen herriihrtenthe second english sentence becomes a relative clause in the germanmore complex associations also show up clearly in the resultsfor example english sentences 218 and 219 are translated by german sentences 253 254 and 255 the sat after pass 3 where 254 is a translation of the latter part of 218 and the early part of 219 when a proton strikes a gas nucleus it produces three kinds of pion of which one kind decays into two gamma raysthe gamma rays travel close to the original trajectory of the proton and the model predicts they will be beamed toward the earth at just two points on the pulsars orbit around the companion startrifft them proton auf einen atomkern in dieser gashfille werden drei arten von pionen erzeugtdie neutralen pionen zerfallen in jeweils zwei 
gammaquanten die sich beinahe in dieselbe richtung wie das urspriingliche proton bewegennach der modellvorstellung gibt es gerade zwei positionen i am umlauf des pulsars urn semen begleitstern bei denen die strahlung in richtung zum beobachter auf der erde ausgesandt wirdanother example is provided by english sentences 19 and 20 which appear in german as sentences 21 and 22however the latter part of english sentence 19 is in fact transferred to sentence 22 in the germanthis is also unmistakable in the final resultsnotice also in this example that the definition of quotphotonquot has become a parenthetical expression at the beginning of the second german sentence a fact which is not reflectedthe other end of the cosmicray energy spectrum is defined somewhat arbitrarily any quantum greater than 108 electron volts arriving from space is considered a cosmic raythe definition encompasses not only particles but also gammaray photons which are quanta of electromagnetic radiationdas untere ende des spektrums der kosmischen strahlen ist verhaltnismai3ig unscharf definiertjedes photon oder teilchen mit einer energie von mehr als 108 elektronenvolt das aus dem weltraum eintrifft bezeichnet man als kosmischen strahlit frequently occurred in our data that sentences that were separated by colons or semicolons in the original appeared as completely distinct sentences in the german translationindeed the common usage in the two languages would probably have been better represented if we had treated colons and semicolons as sentence separators along with periods question marks and the likethere are of course situations in english in which these punctuation marks are used in other ways but they are considerably less frequent and in any case it seems that our program would almost always make the right associationsan example involving the colon is to be found in sentence 142 of the original translated as sentences 163 and 164 the absorption lines established a lower limit on the distance of cygnus x3 it must be more distant than the farthest hydrogen cloud which is believed to lie about 37000 lightyears away near the edge of the galaxyaus dieser absorptionslinie kann man eine untere grenze der entfernung von cygnus x bestimmendie quelle mu13 jenseits der am weitesten entfernten wasserstoffwolke sein also weiter als ungefahr 37000 lichtjahre entfernt am rande der milchstrai3eenglish sentence 197 containing a semicolon is translated by german sentences 228 and 229 the estimate is conservative because it is based on the gamma rays observed arriving at the earth it does not take into account the likelihood that cygnus x emits cosmic rays in all directionsdies ist eine vorsichtige absch5tzungsie ist nur aus den gammastrahlendaten abgeleitet die auf der erde gemessen werden dai3 cygnus x3 wahrscheirtlich kosmische strahlung in alle richtungen aussendet ist dabei noch nicht beriicksichtigtsentence alignment of the first 50 sentences of the test texts true alignment and hypothesis of the sat after the first pass and after the second pass table 3 summarizes the accuracy of the algorithm as a function of the number of passesthe sat is evaluated by two criteria the number of correct alignments divided by the total number of alignments andsince the sat does not necessarily give an alignment for every sentencethe coverage ie the number of sentences with at least one entry relative to the total number of sentencesan alignment is said to be correct if the sat contains exactly the numbers of the sentences that are complete or 
partial translations of the original sentencethe coverage of 96 of the sat in pass 4 is as much as one would expect since the remaining nonaligned sentences are onezero alignments most of them due to the german subheadings that are not part of the english versionthe table also shows that the ast always provides a significant number of candidates for alignment with each sentence before a pass the fourth column gives the number of true sentence alignments relative to the total number of candidates in the astrecall that the final alignment is always a subset of the hypotheses in the ast in every preceding passfigure 5 shows the true sentence alignment for the first 50 sentences and how the algorithm discovered them in the first pass only a few sentences are set into correspondence after the second pass already almost half of the correspondences are foundnote that there are no wrong alignments in the first two passesin the third pass almost all of the remaining alignments are found and a final pass usually completes the alignmentour algorithm produces very favorable results when allowed to converge graduallyprocessing time in the original lisp implementation was high typically several hours for each passby trading cpu time for memory massively the time needed by a c implementation on a sun 475 was reduced to 17 mm for the first pass 08 mm for the second and 05 min for the third pass in an application to this pair of articlesit should be noted that a naive implementation of the algorithm without using the appropriate data structures can easily lead to times that are a factor of 30 higher and do not scale up to larger textsthe application of our method to a text that we put together from the hansard corpus had essentially no problem in identifying the correct sentence alignment in a process of five passesthe alignments for the first 1000 sentences of the english text were checked by hand and seven errors were found five of them occurred in sentences where sentence boundaries were not correctly identified by the program because of periods that did not mark a sentence boundary and were not identified as such by a very simple preprocessing programthe other two errors involved two short sentences for which the sat did not give an alignmentprocessing time increased essentially linearly the first pass took 83 min the second 32 mm and it further decreased until the last pass which took 21 minnote that the error rate depends crucially on the kind of quotannealing schedulequot used if the thresholds that allow a word pair in the wat to influence the sat are lowered too fast only a few passes are needed but accuracy deterioratesfor example in an application where the process terminated after only three passes the accuracy was only in the eighties since processing time after the first pass is usually already considerably lower we have found that a high accuracy can safely be attained when more passes are allowed than are actually necessaryin order to evaluate the sensitivity of the algorithm to the lengths of the texts that are to be aligned we applied it to text samples that ranged in length from 10 to 1000 sentences and examined the accuracy of the wat after the first pass that is more precisely the number of word pairs in the wat that are valid translations relative to the total number of word pairs with a similarity of not less than 07 the result is that this accuracy increases asymptotically to 1 with the text length and is already higher than 80 for a text length of 100 sentences roughly speaking the 
accuracy is almost 1 for texts longer than 150 sentences and around 05 for text length in the lower range from 20 to 60in other words texts of a length of more than 150 sentences are suitable to be processed in this way text fragments shorter than 80 sentences do not have a high proportion of correct word pairs in the first wat but further experiments showed that the final alignment for texts of this length is on average again almost perfect the drawback of a less accurate initial wat is apparently largely compensated for by the fact that the ast is also narrower for these texts however the variance in the alignment accuracies is significantly highersince we addressed the text translation alignment problem in 1988 a number of researchers among them gale and church and brown lai and mercer have worked on the problemboth methods are based on the observation that the length of text unit is highly correlated to the length of the translation of this unit no matter whether length is measured in number of words or in number of characters consequently they are both easier to implement than ours though not necessarily more efficientthe method of brown lai and mercer is based on a hidden markov model for the generation of aligned pairs of corpora whose parameters are estimated from a large textfor an application of this method to the canadian hansard good results are reportedhowever the problem was also considerably facilitated by the way the implementation made use of hansardspecific comments and annotations these are used in a preprocessing step to find anchors for sentence alignment such that on average there are only ten sentences in betweenmoreover this particular corpus is well known for the near literalness of its translations and it is therefore unclear to what extent the good results are due to the relative ease of the problemthis would be an important consideration when comparing various algorithms when the algorithms are actually applied it is clearly very desirable to incorporate as much prior knowledge as possiblemoreover long texts can almost always be expected to contain natural anchors such as chapter and section headings at which to make an a priori segmentationgale and church note that their method performed considerably better when lengths of sentences were measured in number of characters instead of in number of wordstheir method is based on a probabilistic model of the distance between two sentences and a dynamic programming algorithm is used to minimize the total distance between aligned unitstheir implementation assumes that each character in one language gives rise to on average one character in the other languagein our texts one character in english on average gives rise to somewhat more than 12 characters in german and the correlation between the lengths of aligned paragraphs in the two languages was with 0952 lower than the 0991 that are mentioned in gale and church which supports our impression that the scientific american texts we used are hard texts to align but it is not clear to what extent this would deteriorate the resultsin applications to economic reports from the union bank of switzerland the method performs very well on simple alignments but has at the moment problems with complex matchesthe method has the advantage of associating a score with pairs of sentences so that it is easy to extract a subset for which there is a high likelihood that the alignments are correctgiven the simplicity of the methods proposed by brown lai and mercer and gale and church either 
of them could be used as a heuristic in the construction of the initial ast in our algorithmin the current version the number of candidate sentence pairs that are considered in the first pass near the middle of a text contributes disproportionally to the cost of the computationin fact as we remarked earlier the complexity of this step is 0the proposed modification would effectively make it linearfor most practical purposes the alignment algorithm we have described produces very satisfactory results even when applied to relatively free translationsthere are doubtless many places in which the algorithm itself could be improvedfor example it is clear that the present method of building the sat favors associations between long sentences and this is not surprising because there is more information in long sentencesbut we have not investigated the extent of this bias and we do not therefore know it as appropriatethe present algorithm rests on being able to identify onetoone associations between certain words notably technical terms and proper namesit is clear from a brief inspection of table 2 that very few correspondences are noticed among everyday words and when they are it is usually because those words also have precise technical usesthe very few exceptions include quotonlyquotquotnurquot and thequotquotdiequot the pair quotperquotquotproquot might also qualify but if the languages afford any example of a scientific preposition this is surely itthe most interesting further developments would be in the direction of loosening up this dependence on onetoone associations both because this would present a very significant challenge and also because we are convinced that our present method identifies essentially all the significant onetoone associationsthere are two obvious kinds of looser associations that could be investigatedone would consist of connections between a single vocabulary item in one language and two or more in the other or even between several items in one language and several in the otherthe other would involve connectionsoneone onemany or manymanybetween phrases or recurring sequenceswe have investigated the first of these enough to satisfy ourselves that there is latent information on onetomany associations in the text and that it can be revealed by suitable extensions of our methodshowever it is clear that the combinatorial problems associated with this approach are severe and pursuing it would require much fine tuning of the program and designing much more effective ways of indexing the most important data structuresthe key to reducing the combinatorial explosion probably lies in using tables of similarities such as those the present algorithm uses to suggest combinations of items that would be worth consideringif such an approach could be made efficient enough it is even possible that it would provide a superior way of solving the problem for which our heuristic methods of morphological analysis were introducedits superiority would come from the fact that it would not depend on words being formed by concatenation but would also accommodate such phenomena as umlaut ablaut vowel harmony and the nonconcatenative process of semitic morphologythe problems of treating recurring sequences are less severedata structures such as the patricia tree provide efficient means of identifying all such sequences and once identified the data they provide could be added to the wat much as we now add the results of morphological analysisneedless to say this would only allow for uninterrupted 
sequencesany attempt to deal with discontinuous sequences would doubtless also involve great combinatorial problemsthese avenues for further development are intriguing and would surely lead to interesting resultsbut it is unlikely that they would lead to much better sets of associations among sentences than are to be found in the sats that our present program produces and it was mainly these results that we were interested in from the outsetthe other avenues we have mentioned concern improvements in the wat which for us was always a secondary interest
J93-1006
Text-Translation Alignment. We present an algorithm for aligning texts with their translations that is based only on internal evidence. The relaxation process rests on a notion of which word in one text corresponds to which word in the other text, a notion that is essentially based on the similarity of their distributions. It exploits a partial alignment at the word level to induce a maximum likelihood alignment at the sentence level, which is in turn used, in the next iteration, to refine the word-level estimate. The algorithm appears to converge to the correct sentence alignment in only a few iterations. Our morphology algorithm is applied to split off potential suffixes and prefixes and to obtain normalized word forms.
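As a rough illustration of the relaxation scheme summarized above, in which word-level correspondences induce a sentence-level alignment that in turn refines the word-level estimate, here is a deliberately simplified sketch. The similarity test (shared co-occurring word pairs), the banded search around the diagonal, the thresholds, and the fixed number of iterations are placeholders of my own; the paper's actual procedure relies on richer data structures (the AST, WAT, and SAT mentioned earlier) and a distributional similarity measure.

```python
from collections import Counter
from itertools import product

def align(src_sents, tgt_sents, iterations=3, band=2):
    """Toy relaxation loop: word correspondences <-> sentence alignment.

    src_sents / tgt_sents: non-empty lists of tokenized sentences.
    Returns a list of (i, j) sentence-index pairs."""
    # Initial guess: pair sentences with similar relative positions.
    pairs = [(i, min(len(tgt_sents) - 1,
                     round(i * len(tgt_sents) / len(src_sents))))
             for i in range(len(src_sents))]
    for _ in range(iterations):
        # 1. Induce word correspondences from the current sentence alignment.
        cooc = Counter()
        for i, j in pairs:
            for a, b in product(set(src_sents[i]), set(tgt_sents[j])):
                cooc[(a, b)] += 1
        anchors = {ab for ab, c in cooc.items() if c >= 2}  # crude threshold
        # 2. Re-align sentences: within a small band around the diagonal,
        #    choose the target sentence sharing the most anchor pairs.
        new_pairs = []
        for i, s in enumerate(src_sents):
            centre = round(i * len(tgt_sents) / len(src_sents))
            candidates = range(max(0, centre - band),
                               min(len(tgt_sents), centre + band + 1))
            best = max(candidates,
                       key=lambda j: sum((a, b) in anchors
                                         for a in set(s)
                                         for b in set(tgt_sents[j])))
            new_pairs.append((i, best))
        pairs = new_pairs
    return pairs
```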
retrieving collocations from text xtract natural languages are full of collocations recurrent combinations of words that cooccur more often than expected by chance and that correspond to arbitrary word usages recent work in lexicography indicates that collocations are pervasive in english apparently they are common in all types of writing including both technical and nontechnical genres several approaches have been proposed to retrieve various types of collocations from the analysis of large samples of textual data these techniques automatically produce large numbers of collocations along with statistical figures intended to reflect the relevance of the associations however none of these techniques provides functional information along with the collocation also the results produced often contained improper word associations reflecting some spurious aspect of the training corpus that did not stand for true collocations in this paper we describe a set of techniques based on statistical methods for retrieving and identifying collocations from large textual corpora these techniques produce a wide range of collocations and are based on some original filtering methods that allow the production of richer and higherprecision output these techniques have been implemented and resulted in a tool techniques are described and some results are presented on a 10 corpus of stock market news reports a lexicographic evaluation of a retrieval tool has been made and the estimated precision of 80 natural languages are full of collocations recurrent combinations of words that cooccur more often than expected by chance and that correspond to arbitrary word usagesrecent work in lexicography indicates that collocations are pervasive in english apparently they are common in all types of writing including both technical and nontechnical genresseveral approaches have been proposed to retrieve various types of collocations from the analysis of large samples of textual datathese techniques automatically produce large numbers of collocations along with statistical figures intended to reflect the relevance of the associationshowever none of these techniques provides functional information along with the collocationalso the results produced often contained improper word associations reflecting some spurious aspect of the training corpus that did not stand for true collocationsin this paper we describe a set of techniques based on statistical methods for retrieving and identifying collocations from large textual corporathese techniques produce a wide range of collocations and are based on some original filtering methods that allow the production of richer and higherprecision outputthese techniques have been implemented and resulted in a lexicographic tool xtractthe techniques are described and some results are presented on a 10 millionword corpus of stock market news reportsa lexicographic evaluation of xtract as a collocation retrieval tool has been made and the estimated precision of xtract is 80consider the following sentences voir la porte to see the door die tar sehen to see the door vedere la porta to see the door ver la puerta to see the door lcapiyi gormek to see the door enfoncer la porte to push the door through die tiir aufbrechen to break the door sfondare la porta to hitdemolish the door tumbar la puerta to fall the door kapiyi kirmak to break the door the above sentences contain expressions that are difficult to handle for nonspecialistsfor example among the eight different expressions referring to the famous 
wall street index only those used in sentences 14 are correctthe expressions used in the starred sentences 58 are all incorrectthe rules violated in sentences 58 are neither rules of syntax nor of semantics but purely lexical rulesthe word combinations used in sentences 58 are invalid simply because they do not exist similarly the ones used in sentences 14 are correct because they existexpressions such as these are called collocationscollocations vary tremendously in the number of words involved in the syntactic categories of the words in the syntactic relations between the words and in how rigidly the individual words are used togetherfor example in some cases the words of a collocation must be adjacent as in sentences 15 above while in others they can be separated by a varying number of other wordsunfortunately with few exceptions collocations are generally unavailable in compiled formthis creates a problem for persons not familiar with the sublanguagel as well as for several machine applications such as language generationin this paper we describe a set of techniques for automatically retrieving such collocations from naturally occurring textual corporathese techniques are based on statistical methods they have been implemented in a tool xtract which is able to retrieve a wide range of collocations with high performancepreliminary results obtained with parts of xtract have been described in the past this paper gives a complete description of the system and the results obtainedxtract now works in three stagesin the first stage pairwise lexical relations are retrieved using only statistical informationthis stage is comparable to church and hanks in that it evaluates a certain word association between pairs of wordsas in church and hanks the words can appear in any order and they can be separated by an arbitrary number of other wordshowever the statistics we use provide more information and allow us to have more precision in our outputthe output of this first stage is then passed in parallel to the next two stagesin the second stage multipleword combinations and complex expressions are identifiedthis stage produces output comparable to that of choueka klein and neuwitz however the techniques we use are simpler and only produce relevant datafinally by combining parsing and statistical techniques the third stage labels and filters collocations retrieved at stage onethe third stage has been evaluated to raise the precision of xtract from 40 to 80 with a recall of 94section 2 is an introductory section on collocational knowledge section 3 describes the type of collocations that are retrieved by xtract and section 4 briefly surveys related efforts and contrasts our work to themthe three stages of xtract are then introduced in section 5 and described respectively in sections 6 7 and 8some results obtained by running xtract on several corpora are listed and discussed in section 9qualitative and quantitative evaluations of our methods and of our results are discussed in sections 10 and 11finally several possible applications and tasks for xtract are discussed in section 12there has been a great deal of theoretical and applied work related to collocations that has resulted in different characterizations depending on their interests and points of view researchers have focused on different aspects of collocationsone of the most comprehensive definition that has been used can be found in the lexicographic work of benson and his colleagues the definition is the followinga collocation is an arbitrary 
and recurrent word combination this definition however does not cover some aspects and properties of collocations that have consequences for a number of machine applicationsfor example it has been shown that collocations are difficult to translate across languagesthis fact obviously has a direct application for machine translationmany properties of collocations have been identified in the past however the tendency was to focus on a restricted type of collocationin this section we present four properties of collocations that we have identified and discuss their relevance to computational linguisticscollocations are difficult to produce for second language learners in most cases the learner cannot simply translate wordforword what she would say in herhis native languageas we can see in table 1 the wordforword translation of quotto open the doorquot works well in both directions in all five languagesin contrast translating wordforword the expression quotto break downforce the doorquot is a poor strategy in both directions in all five languagesthe cooccurrence of quotdoorquot and quotopenquot is an open or free combination whereas the combination quotdoorquot and quotbreak downquot is a collocationlearners of english would not produce quotto break down a doorquot whether their first language is french german italian spanish or turkish if they were not aware of the constructfigure 1 illustrates disagreements between british english and american englishhere the problem is even finer than in table 1 since the disagreement is not across two different languages but across dialects of englishin each of the sentences given in this figure there is a different word choice for the american and the british english the word choices do not correspond to any syntactic or semantic variation of english but rather to different word usages in both dialects of englishtranslating from one language to another requires more than a good knowledge of the syntactic structure and the semantic representationbecause collocations are arbitrary they must be readily available in both languages for effective machine translationin addition to nontechnical collocations such as the ones presented before domainspecific collocations are numeroustechnical jargons are often totally unintelligible for the laymanthey contain a large number of technical termsin addition familiar words seem to be used differentlyin the domain of sailing for example some words are unknown to the nonfamiliar reader rigg jib and leeward are totally meaningless to the laymansome other combinations apparently do not contain any technical words but these words take on a totally different meaning in the domainfor example a dry suit is not a suit that is dry but a special type of suit used by sailors to stay dry in difficult weather conditionssimilarly a wet suit is a special kind of suit used for several marine activitiesnative speakers are often unaware of the arbitrariness of collocations in nontechnical core english however this arbitrariness becomes obvious to the native speaker in specific sublanguagessome examples of predicative collocationslinguistically mastering a domain such as the domain of sailing thus requires more than a glossary it requires knowledge of domaindependent collocationsthe recurrent property simply means that these combinations are not exceptions but rather that they are very often repeated in a given contextword combinations such as quotto make a decision to hit a record to perform an operationquot are typical of the language and 
collocations such as quotto buy shortquot quotto ease the jibquot are characteristic of specific domainsboth types are repeatedly used in specific contextsby cohesive2 clusters we mean that the presence of one or several words of the collocations often implies or suggests the rest of the collocationthis is the property mostly used by lexicographers when compiling collocations lexicographers use other people linguistic judgment for deciding what is and what is not a collocationthey give questionnaires to people such as the one given in figure 2this questionnaire contains sentences used by benson for compiling collocational knowledge for the bbi each sentence contains an empty slot that can easily be filled in by native speakersin contrast second language speakers would not find the missing words automatically but would consider a long list of words having the appropriate semantic and syntactic features such as the ones given in the second columnas a consequence collocations have particular statistical distributions this means that for example the probability that any two adjacent words in a sample will be quotred herringquot is considerably larger than the probability of quotredquot times the probability of quotherringquot the words cannot be considered as independent variableswe take advantage of this fact to develop a set of statistical techniques for retrieving and identifying collocations from large textual corporacollocations come in a large variety of formsthe number of words involved as well as the way they are involved can vary a great dealsome collocations are very rigid whereas others are very flexiblefor example a collocation such as the one linking quotto makequot and quotdecisionquot can appear as quotto make a decisionquot quotdecisions to be madequot quotmade an important decisionquot etcin contrast a collocation such as quotthe new york stock exchangequot can only appear under one form it is a very rigid collocation a fixed expressionwe have identified three types of collocations rigid noun phrases predicative relations and phrasal templateswe discuss the three types in turn and give some examples of collocationsa predicative relation consists of two words repeatedly used together in a similar syntactic relationthese lexical relations are the most flexible type of collocationthey are hard to identify since they often correspond to interrupted word sequences in the corpusfor example a noun and a verb will form a predicative relation if they are repeatedly used together with the noun as the object of the verbquotmakedecisionquot is a good example of a predicative relationsimilarly an adjective repeatedly modifying a given noun such as quothostiletakeoverquot also forms a predicative relationexamples of automatically extracted predicative relations are given in figure 33 this class of collocations is related to menuk lexical functions and benson ltype relations rigid noun phrases involve uninterrupted sequences of words such as quotstock marketquot quotforeign exchangequot quotnew york stock exchangequot quotthe dow jones average of 30 industrialsquot they can include nouns and adjectives as well as closed class words and are similar to the type of collocations retrieved by choueka and amsler they are the most rigid type of collocationexamples of rigid noun phrases are4 quotthe nyse composite index of all its listed common stocksquot quotthe nasdaq composite index for the over the counter marketquot quotleveraged buyoutquot quotthe gross national productquot quotwhite house 
spokesman marlin fitzwaterquot in general rigid noun phrases cannot be broken into smaller fragments without losing their meaning they are lexical units in and of themselvesmoreover they often refer to important concepts in a domain and several rigid noun phrases can be used to express the same conceptin the new york stock exchange domain for example quotthe dow industrialsquot quotthe dow jones average of 30 industrial stocksquot quotthe dow jones industrial averagequot and quotthe dow jones industrialsquot represent several ways to express a single conceptas we have seen before these rigid noun phrases do not seem to follow any simple construction rule as for example the examples given in sentences 68 at the beginning of the paper are all incorrectphrasal templates consist of idiomatic phrases containing one several or no empty slotsthey are phraselong collocationsfigure 4 lists some examples of phrasal templates in the stock market domainin the figure the empty slots must be filled in by a number more generally phrasal templates specify the parts of speech of the words that can fill the empty slotsphrasal templates are quite representative of a given domain and are very often repeated in a rigid way in a given sublanguagein the domain of weather reports for example the sentence quottemperatures indicate previous day high and overnight low to 8 amquot is actually repeated before each weather reportunlike rigid noun phrases and predicative relations phrasal templates are specifically useful for language generationbecause of their slightly idiosyncratic structure generating them from single words is often a very difficult task for a language generatoras pointed out by kukich in general their usage gives an impression of fluency that could not be equaled with compositional generation alonethere has been a recent surge of research interest in corpusbased computational linguistics methods that is the study and elaboration of techniques using large real text as a basissuch techniques have various applicationsspeech recognition and text compression have been of longstanding interest and some new applications are currently being investigated such as machine translation spelling correction parsing as pointed out by bell witten and cleary these applications fall under two research paradigms statistical approaches and lexical approachesin the statistical approach language is modeled as a stochastic process and the corpus is used to estimate probabilitiesin this approach a collocation is simply considered as a sequence of words among millions of other possible sequencesin contrast in the lexical approach a collocation is an element of a dictionary among a few thousand other lexical itemscollocations in the lexicographic meaning are only dealt with in the lexical approachaside from the work we present in this paper most of the work carried out within the lexical approach has been done in computerassisted lexicography by choueka klein and neuwitz and church and his colleagues both works attempted to automatically acquire true collocations from corporaour work builds on choueka and has been developed contemporarily to churchchoueka klein and neuwitz proposed algorithms to automatically retrieve idiomatic and collocational expressionsa collocation as defined by choueka is a sequence of adjacent words that frequently appear togetherin theory the sequences can be of any length but in actuality they contain two to six wordsin choueka experiments performed on an 11 millionword corpus taken from the new york 
times archives are reportedthousands of commonly used expressions such as quotfried chickenquot quotcasual sexquot quotchop sueyquot quothome runquot and quotmagic johnsonquot were retrievedchoueka methodology for handling large corpora can be considered as a first step toward computeraided lexicographythe work however has some limitationsfirst by definition only uninterrupted sequences of words are retrieved more flexible collocations such as quotmakedecisionquot in which the two words can be separated by an arbitrary number of words are not dealt withsecond these techniques simply analyze the collocations according to their observed frequency in the corpus this makes the results too dependent on the size of the corpusfinally at a more general level although disambiguation was originally considered as a performance task the collocations retrieved have not been used for any specific computational taskchurch and hanks describe a different set of techniques to retrieve collocationsa collocation as defined in their work is a pair of correlated wordsthat is a collocation is a pair of words that appear together more often than expectedchurch et al improve over choueka work as they retrieve interrupted as well as uninterrupted sequences of wordsalso these collocations have been used by an automatic parser in order to resolve attachment ambiguities they use the notion of mutual information as defined in information theory in a manner similar to what has been used in speech recognition or text compression to evaluate the correlation of common appearances of pairs of wordstheir work however has some limitations toofirst by definition it can only retrieve collocations of length twothis limitation is intrinsic to the technique used since mutual information scores are defined for two itemsthe second limitation is that many collocations identified in church and hanks do not really identify true collocations but simply pairs of words that frequently appear together such as the pairs quotdoctornursequot quotdoctorbillquot quotdoctorhonoraryquot quotdoctorsdentistsquot quotdoctorshospitalsquot etcthese cooccurrences are mostly due to semantic reasonsthe two words are used in the same context because they are of related meanings they are not part of a single collocational constructthe work we describe in the rest of this paper is along the same lines of researchit builds on choueka work and attempts to remedy the problems identified abovethe techniques we describe retrieve the three types of collocations discussed in section 2 and they have been implemented in a tool xtractxtract retrieves interrupted as well as uninterrupted sequences of words and deals with collocations of arbitrary length the following four sections describe and discuss the techniques used for xtractxtract consists of a set of tools to locate words in context and make statistical observation to identify collocationsin the upgraded version we describe here xtract has been extended and refinedmore information is computed and an effort has been made to extract more functional informationxtract now works in three stagesthe threestage analysis is described in sections 6 7 and 8in the first stage described in section 6 xtract uses straight statistical measures to retrieve from a corpus pairwise lexical relations whose common appearance within a single sentence are correlateda pair is retrieved if its frequency of occurrence is above a certain threshold and if the words are used in relatively rigid waysthe output of stage one is then passed to 
both the second and third stage in parallelin the second stage described in section 7 xtract uses the output bigrams to produce collocations involving more than two words it analyzes all sentences containing the bigram and the distribution of words and parts of speech for each position around the pairit retains words occupying a position with probability greater than a given thresholdfor example the bigram quotaverageindustrialquot produces the ngram quotthe dow jones industrial averagequot since the words are always used within rigid noun phrases in the training corpusin the third stage described in section 8 xtract adds syntactic information to collocations retrieved at the first stage and filters out inappropriate onesfor example if a bigram involves a noun and a verb this stage identifies it either as a subjectverb or as a verbobject collocationif no such consistent relation is observed then the collocation is rejectedaccording to cruse definition a syntagmatic lexical relation consists of a pair of words whose common appearances within a single phrase structure are correlatedin other words those two words appear together within a single syntactic construct more often than expected by chancethe first stage of xtract attempts to identify such pairwise lexical relations and produce statistical information on pairs of words involved together in the corpusideally in order to identify lexical relations in a corpus one would need to first parse it to verify that the words are used in a single phrase structurehowever in practice freestyle texts contain a great deal of nonstandard features over which automatic parsers would failfortunately there is strong lexicographic evidence that most syntagmatic lexical relations relate words separated by at most five other words in other words most of the lexical relations involving a word w can be retrieved by examining the neighborhood of w wherever it occurs within a span of five wordsin the work presented here we use this simplification and consider that two words cooccur if they are in a single sentence and if there are fewer than five words between themin this first stage we thus use only statistical methods to identify relevant pairs of wordsthese techniques are based on the assumptions that if two words are involved in a collocation then these two assumptions are used to analyze the word distributions and we base our filtering techniques on themin this stage as well as in the two others we often need partofspeech information for several purposesstochastic partofspeech taggers such as those in church and garside and leech have been shown to reach 9599 performance on freestyle textwe preprocessed the corpus with a stochastic partofspeech tagger developed at bell laboratories by ken church 9 in the rest of this section we describe the algorithm used for the first stage of xtract in some detailwe assume that the corpus is preprocessed by a part of speech tagger and we note w a collocate of w if the two words appear in a common sentence within a distance of 5 wordsinput the tagged corpus a given word w output all the sentences containing w description this actually encompasses the task of identifying sentence boundaries and the task of selecting sentences containing w the first task is not simple and is still an open problemit is not enough to look for a period followed by a blank space as for example abbreviations and acronyms such as sbf yousa and atm often pose a problemthe basic algorithm for isolating sentences is described and implemented by a 
finite-state recognizer. Our implementation could easily be improved in many ways; for example, it performs poorly on acronyms and often treats them as sentence ends. Giving it a list of currently used acronyms (such as NBA, EIK, etc.) would significantly improve its performance.

Step 1.2. Input: the output of Step 1.1, i.e., a set of tagged sentences containing w. Output: a list of words w_i with frequency information on how w and w_i co-occur. This includes the raw frequency as well as the breakdown into frequencies for each possible position; see Table 2 for example outputs. Description: for each input sentence containing w, we make a note of its collocates and store them along with their position relative to w, their part of speech, and their frequency of appearance. More precisely, for each prospective lexical relation, that is, for each potential collocate w_i, we maintain a data structure containing this information. The data structure is shown in Figure 5. It contains freq_i, the frequency of appearance of w_i with w so far in the corpus; PP_i, the part of speech of w_i; and p_i^j, the frequency of appearance of w_i at each of the ten possible positions within five words of w.

These statistics are then used to filter the candidate collocates. The first condition helps eliminate the collocates that are not frequent enough: it specifies that the frequency of appearance of w_i in the neighborhood of w must be at least one standard deviation above the average. In most statistical distributions, this thresholding eliminates the vast majority of the lexical relations. For example, for w = "takeover", among the 3,385 possible collocates only 167 were selected, which gives a proportion of 95% rejected; in the case of the standard normal distribution, this would reject some 68% of the cases. This indicates that the actual distribution of the collocates of "takeover" has a large kurtosis. Among the eliminated collocates were "dormant", "dilute", "ex", and "defunct", which obviously are not typical of a takeover. Although these rejected collocations might be useful for applications such as speech recognition, for example, we do not consider them any further here: we are looking for recurrent combinations and not casual ones. The second condition requires that the histogram of the 10 relative frequencies of appearance of w_i within five words of w have at least one spike; if the histogram is flat, it will be rejected by this condition. For example, in Figure 5, the histogram associated with w_2 would be rejected, whereas the ones associated with w_1 or w_i would be accepted. In Table 2, the histogram for "takeover"/"possible" is clearly accepted. The pairs retrieved by Church and Hanks discussed earlier, such as "doctors/dentists", "doctors/nurses", "doctor/bills", "doctors/hospitals", "nurses/doctor", etc., were of this type; they are not collocations in the sense defined above. Such collocations are not of interest for our purpose, although they could be useful for disambiguation or other semantic purposes. The second condition filters out exactly this type of collocation. The third condition pulls out the interesting relative positions of the two words: only positions j whose frequency p_i^j stands out from the rest of the histogram are retained. The first two conditions eliminate rows in the output of Step 1.2; in contrast, the third condition selects columns from the remaining rows. For each pair of words, one or several positions might be favored and thus result in several p_i^j being selected. For example, the pair "expensive"/"takeover" produced two different peaks, one with only one word between "expensive" and "takeover" and the other with two words. Example sentences containing the two words in the two possible positions are: "... the provision is aimed at making a hostile takeover prohibitively expensive by enabling Borg-Warner stockholders to buy the ..." and "The pill would make a takeover attempt more expensive by allowing the retailer's shareholders to buy more company stock ...".
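To make the three filtering conditions just described concrete, the sketch below implements a toy version of this analysis step for a single target word w. The window of five words on either side and the general shape of the tests follow the text; the constants k0 and peak_margin, the container layout, and the exact form of the spread and peak tests are assumptions of this sketch, not Xtract's actual settings.

```python
from collections import defaultdict
from statistics import mean, stdev

def stage1_filter(sentences, w, k0=1.0, peak_margin=1.0, window=5):
    """Toy version of Xtract's Stage 1 filtering for a target word w.

    sentences: iterable of tokenized sentences (lists of lowercased words).
    Returns {collocate: [interesting relative positions]}."""
    hist = defaultdict(lambda: defaultdict(int))  # collocate -> {offset: count}
    for sent in sentences:
        for i, tok in enumerate(sent):
            if tok != w:
                continue
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    hist[sent[j]][j - i] += 1

    freqs = {c: sum(h.values()) for c, h in hist.items()}
    if len(freqs) < 2:
        return {}
    avg, sd = mean(freqs.values()), stdev(freqs.values())

    results = {}
    for c, h in hist.items():
        counts = [h.get(d, 0) for d in range(-window, window + 1) if d != 0]
        # Condition 1 (strength): total frequency at least k0 standard
        # deviations above the average over all collocates of w.
        if freqs[c] < avg + k0 * sd:
            continue
        # Condition 2 (spread): reject histograms with no variation at all
        # across the ten positions (a crude stand-in for "has a spike").
        if stdev(counts) == 0:
            continue
        # Condition 3 (peaks): keep only positions whose count stands out
        # from the rest of the histogram.
        bar, spread = mean(counts), stdev(counts)
        peaks = [d for d in range(-window, window + 1)
                 if d != 0 and h.get(d, 0) >= bar + peak_margin * spread]
        if peaks:
            results[c] = peaks
    return results
```

The spread and peak tests above are only one simple reading of "the histogram must have at least one spike" and "only stand-out positions are retained"; the thresholds used in Xtract itself are not specified in this excerpt.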
Let us note that this filtering method is an original contribution of our work. Other works, such as Church and Hanks, simply focus on an evaluation of the correlation of appearance of a pair of words, which is roughly equivalent to our first condition. Taking note of their pattern of appearance, however, allows us to filter out more irrelevant collocations with the second and third conditions, and this is a very important point that will allow us to filter out many invalid collocations and also produce more functional information at Stages 2 and 3. A graphical interpretation of the filtering method used for Xtract is given in Smadja.

7. Xtract Stage Two: From 2-Grams to N-Grams. The role of the second stage of Xtract is twofold: it produces collocations involving more than two words, and it filters out some pairwise relations. Stage 2 is related to the work of Choueka and, to some extent, to what has been done in speech recognition. In this second stage, Xtract uses the same components used for the first stage, but in a different way. It starts with the pairwise lexical relations produced in Stage 1 and produces multiple-word collocations, such as rigid noun phrases or phrasal templates, from them. To do this, Xtract studies the lexical relations in context, which is exactly what lexicographers do. For each bigram identified at the previous stage, Xtract examines all instances of appearance of the two words and analyzes the distributions of words and parts of speech in the surrounding positions.

Step 2.1. Input: the output of Stage 1 (similar to Table 4), i.e., a list of bigrams with their statistical information as computed in Stage 1. Description: identical to Stage 1, Step 1.1. Given a pair of words w and w_i and an integer specifying the distance between the two words (the distance is actually optional and can be given in various ways: we can specify the word order, the maximum distance, the exact distance, etc.), this step produces all the sentences containing them in the given position. For example, given the bigram "takeover"/"thwart" and the distance 2, this step produces sentences like "Under the recapitalization plan it proposed to thwart the takeover."

Step 2.2: identical to Stage 1, Step 1.2. We compute the frequency of appearance of each of the collocates of w by maintaining a data structure similar to the one given in Figure 5.

Step 2.3. Input: the output of Step 2.2. Output: n-grams such as those in Figure 8. Discussion: here the analyses are simpler than for Stage 1. We are only interested in percentage frequencies, and we only compute the moment of order 1 of the frequency distributions. The tables produced in Step 2.2 are used to compute the frequency of appearance of each word in each position around w. For each of the possible relative distances from w, we analyze the distribution of the words and only keep the words occupying the position with a probability greater than a given threshold T. If part-of-speech information is available, the same analysis is also performed with parts of speech instead of actual words. In short, a word (or a part of speech) is kept in the final n-gram at position i if and only if it satisfies the inequation p(it appears at position i) > T, where p(E) denotes the probability of event E. Consider the examples given in Figures 6 and 7, which show the concordances for the input pairs "average"/"industrial" and "index"/"composite". In Figure 6, the same words are always used from position -4 to position 0; however, at position 1 the words used are always different. "Dow" is used at position -3 in more than 90% of the cases and is thus part of the produced rigid noun phrase, but "down" is only used a couple of times at position 1 and will not be part of the produced rigid noun phrase.
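The positional test just described, keeping a word (or tag) at offset i only when its relative frequency there exceeds the threshold T, can be sketched in a few lines. The function below assumes the concordance sentences have already been collected as in Step 2.1; the threshold value, the parameter names, and the omission of the parallel part-of-speech analysis are simplifications of this sketch rather than features of Xtract.

```python
from collections import Counter

def expand_bigram(concordances, center_index, threshold=0.75, span=5):
    """Toy version of Xtract Stage 2: grow a bigram into an n-gram.

    concordances: list of tokenized sentences, each containing the bigram.
    center_index: list giving, for each sentence, the index of the
                  reference word w in that sentence.
    Returns {offset: word} for every offset whose dominant word occurs
    with relative frequency >= threshold."""
    kept = {}
    for offset in range(-span, span + 1):
        if offset == 0:
            continue
        words = Counter()
        for sent, c in zip(concordances, center_index):
            pos = c + offset
            if 0 <= pos < len(sent):
                words[sent[pos]] += 1
        if not words:
            continue
        word, count = words.most_common(1)[0]
        if count / sum(words.values()) >= threshold:
            kept[offset] = word
    return kept
```

Applied to concordances like those in Figure 6, the stable positions -4 through -1 around "average" would pass the test while the variable position 1 would not, which is roughly how the five-word rigid noun phrase emerges.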
From those concordances, Xtract produced the five-word rigid noun phrase "the Dow Jones industrial average". Figure 7 shows that from position -3 to position 7 the words used are always the same: in all the example sentences in which "composite" and "index" are adjacent, the two words are used within a bigger construct of 11 words. However, if we look at position 8, for example, we see that although the words used are different in all the cases, they are verbs; thus, after the 11-gram we expect to find a verb. In short, Figure 7 helps us produce both the rigid noun phrase "the NYSE's composite index of all its listed common stocks" and the phrasal template "the NYSE's composite index of all its listed common stocks VERB NUMBER to NUMBER". Figure 8 shows some sample phrasal templates and rigid noun phrases that were produced at this stage; the leftmost column gives the input lexical relations. Some other examples are given in Figure 3. The role of Stage 2 is to filter out many lexical relations and replace them by valid ones; it produces both phrasal templates and rigid noun phrases. For example, associations such as "blue"/"stocks", "air"/"controller", or "advancing"/"market" were filtered out and respectively replaced by "blue chip stocks", "air traffic controllers", and "the broader market in the NYSE advancing issues". Thus Stage 2 produces n-word collocations from two-word associations.

Producing n-word collocations has already been done; the general method used by Choueka is the following: for each length n, produce all the word sequences of length n and sort them by frequency. On a 12 million-word corpus, Choueka retrieved 10 collocations of length six, 115 collocations of length five, 1,024 collocations of length four, 4,777 of length three, and some 15,973 of length two; the threshold imposed was 14. The method we presented in this section has three main advantages when compared to a straight n-gram method like Choueka's, the first of which concerns CPU time and space. In a 10 million-word corpus with about 60,000 different words, there are about 3.6 x 10^9 possible bigrams, 2.16 x 10^14 trigrams, and 3 x 10^33 7-grams; this rapidly gets out of hand, and Choueka, for example, had to stop at length six. In contrast, the rigid noun phrases we retrieve are of arbitrary length and are retrieved very easily and in one pass. The method we use starts from bigrams and produces the biggest possible subsuming n-gram. It is based on the fact that if an n-gram is statistically significant, then the included bigrams must also be significant. For example, to identify "the Dow Jones average of 30 industrials", a traditional n-gram method would compare it to the other 7-grams and determine that it is significant; in contrast, we start from an included significant bigram and directly retrieve the surrounding n-grams.

8. Xtract Stage Three: Adding Syntax to the Collocations. The collocations as produced in the previous stages are already useful for lexicography. For computational use, however,
functional information is neededfor example the collocations should have some syntactic propertiesit is not enough to say that quotmakequot goes with quotdecisionquot we need to know that quotdecisionquot is used as the direct object of the verbthe advent of robust parsers such as cass and fidditch has made it possible to process large text corpora with good performance and thus combine statistical techniques with more symbolic analysisin the past some similar attempts have been donedebili parsed corpora of french texts to identify nonambiguous predicate argument relationshe then used these relations for disambiguationhindle and rooth later refined this approach by using bigram statistics to enhance the task of prepositional phrase attachmentchurch et al have yet another approach they consider questions such as what does a boat typically dothey are preprocessing a corpus with the fidditch parser in order to produce a list of verbs that are most likely associated with the subject quotboatquot our goal here is different as we analyze collocations automatically produced by the first stage of xtract to either add syntactic information or reject themfor example if a lexical relation identified at stage 1 involves a noun and a verb the role of stage 3 is to determine whether it is a subjectverb or a verbobject collocationif no such consistent relation is observed then the collocation is rejectedstage 3 uses a parser but it does not require a complete parse treegiven a number of sentences xtract only needs to know pairwise syntactic relationsthe parser we used in the experiment reported here is cass a bottomup incremental parsercass takes input sentences labeled with part of speech and attempts to identify syntactic structureone of the subtasks performed by cass is to identify predicate argument relations and this is the task we are interested in herestage 3 works in the following three stepsall the syntactic labels produced by cass on sentence identical to what we did at stage 2 step 21given a pair of words w and w a distance of the two words and a tagged corpus xtract produces all the sentences containing them in the given position specified by the distanceinput output of step 31a set of tagged sentences each containing both w and woutput for each sentence a set of syntactic labels such as those shown in figure 9discussion cass is called on the concordancesfrom cass output we only retrieve binary syntactic relations such as quotverbobjectquot or quotverbsubjectquot quotnoun adjectivequot and quotnounnounquot to simplify we abbreviate them respectively vo sv nj nnfor sentence below for example the labels produced are shown in figure 910quotwall street faced a major test with stock traders returning to action for the first time since last week epic selloff and investors awaited signs of life from the 5yearold bull marketquot input a set of sentences each associated with a set of labels as shown in figure 9output collocations with associated syntactic labels as shown in figure 10discussion for any given sentence containing both w and w two cases are possible either there is a label for the bigram or there is nonefor example for sentence there is a syntactic label for the bigram facedtest but there is none for the bigram stockreturningfacedtest enters into a verb object relation and stockreturning does not enter into any type of relationif no label is retrieved for the bigram it means that the parser could not identify a relation between the two wordsin this case we introduce a new label you to 
label the bigramat this point we associate with the sentence the label for the bigram with each of the input sentences we associate a label for the bigram for example the label associated with sentence for the bigram facedtest would be voa list of labeled sentences for the bigram w quotrosequot and w quotpricesquot is shown in figure 10producing the quotprices f rosequot sv predicative relation at stage 3input a set of sentences containing w and w each associated with a label as shown in figure 10output labeled collocations as shown in figure 11discussion on step 34 at this step we count the frequencies of each possible label identified for the bigram and perform a statistical analysis of order two for this distributionwe compute the average frequency for the distribution of labels ft and the standard deviation at we finally apply a filtering method similar to let t be a possible labelwe keep t if and only if it satisfies inequality similar to given before a collocation is thus accepted if and only if it has a label g satisfying inequality and g yousimilarly a collocation is rejected if no label satisfies inequality or if you satisfies itfigure 10 shows part of the output of step 33 for w quotrosequot and w quotpricesquot as shown in the figure sv labels are a large majoritythus we would label the relation pricerose as an sv relationan example output of this stage is given in figure 11the bigrams labeled you were rejected at this stagestage 3 thus produces very useful resultsit filters out collocations and rejects more than half of them thus improving the quality of the resultsit also labels the collocations it accepts thus producing a more functional and usable type of knowledgefor example if the first two stages of xtract produce the collocation quotmakedecisionquot the third stage identifies it as a verbobject collocationif no such relation can be observed then the collocation is rejectedthe produced collocations are not simple word associations but complex syntactic structureslabeling and filtering are two useful tasks for automatic use of collocations as well as for lexicographythe whole of stage 3 is an original contribution of our workretrieving syntactically labeled collocations is a relatively new concernmoreover filtering greatly improves the quality of the resultsthis is also a possible use of the emerging new parsing technologyxtract is actually a library of tools implemented using standard cunix librariesthe toolkit has several utilities useful for analyzing corporawithout making any effort to make xtract efficient in terms of computing resources the first stage as well as the second stage of xtract only takes a few minutes to run on a tenmegabyte corpusxtract is currently being used at columbia university for various lexical tasksand it has been tested on many corpora among them several tenmegabyte corpora of news stories a corpus consisting of some twenty megabytes of new york times articles which has already been used by choueka the brown corpus a corpus of the proceedings of the canadian parliament also called the hansards corpus which amounts to several hundred megabyteswe are currently working on packaging xtract to make it available to the research communitythe packaged version will be portable reusable and faster than the one we used to write this paperwe evaluate the filtering power of stage 3 in the evaluation section section 10section 9 presents some results that we obtained with the three stages of xtractresults obtained from the jerusalem post corpus have already 
been reported figure 12 gives some results for the threestage process of xtract on a 10 millionword corpus of stock market reports taken from the associated press newswirethe collocations are given in the following formatthe first line contains the bigrams with the distance so that quotsales fell 1quot says that the two words under consideration are quotsalesquot and quotfellquot and that the distance we are considering is 1the first line is thus the output of stage 1the second line gives the output of stage 2 ie the ngramsfor example quottakeoverthwartquot is retrieved as quot44 to thwart at takeover nn quotat stands for article nn stands for nouns and 44 is the number of times this collocation has been retrieved in the corpusthe third line gives the retrieved tags for this collocation so that the syntactic relation between quottakeoverquot and quotthwartquot is an sv relationand finally the last line is an example sentence containing the collocationoutput of the type of figure 12 is automatically producedthis kind of output is about as far as we have gone automaticallyany further analysis andor use of the collocations would probably require some manual interventionsome complete output on the stock market corpusfor the 10 millionword stock market corpus there were some 60000 different word formsxtract has been able to retrieve some 15000 collocations in totalwe would like to note however that xtract has only been effective at retrieving collocations for words appearing at least several dozen times in the corpusthis means that lowfrequency words were not productive in terms of collocationsout of the 60000 words in the corpus only 8000 were repeated more than 50 timesthis means that for a target overlap of the manual and automatic evaluations lexicon of size n 8000 one should expect at least as many collocations to be added and xtract can help retrieve most of themthe third stage of xtract can thus be considered as a retrieval system that retrieves valid collocations from a set of candidatesthis section describes an evaluation experiment of the third stage of xtract as a retrieval system as well as an evaluation of the overall output of xtractevaluation of retrieval systems is usually done with the help of two parameters precision and recall precision of a retrieval system is defined as the ratio of retrieved valid elements divided by the total number of retrieved elements it measures the quality of the retrieved materialrecall is defined as the ratio of retrieved valid elements divided by the total number of valid elementsit measures the effectiveness of the systemthis section presents an evaluation of the retrieval performance of the third stage of xtractdeciding whether a given word combination is a valid or invalid collocation is actually a difficult task that is best done by a lexicographerjeffery triggs is a lexicographer working for the oxford english dictionary coordinating the north american readers program of oed at bell communication researchjeffery triggs agreed to go over manually several thousands of collocationsin order to have an unbiased experiment we had to be able to evaluate the performance of xtract against a human expertwe had to have the lexicographer and xtract perform the same taskto do this in an unbiased way we randomly selected a subset of about 4000 collocations after the first two stages of xtractthis set of collocations thus contained some good collocations and some bad onesthis data set was then evaluated by the lexicographer and the third stage of xtractthis 
allowed 17 i am grateful to jeffery whose professionalism and kindness helped me understand some of the difficulty of lexicographywithout him this evaluation would not have been possible us to evaluate the performances of the third stage of xtract and the overall quality of the total output of xtract in a single experimentthe experiment was as follows we gave the 4000 collocations to evaluate to the lexicographer asking him to select the ones that he would consider for a domainspecific dictionary and to cross out the othersthe lexicographer came up with three simple tags yy y and n both y and yy include good collocations and n includes bad collocationsthe difference between yy and y is that y collocations are of better quality than yy collocationsyy collocations are often too specific to be included in a dictionary or some words are missing etcafter stage 2 about 20 of the collocations are y about 20 are yy and about 60 are n this told us that the precision of xtract at stage 2 was only about 40although this would seem like a poor precision one should compare it with the much lower rates currently in practice in lexicographyfor compiling new entries for the oed for example the first stage roughly consists of reading numerous documents to identify new or interesting expressionsthis task is performed by professional readersfor the oed the readers for the american program alone produce some 10000 expressions a monththese lists are then sent off to the dictionary and go through several rounds of careful analysis before actually being submitted to the dictionarythe ratio of proposed candidates to good candidates is usually lowfor example out of the 10000 expressions proposed each month fewer than 400 are serious candidates for the oed which represents a current rate of 4automatically producing lists of candidate expressions could actually be of great help to lexicographers and even a precision of 40 would be helpfulsuch lexicographic tools could for example help readers retrieve sublanguagespecific expressions by providing them with lists of candidate collocationsthe lexicographer then manually examines the list to remove the irrelevant dataeven low precision is useful for lexicographers as manual filtering is much faster than manual scanning of the documents such techniques are not able to replace readers though as they are not designed to identify lowfrequency expressions whereas a human reader immediately identifies interesting expressions with as few as one occurrencethe second stage of this experiment was to use xtract stage 3 to filter out and label the sample set of collocationsas described in section 8 there are several valid labels in this experiment we grouped them under a single label t there is only one nonvalid label you a t collocation is thus accepted by xtract stage 3 and a you collocation is rejectedthe results of the use of stage 3 on the sample set of collocations are similar to the manual evaluation in terms of numbers about 40 of the collocations were labeled by xtract stage 3 and about 60 were rejected figure 13 shows the overlap of the classifications made by xtract and the lexicographerin the figure the first diagram on the left represents the breakdown in t and you of each of the manual categories the diagram on the right represents the breakdown in yyy and n of the t and you categoriesfor example the first column of the diagram on the left represents the application of xtract stage 3 on the yy collocationsit shows that 94 of the collocations accepted by the 
lexicographer were also accepted by xtractin other words this means that the recall of the third stage of xtract is 94the first column of the diagram on the right represents the lexicographic evaluation of the collocations automatically accepted by xtractit shows that about 80 of the t collocations were accepted by the lexicographer and that about 20 were rejectedthis shows that precision was raised from 40 to 80 with the addition of xtract stage 3in summary these experiments allowed us to evaluate stage 3 as a retrieval systemthe results are precision 80 and recall 94top associations with quotpricequot in nyt dj and apin this section we discuss the extent to which the results are dependent on the corpus usedto illustrate our purpose here we are using results collected from three different corporathe first one dj for dow jones is the corpus we used in this paper it contains stock market stories taken from the associated press newswiredj contains 89 million wordsthe second corpus nyt contains articles published in the new york times during the years 1987 and 1988the articles are on various subjectsthis is the same corpus that was used by choueka nyt contains 12 million wordsthe third corpus ap contains stories from the associated press newswire on various domains such as weather reports politics health finances etcap is 4 million wordsfigure 14 represents the top 10 word associations retrieved by xtract stage 1 for the three corpora with the word quotpricequot in this figure d represents the distance between the two words and w represents the weight associated with the bigramthe weight is a combined index of the statistical distribution as discussed in section 6 and it evaluates the collocationthere are several differences and similarities among the three columns of the figure in terms of the words retrieved the order of the words retrieved and the values of w we identified two main ways in which the results depend on the corpuswe discuss them in turnfrom the different corpora we used we noticed that our statistical methods were not effective for lowfrequency wordsmore precisely the statistical methods we use do not seem to be effective on low frequency words if the word is not frequently used in the corpus or if the corpus is too small then the distribution of its collocates will not be big enoughfor example from ap which contains about 1000 occurrences of the word quotrainquot xtract produced over 170 collocations at stage 1 involving itin contrast dj only contains some 50 occurrences of quotrainquot and xtract could only produce a few collocations with itsome collocations with quotrainquot and quothurricanequot extracted from ap are listed in figure 15both words are highfrequency words in ap and lowfrequency words in djin short to build a lexicon for a computational linguistics application in a given domain one should make sure that the important words in the domain are frequent enough in the corpusfor a subdomain of the stock market describing only the fluctuations of several indexes and some of the major events of the day at wall street a corpus of 10 million words appeared to be sufficientthis 10 milliontoken corpus contains only 5000 words each repeated more than 100 timessize and frequency are not the only important criteriafor example even though quotfoodquot is a highfrequency word in dj quoteatquot is not among its collocates whereas it is among the top ones in the two other corporafood is not eaten at wall street but rather traded sold offered bought etcif the corpus only 
contains stories in a given domain, most of the collocations retrieved will also be dependent on this domain. We have seen in Section 2 that, in addition to jargonistic words, there are a number of more familiar terms that form collocations when used in different domains. A corpus containing stock market stories is obviously not a good choice for retrieving collocations related to weather reports, or for retrieving domain-independent collocations such as "make"/"decision". For a domain-specific application, domain-dependent collocations are of interest, and a domain-specific corpus is exactly what is required: to build a system that generates stock market reports, it is a good choice to use a corpus containing only stock market reports. There is a danger in choosing a too specific corpus, however. For example, in Figure 14 we see that the first collocate of "price" in AP is "gouging", which is not retrieved in either DJ or NYT. "Price gouging" is not a current practice at Wall Street, and this collocation could not be retrieved even on some 20,000 occurrences of the word. An example use of "price gouging" is the following: "The Charleston City Council passed an emergency ordinance barring price gouging later Saturday after learning of an incident in which 5-pound bags of ice were being sold for $10." More formally, if we compare the columns in Figure 14, we see that the numbers are much higher for DJ than for the other two corpora. This is not due to a size-frequency factor, since "price" occurs about 10,000 times in both NYT and DJ, whereas it only occurs 4,500 times in AP. It rather says that the distribution of collocates around "price" has a much higher variance in DJ than in the other corpora. DJ has much bigger weights because it is focused: the stories are almost all about Wall Street. In contrast, NYT contains a large number of stories with "price", but they have various origins; "price" has 4,627 collocates in NYT, whereas it only has 2,830 in DJ.

Let us call θ_corpus the variety of a given corpus. One way to measure the variety is to use the information theory measure of entropy for a given language model. Entropy is defined as H = - Σ_w p(w) log p(w), where p(w) is the probability of appearance of a given word w. Entropy measures the predictability of a corpus; in other words, the bigger the entropy of a corpus, the less predictable it is. In an ideal language model, the entropy of a corpus should not depend on its size; however, word probabilities are difficult to approximate, and in most cases entropy grows with the size of the corpus. In this section we use a simple unigram language model trained on the corpus, and we approximate the variety of a given corpus by θ_corpus = Σ_w (f(w)/S) log(S/f(w)), in which f(w) is the frequency of appearance of the word w in the corpus and S is the size of the corpus. In addition, to be fair in our comparison of the three corpora, we have used three corpora of about one million words for DJ, NYT, and Brown. The 1 million-word Brown corpus contains 43,300 different words, of which only 1,091 are repeated more than 100 times; the θ of the Brown corpus is θ_Brown = 10.5. In comparison, the size of DJ is 8,000,000; it contains 59,233 different words, of which 5,367 are repeated more than 100 times, and the θ of DJ is θ_DJ = 9.6. The θ of NYT, which contains stories pertaining to various domains, has been estimated at θ_NYT = 10.4. According to this measure, DJ is much more focused than both the Brown corpus and NYT, because the difference in variety is 1 on the logarithmic scale. This is not a surprise, since the subjects
it covers are much more restricted the genre is of only one kind and the setting is constantin contrast the brown corpus has been designed to be of mixed and rich composition and nyt is made up of stories and articles related to various subjects and domainslet us note that several factors might also influence the overall entropy of a given corpus for example the number of writers the time span covered by the corpus etcin any case the success of statistical methods such as the ones described in this report also depends on the sublanguage used in the corpusfor a sublanguagedependent application the training corpus must be focused mainly because its vocabulary being restricted the important words will be more frequent than in a nonrestricted corpus and thus the collocations will be easier to retrieveother applications might require less focused corporafor those applications the problem is even more touchy as a perfectly balanced corpus is very difficult to compilea sample of the 1987 dj text is certainly not a good sample of general english however a balanced sample such as the brown corpus may also be a poor sampleit is doubtful that even a balanced corpus contains enough data on all possible domains and the very effort of artificially balancing the corpus might also bias the resultscorpusbased techniques are still rarely used in the fields of linguistics lexicography and computational linguistics and the main thrust of the work presented here is to promote its use for any text based applicationin this section we discuss several uses of xtractlanguage generation is a novel application for corpusbased computational linguistics in smadja we show how collocations enhance the task of lexical selection in language generationprevious language generation works did not use collocations mainly because they did not have the information in compiled form and the lexicon formalisms available did not handle the variability of collocational knowledgein contrast we use xtract to produce the collocations and we use functional unification grammars as a representation formalism and a unification enginewe show how the use of fugs allows us to properly handle the interactions of collocational and various other constraintswe have implemented cook a surface sentence generator that uses a flexible lexicon for expressing collocational constraints in the stock market domainusing ana as a deep generator cook is implemented in fuf an extended implementation of fug and uniformly represents the lexicon and syntax as originally suggested by halliday for a more detailed description of cook the reader is referred to smadja according to benson benson and ilson collocations fall into two major groups lexical collocations and grammatical collocationsthe difference between these two groups lies in the types of words involvedlexical collocations roughly consist of syntagmatic affinities among open class words such as verbs nouns adjectives and adverbsin contrast grammatical collocations generally involve at least one closed class word among particles prepositions and auxiliary verbsexamples of grammatical collocations are putup as in quoti cannot put up with this anymorequot and fillout as in quotyou have to fill out your 1040 formquot19 consider the sentences below 6 quot a new initiative in the aftermath from the plo evacuation from beirutquot 7quot a new initiative in the aftershocks from the plo evacuation from beirutquot 8 quot a new initiative in the aftershocks of the plo evacuation from beirutquot these examples clearly 
show that the choices of the prepositions are arbitrarysentences and compare the word associations comparison withto with association withtoalthough very similar in meaning the two words select different prepositionsmoreover the difference of meaning of the two prepositions does not account for the wording choicessimilarly sentences and illustrate the fact that quotaftermathquot selects the preposition quotofquot and quotaftershockquot selects quotfromquot grammatical collocations are very similar to lexical collocations in the sense that they also correspond to arbitrary and recurrent word cooccurrences in terms of structure grammatical collocations are much simpler since many of the grammatical collocations only include one open class word the separation basecollocator becomes trivialthe open class word is the meaning bearing element it is the base and the closed class word is the collocatorfor lexicographers grammatical collocations are somehow simpler than lexical collocationsa large number of dictionaries actually include themfor example the random house dictionary of the english language gives quotabreast of accessible to accustomed to careful about conducive to conscious of equal to expert at fond of jealous ofquot etchowever a large number are missing and the information provided is inconsistent and spottyfor example rhdel does not include appreciative of available to certain of clever at comprehensible to curious about difficult for effective against faithful to friendly with furious at happy about hostile to etcas demonstrated by benson even the most complete learners dictionaries miss very important grammatical collocations and treat the others inconsistentlydeterminers are lexical elements that are used in conjunction with a noun to bring into correspondence with it a certain sector of reality a noun without determiner has no referentthe role of determiner can be played by several classes of items articles possessives indefinite adjectives demonstratives numbers etcdeterminernoun combinations are often based simply on semantic or syntactic criteriafor example in the expression quotmy left footquot the determiner quotmyquot is here for semantic reasonsany other determiner would fail to identify the correct object classes of nouns such as mass and count are supposed to determine the type of determiners to be used in conjunction with the nouns mass nouns often refer to objects or ideas that can be divided into smaller parts without losing their meaningin contrast count nouns refer to objects that are not dividablefor example quotwaterquot is a mass noun if you spill half a glass of water you still have some nounpreposition associations retrieved by xtract some water left in your glassin contrast if you cut a book in two halves and discard one half you do not have a book any more quotbookquot is a count nouncount nouns are often used with numbers and articles and mass nouns are often used with no articles as with other types of word combinations noundeterminer combinations often lead to collocationsconsider the table given in table 5in the table some noundeterminer combinations are comparedthe first four determiners represent a singular use of the noun and the last four represent a plural use1 and 300 are numbers0 is the zero articlein the table a sign means that the combination is frequent and normal a sign means that the combination is very rare if not forbiddena sign means that the combination is very low probability and that it would probably require an unusual contextfor 
example one does not say quota butterquot one says quotsome butterquot and the combination buttermany is rather unusual and would only occur in unusual contextsfor example if one refers to several types of butter one could say quotmany butters are based on regular butter and an additional spice or flavor such as rosemary sage basil garlic etcquot quotbookquot is a typical count noun in that it can combine with quotaquot and quotmanyquot quotbutterquot is a typical mass noun in that it combines with the zero determiner and quota great dealquot however words such as quotpolice people traffic opinion weatherquot etc share some characteristics of both mass nouns and count nounsfor example quotweatherquot is neither a count nounquota weatherquot is incorrectnor a mass nounquota lot of weatherquot is incorrect however it shares some characteristics of both types of nounsmass noun features include the premodified structures quota lot of good weatherquot quotsome bad weatherquot and quotwhat lovely weatherquot count noun features include the plural quotgo out in all weathersquot quotin the worst of weathersquot the problem with such combinations is that if the word is irregular then the information will probably not be in the dictionarymoreover even if the word is regular the word itself might not be in the dictionary or the information could simply be difficult to retrieve automaticallysimple tools such as xtract can hopefully provide such informationbased on a large number of occurrences of the noun xtract will be able to make statistical inferences as to the determiners used with itsuch analysis is possible without any modification to xtractactually only a subpart of xtract is necessary to retrieve themwe have seen that collocations are difficult to handle for nonnative speakers and that they require special handling for computational applicationsin a multilingual environment the problems become even more complex as each language imposes its own collocational constraintsconsider for example the english expressions quothouse of parliamentquot and quothouse painterquot the natural french translation for quothousequot is quotmaisonquot however the two expressions do not use this translation but respectively quotchambrequot and quotwitimentquot translations have to be provided for collocations and should not be wordbased but rather expressionbasedbilingual dictionaries are generally inadequate in dealing with such issuesthey generally limit such contextsensitive translations to ambiguous words or highly complex words such as quotmakequot quothavequot etcmoreover even in these cases coverage is limited to semantic variants and lexical collocations are generally omittedone possible application is the development of compilation techniques for bilingual dictionariesthis would require compiling two monolingual collocational dictionaries and then developing some automatic or assisted translation methodsthose translation methods could be based on the statistical analysis of bilingual corpora currently availablea simple algorithm for translating collocations is given in smadja several other applications such as information retrieval automatic thesauri compilation and speech recognition are also discussed in smadja 21 note that it might be in some grammar bookfor example quirk et al in their extensive grammar book devote some 100 pages to such noundeterminer combinationsthey include a large number of rules and list exceptions to those rulescorpus analysis is a relatively recent domain of researchwith the 
availability of large samples of textual data and automated tools such as partofspeech taggers it has become possible to develop and use automatic techniques for retrieving lexical information from textual corporain this paper some original techniques for the automatic extraction of collocations have been presentedthe techniques have been implemented in a system xtract and tested on several corporaalthough some other attempts have been made to retrieve collocations from textual corpora no work has been able to retrieve the full range of the collocations that xtract retrievesthanks to our filtering methods the collocations produced by xtract are of better qualityand finally because of the syntactic labeling the collocations we produce are richer than the ones produced by other methodsthe number and size of available textual corpora is constantly growingdictionaries are available in machinereadable form news agencies provide subscribers with daily reports on various events publishing companies use computers and provide machinereadable versions of books magazines and journalsthis amounts to a vast quantity of language data with unused and virtually unlimited implicit and explicit information about the english languagethese textual data can thus be used to retrieve important information that is not available in other formsthe primary goal of the research we presented is to provide a comprehensive lexicographic toolkit to assist in implementing natural language processing as well as to assist lexicographers in compiling generalpurpose dictionaries as most of the work is still manually performed in this domainthe abundance of text corpora allows a shift toward more empirical studies of language that emphasize the development of automated toolswe think that more research should be conducted in this direction and hope that our work will stimulate research projects along these linesi would like to thank steve abney ken church karen kukich and michael elhadad for making their software tools available to uswithout them most of the work reported here would not have been possiblekathy mckeown read earlier versions of this paper and was helpful in both the writing and the researchfinally the anonymous reviewers for computational linguistics made insightful comments on earlier versions of the paperpart of this work has been done in collaboration with bell communication research and part of this work has been supported by darpa grant no003984c0165 by nsf grant irt8451438 and by onr grant no001489j1782
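To make the corpus-variety measure discussed above concrete, here is a minimal sketch assuming whitespace tokenization, relative-frequency probability estimates, and base-2 logarithms; the function name corpus_variety and the toy corpora are illustrative assumptions and not part of the original work:

import math
from collections import Counter

def corpus_variety(tokens):
    # entropy (in bits) of a unigram model estimated by relative frequencies;
    # lower values indicate a more focused, more predictable corpus
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((f / total) * math.log2(f / total) for f in counts.values())

# toy comparison: a repetitive (focused) sample versus a more varied one
focused = ("the stock price rose and the stock price fell").split() * 50
varied = ("colorless green ideas sleep furiously while quick brown foxes "
          "jump over lazy dogs near the riverbank at dawn").split() * 25
print(corpus_variety(focused))   # smaller entropy
print(corpus_variety(varied))    # larger entropy

On the same one-million-word scale used in the comparison above, a more focused corpus such as a single-domain newswire sample would come out with a lower value than a deliberately mixed corpus.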
J93-1007
retrieving collocations from text xtractnatural languages are full of collocations recurrent combinations of words that cooccur more often than expected by chance and that correspond to arbitrary word usagesrecent work in lexicography indicates that collocations are pervasive in english apparently they are common in all types of writing including both technical and nontechnical genresseveral approaches have been proposed to retrieve various types of collocations from the analysis of large samples of textual datathese techniques automatically produce large numbers of collocations along with statistical figures intended to reflect the relevance of the associationshowever none of these techniques provides functional information along with the collocationalso the results produced often contained improper word associations reflecting some spurious aspect of the training corpus that did not stand for true collocationsin this paper we describe a set of techniques based on statistical methods for retrieving and identifying collocations from large textual corporathese techniques produce a wide range of collocations and are based on some original filtering methods that allow the production of richer and higherprecision outputthese techniques have been implemented and resulted in a lexicographic tool xtractthe techniques are described and some results are presented on a 10 millionword corpus of stock market news reportsa lexicographic evaluation of xtract as a collocation retrieval tool has been made and the estimated precision of xtract is 80 we develop xtract a term extraction systemwe propose a statistical model by measuring the spread of the distribution of co occurring pairs of words with higher strengthin terms of practical mwe identification systems we propose a well known approach that uses a set of techniques based on statistical methods calculated from word frequencies to identify mwes in corpora
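As a companion to the abstract above, the following is a rough sketch, in the spirit of xtract's first stage, of filtering collocate candidates by strength (how unusually frequent a co-occurring word is around a base word) and spread (how peaked its distribution over relative positions is). The thresholds k0 and u0, the exact statistics, and the function name are simplifying assumptions rather than smadja's actual definitions:

import math
from collections import defaultdict

def candidate_collocates(tokens, base, window=5, k0=1.0, u0=10.0):
    # freq[w][d] counts how often w occurs at signed distance d from the base word
    freq = defaultdict(lambda: defaultdict(int))
    for i, tok in enumerate(tokens):
        if tok != base:
            continue
        for d in range(-window, window + 1):
            if d != 0 and 0 <= i + d < len(tokens):
                freq[tokens[i + d]][d] += 1
    if not freq:
        return []

    totals = {w: sum(pos.values()) for w, pos in freq.items()}
    mean = sum(totals.values()) / len(totals)
    std = math.sqrt(sum((f - mean) ** 2 for f in totals.values()) / len(totals)) or 1.0

    results = []
    for w, pos in freq.items():
        strength = (totals[w] - mean) / std          # z-score of the pair frequency
        avg = totals[w] / (2 * window)
        spread = sum((pos.get(d, 0) - avg) ** 2      # variance across the 2*window positions
                     for d in range(-window, window + 1) if d != 0) / (2 * window)
        if strength >= k0 and spread >= u0:
            results.append((w, round(strength, 2), round(spread, 2)))
    return sorted(results, key=lambda r: -r[1])

Pairs that pass both thresholds are the kind of candidates that a later, syntax-aware stage would label and filter further.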
from grammar to lexicon unsupervised learning of lexical syntax imagine a language that is completely unfamiliar the only means of studying it are an ordinary grammar book and a very large corpus of text no dictionary is available how can easily recognized surface grammatical facts be used to extract from a corpus as much syntactic information as possible about individual words this paper describes an approach based on two principles first rely on local morphosyntactic cues to structure rather than trying to parse entire sentences second treat these cues as probabilistic rather than absolute indicators of syntactic structure apply inferential statistics to the data collected using the cues rather than drawing a categorical conclusion from a single occurrence of a cue the effectiveness of this approach for inferring the syntactic frames of verbs is supported by experiments on an english corpus using a program called lerner lerner starts out with no knowledge of content wordsit bootstraps from determiners auxiliaries modals prepositions pronouns complementizers coordinating conjunctions and punctuation imagine a language that is completely unfamiliar the only means of studying it are an ordinary grammar book and a very large corpus of textno dictionary is availablehow can easily recognized surface grammatical facts be used to extract from a corpus as much syntactic information as possible about individual wordsthis paper describes an approach based on two principlesfirst rely on local morphosyntactic cues to structure rather than trying to parse entire sentencessecond treat these cues as probabilistic rather than absolute indicators of syntactic structureapply inferential statistics to the data collected using the cues rather than drawing a categorical conclusion from a single occurrence of a cuethe effectiveness of this approach for inferring the syntactic frames of verbs is supported by experiments on an english corpus using a program called lernerlerner starts out with no knowledge of content wordsit bootstraps from determiners auxiliaries modals prepositions pronouns complementizers coordinating conjunctions and punctuationthis paper presents a study in the automatic acquisition of lexical syntax from naturally occurring english textit focuses on discovering the kinds of syntactic phrases that can be used to represent the semantic arguments of particular verbsfor example want can take an infinitive argument and hope a tensed clause argument but not vice versa this study focuses on the ability of verbs to take arguments represented by infinitives tensed clauses and noun phrases serving as both direct and indirect objectsthese lexical properties are similar to those that chomsky termed subcategorization frames but to avoid confusion the properties under study here will be referred to as syntactic frames or simply framesthe general framework for the problems addressed in this paper can be thought of as followsimagine a language that is completely unfamiliar the only means of studying it are an ordinary grammar book and a very large corpus of text no dictionary is availablehow can easily recognized surface grammatical facts be used to extract from a corpus as much syntactic information as possible about individual wordsthe scenario outlined above is adopted in this paper as a framework for basic research in computational language acquisitionhowever it is also an abstraction of the situation faced by engineers building natural language processing systems for more familiar languagesthe lexicon 
is a central component of nlp systems and it is widely agreed that current lexical resources are inadequatelanguage engineers have access to some but not all of the grammar and some but not all of the lexiconthe most easily formalized and most reliable grammatical facts tend to be those involving auxiliaries modals and determiners the agreement and case properties of pronouns and so onthese vary little from speaker to speaker topic to topic register to registerunfortunately this information is not sufficient to parse sentences completely a fact that is underscored by the current state of the parsing artif sentences cannot be parsed completely and reliably then the syntactic frames used in them cannot be determined reliablyhow then can reliable easily formalized grammatical information be used to extract syntactic facts about words from a corpusthis paper suggests the following approach one or a fixed number of examplesinstead attempt to determine the distribution of exceptions to the expected correspondence between cues and syntactic framesuse a statistical model to determine whether the cooccurrence of a verb with cues for a frame is too regular to be explained by randomly distributed exceptionsthe effectiveness of this approach for inferring the syntactic frames of verbs is supported by experiments using an implementation called lernerin the spirit of the problem stated above lerner starts out with no knowledge of content wordsit bootstraps from determiners auxiliaries modals prepositions pronouns complementizers coordinating conjunctions and punctuationlerner has two independent components corresponding to the two strategies listed abovethe first component identifies sentences where a particular verb is likely to be exhibiting a particular syntactic frameit does this using local cues such as the that the cuethis component keeps track of the number of times each verb appears with cues for each syntactic frame as well as the total number of times each verb occursthis process can be described as collecting observations and its output as an observations tablea segment of an actual observations table is shown in table 4the observations table serves as input to the statistical modeler which ultimately decides whether the accumulated evidence that a particular verb manifests a particular syntactic frame in the input is reliable enough to warrant a conclusionto the best of my knowledge this is the first attempt to design a system that autonomously learns syntactic frames from naturally occurring textthe goal of learning syntactic frames and the learning framework described above lead to three major differences between the approach reported here and most recent work in learning grammar from textfirst this approach leverages a little a priori grammatical knowledge using statistical inferencemost work on corpora of naturally occurring language either uses no a priori grammatical knowledge or else it relies on a large and complex grammar one exception is magerman and marcus in which a small grammar is used to aid learninga second difference is that the work reported here uses inferential rather than descriptive statisticsin other words it uses statistical methods to infer facts about the language as it exists in the minds of those who produced the corpusmany other projects have used statistics in a way that summarizes facts about the text but does not draw any explicit conclusions from them on the other hand hindle does use inferential statistics and brill recognizes the value of inference although 
he does not use inferential statistics per sefinally many other projects in machine learning of natural language use input that is annotated in some way either with partofspeech tags or with syntactic brackets the remainder of the paper is organized as followssection 2 describes the morphosyntactic cues lerner uses to collect observationssection 3 presents the main contribution of this paperthe statistical model and experiments supporting its effectivenessfinally section 4 draws conclusions and lays out a research program in machine learning of natural languagethis section describes the local morphosyntactic cues that lerner uses to identify likely examples of particular syntactic framesthese cues must address two problems finding verbs in the input and identifying phrases that represent arguments to the verbthe next two subsections present cues for these tasksthe cues presented here are not intended to be the last word on local cues to structure in english they are merely intended to illustrate the feasibility of such cues and demonstrate how the statistical model accommodates their probabilistic correspondence to the true syntactic structure of sentencesvariants of these cues are presented in brent the final subsection summarizes the procedure for collecting observations and discusses a sample of the observations table collected from the brown corpuslerner identifies verbs in two stages each carried out on a separate pass through the corpusfirst strings that sometimes occur as verbs are identifiedsecond occurrences of those strings in context are judged as likely or unlikely to be verbal occurrencesthe second stage is necessary because of lexical ambiguitythe first stage uses the fact that all english verbs can occur both with and without the suffix ingwords are taken as potential verbs if and only if they display this alternation in the corpus2 there are a few words that meet this criterion but do not occur as verbs including incomeincoming earearring herherring and middlemiddlinghowever the second stage of verb detection combined with the statistical criteria prevent these pairs from introducing errorslerner assumes that a potential verb is functioning as a verb unless the context suggests otherwisein particular an occurrence of a potential verb is taken as a nonverbal occurrence only if it follows a determiner or a preposition other than tofor example was talking would be taken as a verb but a talk would notthis precaution reduces the likelihood that a singular count noun will be mistaken for a verb since singular count nouns are frequently preceded by a determinerfinally the only morphological forms that are used for learning syntactic frames are the stem form and the ing formthere are several reasons for thisfirst forms ending in s are potentially ambiguous between third person singular present verbs and plural nounssince plural nouns are not necessarily preceded by determiners they could pose a significant ambiguity problemsecond past participles do not generally take direct objects knows me and knew me are ok but not is known mefurther the past tense and past participle forms of some verbs are identical while those of others are distinctas a result using the ed forms would have complicated the statistical model substantiallysince the availability of raw text is not generally a limiting factor it makes sense to wait for the simpler caseswhen a putative occurrence of a verb is found the next step is to identify the syntactic types of nearby phrases and determine whether or not they 
are likely to be arguments of the verbfirst assume that a phrase p and a verb v have been identified in some sentencelerner strategy for determining whether p is an argument to v has two components for example suppose that the sequence that the were identified as the left boundary of a clause in the sentence i want to tell him that the idea will not flybecause pronouns like him almost never take relative clauses and because pronouns are known at the outset lerner concludes that the clause beginning with that the is probably an argument of the verb tellit is always possible that it could be an argument of the previous verb want but lerner treats that as unlikelyon the other hand if the sentence were i want to tell the boss that the idea will not fly then lerner cannot determine whether the clause beginning with that the is an argument to tell or is instead related to boss as in i want to fire the boss that the workers do not trustnow consider specific cues for identifying argument phrasesthe phrase types for which data are reported here are noun phrases infinitive verb phrases and tensed clausesthese phrase types yield three syntactic frames with a single argument and three with two arguments as shown in table 1the cues used for identifying these frames are shown in tables 2 and 3table 2 defines lexical categories that are referred to in table 3the category v in table 3 starts out empty and is filled as verbs are detected on the first passquotcapquot stands for any capitalized word and quotcapquot for any sequence of capitalized wordsthese cues are applied by matching them against the string of words immediately to the right of each verbfor example a verb v is the six syntactic frames studied in this papersf description good example bad example np only greet them arrive them tensed clause hope he will attend want he will attend infinitive hope to attend greet to attend np clause tell him he is a fool yell him he is a fool np infinitive want him to attend hope him to attend np np tell him the story shout him the story recorded as having occurred with a direct object and no other argument phrase if v is followed by a pronoun of ambiguous case and then a coordinating conjunction as in i will see you when you return from mexicothe coordinating conjunction makes it unlikely that the pronoun is the subject of another clause as in i see you like champagneit also makes it unlikely that the verb has an additional np argument as in i will tell you my secret recipeto summarize the procedure for collecting observations from a corpus is as follows table 4 shows an alphabetically contiguous portion of the observations table that results from applying this procedure to the brown corpus each row represents data collected from one pair of words including both the ing form and the stem formthe first column titled v represents the total number of times the word occurs in positions where it could be functioning as a verbeach subsequent column represents a single framethe number appearing in each row and column represents the number of times that the row verb cooccurred with cues for the column framezeros are omittedthus recall and recalling occurred a combined total of 42 times excluding those occurrences that followed determiners or prepositionsthree of those occurrences were followed by a cue for a single np argument and four were followed by cues for a tensed clause argumentjudgments based on the observations in table 4 made by the method of section 3 recall np cl recognize np cl recover np refuse inf the 
cues are fairly rare so verbs in table 4 that occur fewer than 15 times tend not to occur with these cues at allfurther these cues occur fairly often in structures other than those they are designed to detectfor example record recover and refer all occurred with cues for an infinitive although none of them in fact takes an infinitive argumentthe sentences responsible for these erroneous observations are in record occurs as a nounin recover is a verb but the infinitive vp to make a race of it does not appear to be an argumentin any case it does not bear the same relation to the verb as the infinitive arguments of verbs like try want hope ask and refusein refer is a verb but to change is a pp rather than an infinitivethe remainder of this paper describes and evaluates a method for making judgments about the ability of verbs to appear in particular syntactic frames on the basis of noisy data like that of table 4given the data in table 4 that method yields the judgments in table 5as noted above the correspondence between syntactic structure and the cues that lerner uses is not perfectmismatches between cue and structure are problematic because naturally occurring language provides no negative evidenceif a v verb is followed by a cue for some syntactic frame s that provides evidence that v does occur in frame s but there is no analogous source of evidence that v does not occur in frame s the occurrence of mismatches between cue and structure can be thought of as a random process where each occurrence of a verb v has some nonzero probability of being followed by a cue for a frame s even if v cannot in fact occur in s if this model is accurate the more times v occurs the more likely it is to occur at least once with a cue for s the intransitive verb arrive for example will eventually occur with a cue for an np argument if enough text is considereda learner that considers a single occurrence of verb followed by a cue to be conclusive evidence will eventually come to the false conclusion that arrive is transitivein other words the information provided by the cues will eventually be washed out by the noisethis problem is inherent in learning from naturally occurring language since infallible parsing is not possiblethe only way to prevent it is to consider the frequency with which each verb occurs with cues for each framein other words to consider each occurrence of v without a cue for s as a small bit of evidence against v being able to occur in frame s this section describes a statistical technique for weighing such evidencegiven a syntactic frame s the statistical model treats each verb v as analogous to a biased coin and each occurrence of v as analogous to a flip of that coinan occurrence that is followed by a cue for s corresponds to one outcome of the coin flip say heads an occurrence without a cue for s corresponds to tailsif the cues were perfect predictors of syntactic structure then a verb v that does not in fact occur in frame s would never appear with cues for sthe coin would never come up headssince the cues are not perfect such verbs do occur with cues for s the problem is to determine when a verb occurs with cues for s often enough that all those occurrences are unlikely to be errorsin the following discussion a verb that in fact occurs in frame s in the input is described as a s verb one that does not is described as a s verbthe statistical model is based on the following approximation for fixed s all s verbs have equal probability of being followed by a cue for s let 7r stand for 
that probability7rs may vary from frame to frame but not from verb to verbthus errors might be more common for tensed clauses than for nps but the working hypothesis is that all intransitives such as saunter and arrive are about equally likely to be followed by a cue for an np argumentif the error probability 77__s were known then we could use the standard hypothesis testing method for binomial frequency datafor example suppose 7r 05on average one in twenty occurrences of a s verb is followed by a cue for s if some verb v occurs 200 times in the corpus and 20 of those occurrences are followed by cues for s that ought to suggest that v is unlikely to have probability 05 of being followed by a cue for s and hence v is unlikely to be s specifically the chance of flipping 20 or more heads out of 200 tosses of a coin with a five percent chance of coming up heads each time is less than three in 1000on the other hand it is not all that unusual to flip 2 or more heads out of 20 on such a coinit happens about one time in fourif a verb occurs 20 times in the corpus and 2 of those occurrences are followed by cues for s it is quite possible that v is s and that the 2 occurrences with cues for s are explained by the five percent error rate on s verbsthe next section reviews the hypothesistesting method and gives the formulas for computing the probabilities of various outcomes of coin tosses given the coin biasit also provides empirical evidence that for some values of 71_ hypothesistesting does a good job of distinguishing s verbs from s verbs that occur with cues for s because of mismatches between cue and structurethe following section proposes a method for estimating 7r_s and provides empirical evidence that its estimates are nearly optimalthe statistical component of lerner is designed to prevent the information provided by the cues from being washed out by the noisethe basic approach is hypothesis testing on binomial frequency data specifically a verb v is shown to 4 given a verb v the outcomes of the coins for different s are treated as approximately independent even though they cannot be perfectly independenttheir dependence could be modeled using a multinomial rather than a binomial model but the experimental data suggest that this is unnecessary be s by assuming that it is s and then showing that if this were true the observed pattern of cooccurrence of v with cues for s would be extremely unlikely311 binomial frequency datain order to use the hypothesis testing method we need to estimate the probability 7_ that an occurrence of a verb v will be followed by a cue for s if v is s in this section it is assumed that 7_ is knownthe next section suggests a means of estimating 7_in both sections it is also assumed that for each s verb v the probability that v will be followed by a cue for s is greater than 7r_5other than that no assumptions are made about the probability that a s verb will be followed by a cue for s for example two verbs with transitive senses such as cut and walk may have quite different frequencies of cooccurrence with cues for npit does not matter what these frequencies are as long as they are greater than r_npif a coin has probability p of flipping heads and if it is flipped n times the probability of its coming up heads exactly m times is given by the binomial distribution the probability of coming up heads m or more times is given by the obvious sum analogously p gives the probability that m or more occurrences of a s verb v will be followed by a cue for s out of n occurrences 
totalif m out of n occurrences of v are followed by cues for s and if p is quite small then it is unlikely that v is s traditionally a threshold less than or equal to 05 is set such that a hypothesis is rejected if assuming the hypothesis were true the probability of outcomes as extreme as the observed outcome would be below the thresholdthe confidence attached to this conclusion increases as the threshold decreases312 experimentthe experiment presented in this section is aimed at determining how well the method presented above can distinguish s verbs from s verbslet ps be an estimate of 7it is conceivable that p might not be a good predictor of whether or not a verb is s regardless of the estimate pfor example if the correspondence between the cues and the structures they are designed to detect were quite weak then many s verbs might have lower p than many s verbsthis experiment measures the accuracy of binomial hypothesis testing on the data collected by lerner cues as a function of p_sin addition to showing that p is good for distinguishing s and s verbs these data provide a baseline against which to compare methods for estimating the error rate 7r_smethod the cues described in section 2 were applied to the brown corpus equation 2 was applied to the resulting data with a cutoff of p 02 and p varying between 25 and 213 the resulting judgments were compared to the blind judgments of a single judgeone hundred ninetythree distinct verbs were chosen at random from the tagged version of the brown corpus for comparisoncommon verbs are more likely to be included in the test sample than rare verbs but no verb is included more than onceeach verb was scored for a given frame only if it cooccurs with a cue for that frame at least oncethus although 193 verbs were randomly selected from the corpus for scoring only the 63 that cooccur with a cue for tensed clause at least once were scored for the tensedclause framethis procedure makes it possible to evaluate the hypothesistesting method on data collected by the cues rather than evaluating the cues per seit also makes the judgment task much easierit is not necessary to determine whether a verb can appear in a frame in principle only whether it does so in particular sentencesthere were however five cases where the judgments were unclearthese five were not scoredsee appendix c for detailsresults the results of these comparisons are summarized in table 6 and table 7 each row shows the performance of the hypothesistesting procedure for a different estimate p_s of the errorrate 7r_sthe first column shows the negative logarithm of p which is varied from 5 to 13 the second column shows p in decimal notationthe next four columns show the number of true positives verbs judged s both by machine and by hand false positives verbs judged 1s by machine s by hand true negatives verbs judged s both by machine and by hand and false negatives verbs judged s by machine fs by handthe numbers represent distinct verbs not occurrencesthe seventh column shows the number of verbs that were misclassified the sum of false positives and false negativesthe eighth column shows the percentage of verbs that were misclassified the nexttolast column shows the precision the true positives divided by all verbs that lerner judged to be sthe final column shows the recall the true positives divided by all verbs that were judged s by handdiscussion for verbs taking just a tensed clause argument table 6 shows that given the right estimate p of 7r_5 it is possible to classify these 63 verbs 
with only 1 false positive and 8 false negativesif the error rate were ignored or approximated as zero then the false positives would go up to 19on the other hand if the error rate were taken to be as high as 1 in 25 then the false negatives would go up to 20in this case the sum of both error types is minimized with 28 pcl 210table 7 shows similar results for verbs taking just an infinitive argument where misclassifications are minimized with p_mf 27as before assume that an occurrence of a s verb is followed by a cue for s with probability 71also as before assume that for each s verb v the probability that an occurrence of v is followed by a cue for s is greater than irit is useful to think of the verbs in the corpus as analogous to a large bag of coins with various biases or probabilities of coming up headsthe only assumption about the distribution of biases is that there is some definite but unknown minimum bias 7r5 determining whether or not a verb appears in frame s is analogous to determining for some randomly selected coin whether its bias is greater than 7rthe only available evidence comes from selecting a number of coins at random and flipping themthe previous section showed how this can be done given an estimate of 7r_ssuppose a series of coins is drawn at random from the bageach coin is flipped n timesit is then assigned to a histogram bin representing the number of times it came up headsat the end of this sampling procedure bin i contains the number of coins that came up heads exactly i times out of n such a histogram is shown in figure 1 where n 40if n is large enough and enough coins are flipped n times one would expect the following a histogram illustrating a binomially shaped distribution in the first eight bins were 16their height drops to zero for two stretches before rising significantly above zero againspecifically the height of the ith histogram bin should be roughly proportional to p with n the fixed sample size and p_s an estimate of 7r_sthe estimation procedure tries out each bin as a possible estimate of joeach estimate of jo leads to an estimate of 7r and hence to an expected shape for the first jo histogram binseach estimate j of jo is evaluated by comparing the predicted distribution in the first j bins to the observed distributionthe better the fit the better the estimatemoving from coins to verbs the procedure works as followsfor some fixed n consider the first n occurrences of each verb that occurs at least n times in the inputlet s be some syntactic frame and let hi be the number of distinct verbs that were followed by cues for s exactly i times out of nie the height of the ith histogram binassume that there is some 1 jo n such that most s verbs are followed by cues for s jo times or fewer and conversely most verbs that are followed by cues for s jo times or fewer are s verbsfor each possible estimate j of jo there is a corresponding estimate of 7r_s namely the average rate at which verbs in the first j bins are followed by cues for s choosing the most plausible estimate of 7_5 thus comes down to choosing the most plausible estimate of jo the boundary between the s verbs and the rest of the histogramto evaluate the plausibility of each possible estimate j of jo measure the fit between the predicted distribution of s verbs assuming j is the boundary of the s cluster and the observed distribution of the s verbs also assuming j is the boundary of the s clustergiven j let p_s stand for the average rate at which verbs in bins j or lower are followed by cues for s 
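For reference, the binomial quantities that the hypothesis test above relies on, and that the fitting procedure below reuses, are the standard ones; the notation here is a reconstruction rather than a quotation of the paper's displayed equations:

    P(m; n, p) = \binom{n}{m}\, p^{m} (1-p)^{n-m}
    \qquad
    P(m^{+}; n, p) = \sum_{i=m}^{n} \binom{n}{i}\, p^{i} (1-p)^{n-i}

So a verb that occurs n times with m of those occurrences followed by cues for frame s is judged to take s when P(m+; n, p) computed with the estimated error rate falls below the chosen threshold (0.02 in the experiments above).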
the predicted distribution for s verbs is proportional to p for 0 i n the observed distribution of s verbs assuming j is the boundary of the s cluster is hi for 0 i j and 0 for j i n measure the fit between the predicted and observed distributions by normalizing both to have unit area and taking the sum over 0 i n of the squares of the differences between the two distributions at each bin i j p_ tp fp tn fn mc mc pre rec cl 2 00037 25 1 28 8 9 15 96 76 inf 2 00048 22 1 32 5 6 10 96 81 npc1 1 00002 3 2 2 0 2 29 60 100 npinf 1 00005 5 0 3 2 2 20 100 71 npnp 3 00004 3 0 3 3 3 33 100 50 np 4 00132 52 1 5 59 60 51 98 47 total 110 5 73 74 79 30 96 60 in pseudocode the procedure is as follows estimatep 1 area h0 minsumofsquares oo bestestimate estimate 7r_ by the average cooccurrence rate for the first binsthose presumed to hold s verbs verbs in the first bins and below are presumed s the results for each of the six framesvarying n between 50 and 150 results in no significant change in the estimated error ratesone way to judge the value of the estimation and hypothesistesting methods is to examine the false positivesthree of the five false positives result from errors in verb detection that are not distributed uniformly across verbsin particular shock board and near are used more often as nonverbs than as verbsthis creates many opportunities for nonverbal occurrences of these words to be mistaken for verbal occurrencesother verbs like know are unambiguous and thus are not subject to this type of erroras a result these errors violate the model assumption that errors are distributed uniformly across verbs and highlight the limitations of the modelthe remaining false positives were touch and belong both mistaken as taking an np followed by a tensed clausethe touch error was caused by the capitalization of the first word of a line of poetry i knew not what did to a friend belong till i stood up true friend by thy true side till was mistaken for a proper namethe belong error was caused by mistaking a matrix clause for an argument in with the blue flesh of night touching him he stood under a gentle hill caressing the flageolet with his lips making it whisperit seems likely that such input would be much rarer in more mundane sources of text such as newspapers of record than in the diverse brown corpusthe results for infinitives and clauses can also be judged by comparison to the optimal classifications rates from tables 6 and 7in both cases the classification appears to be right in the optimal rangein fact the estimated error rate for infinitives produces a better classification than any of those shown in table 7the classification of clauses and infinitives remains in the optimal range when the probability threshold is varied from 01 to 05overall the tradeoff between improved precision and reduced recall seems quite good as compared to doing no noise reduction the only possible exception is the np frame where noise reduction causes 59 false negatives in exchange for preventing only 5 false positivesthis is partly explained by the different prior probabilities of the different framesmost verbs can take a direct object argument whereas most verbs cannot take a direct object argument followed by a tensed clause argumentthere is no way to know this in advancethere may be other factors as wellif the error rate for the np cues is substantially lower than 1 out of 100 then it cannot be estimated accurately with sample size n 100on the other hand if the sample size n is increased substantially there may not be 
enough verbs that occur n times or more in the corpusso a larger corpus might improve the recall rate for npthis paper explores the possibility of using simple grammatical regularities to learn lexical syntaxthe data presented in tables 6 7 and 8 provide evidence that it is possible to learn significant aspects of english lexical syntax in this wayspecifically these data suggest that neither a large parser nor a large lexicon is needed to recover enough syntactic structure for learning lexical syntaxrather it seems that significant lexical syntactic information can be recovered using a few approximate cues along with statistical inference based on a simple model of the cues error distributionsthe lexical entry of a verb can specify other syntactic frames in addition to the six studied herein particular many verbs take prepositional phrases headed by a particular preposition or class of prepositionsfor example put requires a location as a second argument and locations are often represented by pps headed by locative prepositionsextending lerner to detect pps is trivialsince the set of prepositions in the language is essentially fixed all prepositions can be included in the initial lexicondetecting a pp requires nothing more than detecting a prepositionthe statistical model can of course be applied without modificationthe problem however is determining which pps are arguments and which are adjunctsthere are clearly cases where a prepositional phrase can occur in a clause not by virtue of the lexical entry of the verb but rather by virtue of nonlexical facts of english syntaxfor instance almost any verb can occur with a temporal pp headed by on as in john arrived on mondaysuch pps are called adjunctson the other hand the sense of on in john sprayed water on the ceiling is quite differentthis sense it can be argued is available only because the lexical entry of spray specifies a location argument that can be realized as a ppif anything significant is to be learned about individual words the nonspecific cooccurrences of verbs with pps must be separated from the specific ones it is not clear how a machine learning system could do this although frequency might provide some clueworse however there are many cases in which even trained linguists lack clear intuitionsdespite a number of attempts to formulate necessary and sufficient conditions for the argumentadjunct distinction there are many cases for which the various criteria do not agree or the judgments are unclear thus the penn treebank does not make the argumentadjunct distinction because their judges do not agree often enoughuntil a useful definition that trained humans can agree on is developed it would seem fruitless to attempt machine learning experiments in this domainalthough the results of this study are generally encouraging they also point to some limitations of the statistical model presented herefirst it does not take into account variation in the percentage of verbs that can appear in each framefor example most verbs can take an np argument while very few can take an np followed by a tensed clausethis results in too few verbs being classified as np and too many being classified as npcl as shown in table 8second it does not take into account the fact that for some words with verbal senses most of their occurrences are verbal whereas for others most of their occurrences are nonverbalfor example operate occurs exclusively as a verb while board occurs much more often as a noun than as a verbsince the cues are based on the assumption 
that the word in question is a verb board presents many more opportunities for error than operatethis violates the assumption that the probability of error for a given frame is approximately uniform across verbsthese limitations do not constitute a major impediment to applications of the current resultsfor example an applied system can be provided with the rough estimates that 8095 percent of verbs take a direct object while 12 percent take a direct object followed by a tensed clausesuch estimates can be expected to reduce misclassification significantlyfurther an existing dictionary could be used to quottrainquot a statistical model on familiar verbsa trained system would probably be more accurate in classifying new verbsfinally the lexical ambiguity problem could probably be reduced substantially in the applied context by using a statistical tagging program for addressing basic questions in machine learning of natural language the solutions outlined above are not attractiveall of those solutions provide the learner with additional specific knowledge of english whereas the goal for the machine learning effort should be to replace specific knowledge with general knowledge about the types of regularities to be found in natural languagethere is one approach to the lexical ambiguity problem that does not require giving the learner additional specific knowledgethe problem is as follows words that occur frequently as say nouns are likely to have a different error rate from unambiguous verbsif it were known which words occur primarily as verbs and which occur primarily as nouns then separate error rate estimates could be made for eachthis would reduce the rate of false positive errors even without any further information about which particular occurrences are nominal and which are verbalone way to distinguish primarily nominal words from primarily verbal words is by the relative frequencies of their various inflected formsfor example table 9 shows the contrast in the distribution of inflected forms between project and board on the one hand and operate and follow on the otherproject and board are two words whose frequent occurrence as nouns has caused lerner to make false positive errorsin both cases the stem and s forms are much more common than the ed and ing formscompare this to the distribution for the unambiguous verbs operate and followin these cases the diversity of frequencies is much lower and does not display the characteristic pattern of a word that occurs primarily as a noun ing and ed forms that are much rarer than the s and stem formssimilar characteristic patterns exist for words that occur primarily as adjectivesrecognizing such ambiguity patterns automatically would allow a separate error rate to be estimated for the highly ambiguous wordsfrom the perspective of computational language acquisition a natural direction in which to extend this work is to develop algorithms for learning some of the specific knowledge that was programmed into the system described aboveconsider the morphological adjustment rules according to which for example the final quotequot of bite is deleted when the suffix ing is added yielding biting rather than quotbiteingquot lerner needs to know such rules in order to determine whether or not a given word occurs both with and without the suffix ingexperiments are under way on an unsupervised procedure that learns such rules from english text given only the list of english verbal suffixesthis work is being extended further in the direction of discovering the 
morphemic suffixes themselves and discovering the ways in which these suffixes alternate in paradigmsthe shortterm goal is to develop algorithms that can learn the rules of inflection in english starting from only a corpus and a general notion of the nature of morphological regularitiesultimately this line of inquiry may lead to algorithms that can learn much of the grammar of a language starting with only a corpus and a general theory of the kinds of formal regularities to be found in natural languagessome elements of syntax may not be learnable in this way but the lexicon morphology and phonology together make up a substantial portion of the grammar of a languageif it does not prove possible to learn these aspects of grammar starting from a general ontology of linguistic regularities and using distributional analysis then that too is an interesting resultit would suggest that the task requires a more substantive initial theory of possible grammars or some semantic information about input sentences or bothin any case this line of inquiry promises to she would light on the nature of language learning and language learningthe experiments described above used the following 193 verbs selected at random from the tagged version of the brown corpusforms of be and have were excluded as were modal verbs such as must and should abandon account acquire act add announce anticipate appear arch ask attempt attend attest avoid bear believe belong bend board boil bring bristle brush build buzz call cap cast choose choreograph close come concern conclude consider contain convert culminate cut deal decrease defend delegate deliver denounce deny depend design determine develop die dine discourage dispatch disunite drink duplicate eliminate emerge end enter equate erect execute exist expect extend face fail fall feed feel fight figure find fly follow get give glow guide hear help hijack hire hope impart impede improve include increase indicate inform instruct inure issue keep learn let live look make mean measure meet mine miss mount mourn near offer open oppose organize own pardon pickle plan play plead prefer prepare present prevent progress project provide question quote range reappear receive recommend remember remind repeat report request resign retire return save say season seat see seem serve set settle shift ship shock sign sing speak spend spice sponsor stand start stay study succeed suffer suggest support surprise swept take talk tell term terminate think touch treat tremble trust try turn understand unite unload use visit weep wheel wipe wish wonder work writeof the 193 verbs listed above lerner detects 174 in the untagged version of the brown corpusof these 174 there are 87 for which lerner does not find sufficient evidence to prove that they have any of the six syntactic frames in questionsome of these genuinely do not appear in the corpus with cues for any of the six while others do appear with cues but not often enough to provide reliable evidencegiven more text sufficient evidence might eventually accumulate for many of these verbsthe 87 that were detected but not assigned any frames are as follows account act anticipate arch attend bear bend boil bristle brush buzz cast close contain convert culminate deal decrease delegate deliver depend design determine develop dine discourage dispatch drink emerge end equate erect exist extend fall figure fly glow hire increase instruct issue live look measure mine miss mount mourn open oppose organize own present prevent progress project question quote 
range, reappear, receive, recommend, repeat, report, retire, return, season, seat, settle, ship, sign, sing, speak, spend, sponsor, stand, stay, succeed, suffer, talk, term, terminate, tremble, turn, weep, wheel. The 87 verbs for which Lerner does find sufficient evidence to assign one or more frames are shown in Table 10, the lexicon that Lerner produces when restricted to the 193 test verbs. Reading across each row, a verb is assigned those frames whose symbols appear in its row. For easy reference by frame, all the symbols for a given frame are aligned in one column.
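The ambiguity-pattern check discussed above (the Table 9 contrast between board and project on one side and operate and follow on the other) can be pictured with a few lines of code. This is a minimal sketch under invented assumptions: the form counts, the ratio threshold of 5.0, and the function name looks_primarily_nominal are illustrative and are not taken from Lerner.

def looks_primarily_nominal(counts, ratio_threshold=5.0):
    # A word matches the noun-like pattern when its stem and -s forms
    # are much more common than its -ing and -ed forms.
    nounish = counts.get("stem", 0) + counts.get("s", 0)
    verbish = counts.get("ing", 0) + counts.get("ed", 0)
    if verbish == 0:
        # No -ing/-ed tokens at all: treat any stem/-s evidence as nominal.
        return nounish > 0
    return nounish / verbish >= ratio_threshold

# Invented counts in the spirit of the Table 9 contrast.
profiles = {
    "board":   {"stem": 200, "s": 150, "ing": 4,  "ed": 6},
    "operate": {"stem": 30,  "s": 25,  "ing": 20, "ed": 35},
}
for word, counts in profiles.items():
    print(word, looks_primarily_nominal(counts))
# board -> True, operate -> False: only board would receive the separate,
# ambiguity-adjusted error-rate estimate.

Words flagged by such a check could then be given their own error-rate estimates, as the text suggests, without deciding which individual occurrences are nominal and which are verbal.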
J93-2002
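As a companion to the discussion of morphological adjustment rules above (the deleted final "e" of bite in biting), here is a minimal sketch of how such a rule might be applied when checking whether a word is attested with the -ing suffix. The rule, the toy vocabulary, and the function names are illustrative assumptions, not the unsupervised rule-learning procedure the text describes.

def attach_ing(stem):
    # Adjustment rule: delete a stem-final silent "e" before -ing
    # (bite -> biting) but keep a double "e" (see -> seeing).
    if stem.endswith("e") and not stem.endswith("ee"):
        return stem[:-1] + "ing"
    return stem + "ing"

def attested_with_ing(stem, vocabulary):
    # Generate the surface form with the adjustment rule and check
    # whether it occurs in the corpus vocabulary.
    return attach_ing(stem) in vocabulary

toy_vocabulary = {"biting", "seeing", "operating", "boards"}
print(attested_with_ing("bite", toy_vocabulary))   # True
print(attested_with_ing("see", toy_vocabulary))    # True
print(attested_with_ing("board", toy_vocabulary))  # False in this toy set

A learner that induced such rules from raw text, given only the suffix list, could apply them in exactly this way to decide whether a word occurs both with and without a given suffix.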
From Grammar to Lexicon: Unsupervised Learning of Lexical Syntax. Imagine a language that is completely unfamiliar; the only means of studying it are an ordinary grammar book and a very large corpus of text. No dictionary is available. How can easily recognized surface grammatical facts be used to extract from a corpus as much syntactic information as possible about individual words? This paper describes an approach based on two principles. First, rely on local morphosyntactic cues to structure rather than trying to parse entire sentences. Second, treat these cues as probabilistic rather than absolute indicators of syntactic structure: apply inferential statistics to the data collected using the cues rather than drawing a categorical conclusion from a single occurrence of a cue. The effectiveness of this approach for inferring the syntactic frames of verbs is supported by experiments on an English corpus using a program called Lerner. Lerner starts out with no knowledge of content words; it bootstraps from determiners, auxiliaries, modals, prepositions, pronouns, complementizers, coordinating conjunctions, and punctuation. Our study is focused on large-scale automatic acquisition of subcategorization frames.
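One way to picture the summary's point about probabilistic cues and inferential statistics is a simple significance test on repeated cue occurrences. The excerpt does not give Lerner's exact test, error rates, or thresholds, so the binomial tail test, the 5 percent per-occurrence error rate, and the 0.02 cutoff below are illustrative assumptions only.

from math import comb

def binomial_tail(k, n, p):
    # Probability of k or more successes in n independent trials,
    # each succeeding with probability p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def sufficient_evidence(k, n, error_rate=0.05, alpha=0.02):
    # Conclude that a verb takes a frame only if observing k spurious
    # cues in n occurrences would be unlikely under the assumed error rate.
    return binomial_tail(k, n, error_rate) < alpha

print(sufficient_evidence(1, 10))   # False: a single cue is not proof
print(sufficient_evidence(6, 40))   # True: repeated cues accumulate into evidence

The same calculation shows why one occurrence of a cue never suffices while many occurrences eventually do, which is the sense in which the cues are probabilistic rather than absolute indicators.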
The Mathematics of Statistical Machine Translation: Parameter Estimation. We describe a series of five statistical models of the translation process and give algorithms for estimating the parameters of these models, given a set of pairs of sentences that are translations of one another. We define a concept of word-by-word alignment between such pairs of sentences; for any given pair of such sentences, each of our models assigns a probability to each of the possible word-by-word alignments. We give an algorithm for seeking the most probable of these alignments. Although the algorithm is suboptimal, the alignment thus obtained accounts well for the word-by-word relationships in the pair of sentences. We have a great deal of data in French and English from the proceedings of the Canadian parliament; accordingly, we have restricted our work to these two languages, but we feel that, because our algorithms have minimal linguistic content, they would work well on other pairs of languages. We also feel, again because of the minimal linguistic content of our algorithms, that it is reasonable to argue that word-by-word alignments are inherent in any sufficiently large bilingual corpus. The growing availability of bilingual, machine-readable texts has stimulated interest in methods for extracting linguistically valuable information from such texts. For example, a number of recent papers deal with the problem of automatically obtaining pairs of aligned sentences from parallel corpora. Brown et al. assert, and Brown, Lai, and Mercer and Gale and Church both show, that it is possible to obtain such aligned pairs of sentences without inspecting the words that the sentences contain. Brown, Lai, and Mercer base their algorithm on the number of words that the sentences contain, while Gale and Church base a similar algorithm on the number of characters that the sentences contain. The lesson to be learned from these two efforts is that simple statistical methods can be surprisingly successful in achieving linguistically interesting goals. Here we address a natural extension of that work: matching up the words within pairs of aligned sentences. In recent papers, Brown et al. propose a statistical approach to machine translation from French to English. In the latter of these papers, they sketch an algorithm for estimating the probability that an English word will be translated into any particular French word, and show that such probabilities, once estimated, can be used together with a statistical
model of the translation process to align the words in an english sentence with the words in its french translation pairs of sentences with words aligned in this way offer a valuable resource for work in bilingual lexicography and machine translationsection 2 is a synopsis of our statistical approach to machine translationfollowing this synopsis we develop some terminology and notation for describing the wordbyword alignment of pairs of sentencesin section 4 we describe our series of models of the translation process and give an informal discussion of the algorithms by which we estimate their parameters from datawe have written section 4 with two aims in mind first to provide the interested reader with sufficient detail to reproduce our results and second to hold the mathematics at the level of college calculusa few more difficult parts of the discussion have been postponed to the appendixin section 5 we present results obtained by estimating the parameters for these models from a large collection of aligned pairs of sentences from the canadian hansard data for a number of english words we show translation probabilities that give convincing evidence of the power of statistical methods to extract linguistically interesting correlations from large corporawe also show automatically derived wordbyword alignments for several sentencesin section 6 we discuss some shortcomings of our models and propose modifications to address some of themin the final section we discuss the significance of our work and the possibility of extending it to other pairs of languagesfinally we include two appendices one to summarize notation and one to collect the formulae for the various models that we describe and to fill an occasional gap in our developmentin 1949 warren weaver suggested applying the statistical and cryptanalytic techniques then emerging from the nascent field of communication theory to the problem of using computers to translate text from one natural language to another efforts in this direction were soon abandoned for various philosophical and theoretical reasons but at a time when the most advanced computers were of a piece with today digital watch any such approach was surely doomed to computational starvationtoday the fruitful application of statistical methods to the study of machine translation is within the computational grasp of anyone with a wellequipped workstationa string of english words e can be translated into a string of french words in many different waysoften knowing the broader context in which e occurs may serve to winnow the field of acceptable french translations but even so many acceptable translations will remain the choice among them is largely a matter of tastein statistical translation we take the view that every french string f is a possible translation of e we assign to every pair of strings a number pr which we interpret as the probability that a translator when presented with e will produce f as his translationwe further take the view that when a native speaker of french produces a string of french words he has actually conceived of a string of english words which he translated mentallygiven a french string f the job of our translation system is to find the string e that the native speaker had in mind when he produced f we minimize our chance of error by choosing that english string e for which pr is greatestusing bayes theorem we can write since the denominator here is independent of e finding ê is the same as finding e so as to make the product pr pr as large as 
possiblewe arrive then at the fundamental equation of machine translation as a representation of the process by which a human being translates a passage from french to english this equation is fanciful at bestone can hardly imagine someone rifling mentally through the list of all english passages computing the product of the a priori probability of the passage pr and the conditional probability of the french passage given the english passage poi einstead there is an overwhelming intuitive appeal to the idea that a translator proceeds by first understanding the french and then expressing in english the meaning that he has thus graspedmany people have been guided by this intuitive picture when building machine translation systemsfrom a purely formal point of view on the other hand equation is completely adequatethe conditional distribution poi e is just an enormous table that associates a real number between zero and one with every possible pairing of a french passage and an english passagewith the proper choice for this distribution translations of arbitrarily high quality can be achievedof course to construct pr by examining individual pairs of french and english passages one by one is out of the questioneven if we restrict our attention to passages no longer than a typical novel there are just too many such pairsbut this is only a problem in practice not in principlethe essential question for statistical translation then is not a philosophical one but an empirical one can one construct approximations to the distributions pr and pr that are good enough to achieve an acceptable quality of translationequation summarizes the three computational challenges presented by the practice of statistical translation estimating the language model probability pr estimating the translation model probability pr and devising an effective and efficient suboptimal search for the english string that maximizes their productwe call these the language modeling problem the translation modeling problem and the search problemthe language modeling problem for machine translation is essentially the same as that for speech recognition and has been dealt with elsewhere in that context we hope to deal with the search problem in a later paperin this paper we focus on the translation modeling problembefore we turn to this problem however we should address an issue that may be a concern to some readers why do we estimate pr and poi e rather than estimate pr directlywe are really interested in this latter probabilitywould not we reduce our problems from three to two by this direct approachif we can estimate poi e adequately why cannot we just turn the whole process around to estimate pr to understand this imagine that we divide french and english strings into those that are wellformed and those that are illformedthis is not a precise notionwe have in mind that strings like il va a la bibliotheque or i live in a house or even colorless green ideas sleep furiously are wellformed but that strings like a la va il bibliotheque or a i in live house are notwhen we translate a french string into english we can think of ourselves as springing from a wellformed french string into the sea of wellformed english strings with the hope of landing on a good oneit is important therefore that our model for pr concentrate its probability as much as possible on wellformed english stringsbut it is not important that our model for poi e concentrate its probability on wellformed french stringsif we were to reduce the probability of all wellformed 
french strings by the same factor spreading the probability thus liberated over illformed french strings there would be no effect on our translations the argument that maximizes some function f also maximizes cf for any positive constant c as we shall see below our translation models are prodigal spraying probability all over the place most of it on illformed french stringsin fact as we discuss in section 45 two of our models waste much of their probability on things that are not strings at all having for example several different second words but no first wordif we were to turn one of these models around to model pr directly the result would be a model with so little probability concentrated on wellformed english strings as to confound any scheme to discover onethe two factors in equation cooperatethe translation model probability is large for english strings whether well or illformed that have the necessary words in them in roughly the right places to explain the frenchthe language model probability is large for wellformed english strings regardless of their connection to the frenchtogether they produce a large probability for wellformed english strings that account well for the frenchwe cannot achieve this simply by reversing our translation modelswe say that a pair of strings that are translations of one another form a translation and we show this by enclosing the strings in parentheses and separating them by a vertical barthus we write the translation to show that what could we have done is a translation of quaurionsnous pu fairewhen the strings end in sentences we usually omit the final stop unless it is a question mark or an exclamation pointbrown et al introduce the idea of an alignment between a pair of strings as an object indicating for each word in the french string that word in the english string from which it arosealignments are shown graphically as in figure 1 by drawing lines which we call connections from some of the english words to some of the french wordsthe alignment in figure 1 has seven connections and so onfollowing the notation of brown et al we write this alignment as program has been implementedthe list of numbers following an english word shows the positions in the french string of the words to which it is connectedbecause and is not connected to any french words here there is no list of numbers after itwe consider every alignment to be correct with some probability and so we find the program has been implemented perfectly acceptableof course we expect it to be much less probable than the alignment shown in figure 1in figure 1 each french word is connected to exactly one english word but more general alignments are possible and may be appropriate for some translationsfor example we may have a french word connected to several english words as in figure 2 which we write as balance was the territory of the aboriginal peoplemore generally still we may have several french words connected to several english words as in figure 3 which we write as poor do not have any moneyhere the four english words do not have any money work together to generate the two french words sont demunisin a figurative sense an english passage is a web of concepts woven together according to the rules of english grammarwhen we look at a passage we cannot see the concepts directly but only the words that they leave behindto show that these words are related to a concept but are not quite the whole story we say that they form a ceptsome of the words in a passage may participate in more than one 
cept while others may participate in none serving only as a sort of syntactic glue to bind the whole togetherwhen a passage is translated into french each of its cepts contributes some french words to the translationwe formalize this use of the term cept and relate it to the idea of an alignment as followswe call the set of english words connected to a french word in a particular alignment the cept that generates the french wordthus an alignment resolves an english string into a set of possibly overlapping cepts that we call the ceptual scheme of the english string with respect to the alignmentthe alignment in figure 3 contains the three cepts the poor and do not have any moneywhen one or more of the french words is connected to no english words we say that the ceptual scheme includes the empty cept and that each of these words has been generated by this empty ceptformally a cept is a subset of the positions in the english string together with the words occupying those positionswhen we write the words that make up a cept we sometimes affix a subscript to each one showing its positionthe alignment in figure 2 includes the cepts thei and 016 the7 but not the cepts of6 thei or the7in applaud the decision a is generated by the empty ceptalthough the empty cept has no position we place it by convention in position zero and write it as eothus we may also write the previous alignment as 1 applaud the decisionwe denote the set of alignments of by 4if e has length 1 and f has length in there are m different connections that can be drawn between them because each of the m french words can be connected to any of the 1 english wordssince an alignment is determined by the connections that it contains and since a subset of the possible connections can be chosen in 2quot ways there are 2quot alignments in ain this section we develop a series of five translation models together with the algorithms necessary to estimate their parameterseach model gives a prescription for computing the conditional probability pr which we call the likelihood of the translation this likelihood is a function of a large number of free parameters that we must estimate in a process that we call trainingthe likelihood of a set of translations is the product of the likelihoods of its membersin broad outline our plan is to guess values for these parameters and then to apply the them algorithm iteratively so as to approach a local maximum of the likelihood of a particular set of translations that we call the training datawhen the likelihood of the training data has more than one local maximum the one that we approach will depend on our initial guessin models 1 and 2 we first choose a length for the french string assuming all reasonable lengths to be equally likelythen for each position in the french string we decide how to connect it to the english string and what french word to place therein model 1 we assume all connections for each french position to be equally likelytherefore the order of the words in e and f does not affect prin model 2 we make the more realistic assumption that the probability of a connection depends on the positions it connects and on the lengths of the two stringstherefore for model 2 pr does depend on the order of the words in e and f although it is possible to obtain interesting correlations between some pairs of frequent words in the two languages using models 1 and 2 as we will see later these models often lead to unsatisfactory alignmentsin models 3 4 and 5 we develop the french string by choosing for 
each word in the english string first the number of words in the french string that will be connected to it then the identity of these french words and finally the actual positions in the french string that these words will occupyit is this last step that determines the connections between the english string and the french string and it is here that these three models differin model 3 as in model 2 the probability of a connection depends on the positions that it connects and on the lengths of the english and french stringsin model 4 the probability of a connection depends in addition on the identities of the french and english words connected and on the positions of any other french words that are connected to the same english wordmodels 3 and 4 are deficient a technical concept defined and discussed in section 45briefly this means that they waste some of their probability on objects that are not french strings at allmodel 5 is very much like model 4 except that it is not deficientmodels 14 serve as stepping stones to the training of model 5models 1 and 2 have an especially simple mathematical form so that iterations of the them algorithm can be computed exactlythat is we can explicitly perform sums over all possible alignments for these two modelsin addition model 1 has a unique local maximum so that parameters derived for it in a series of them iterations do not depend on the starting point for the iterationsas explained below we use model 1 to provide initial estimates for the parameters of model 2in model 2 and subsequent models the likelihood function does not have a unique local maximum but by initializing each model from the parameters of the model before it we arrive at estimates of the parameters of the final model that do not depend on our initial estimates of the parameters for model 1in models 3 and 4 we must be content with approximate them iterations because it is not feasible to carry out sums over all possible alignments for these modelsbut while approaching more closely the complexity of model 5 they retain enough simplicity to allow an efficient investigation of the neighborhood of probable alignments and therefore allow us to include what we hope are all of the important alignments in each them iterationin the remainder of this section we give an informal but reasonably precise description of each of the five models and an intuitive account of the them algorithm as applied to themwe assume the reader to be comfortable with lagrange multipliers partial differentiation and constrained optimization as they are presented in a typical college calculus text and to have a nodding acquaintance with random variableson the first time through the reader may wish to jump from here directly to section 5 returning to this section when and if he should desire to understand more deeply how the results reported later are achievedthe basic mathematical object with which we deal here is the joint probability distribution pr where the random variables f and e are a french string and an english string making up a translation and the random variable a is an alignment between themwe also consider various marginal and conditional probability distributions that can be constructed from pr especially the distribution prwe generally follow the common convention of using uppercase letters to denote random variables and the corresponding lowercase letters to denote specific values that the random variables may takewe have already used 1 and in to represent the lengths of the strings e and f and so we 
use l and m to denote the corresponding random variableswhen there is no possibility for confusion or more properly when the probability of confusion is not thereby materially increased we write pr for pr and use similar shorthands throughoutwe can write the likelihood of in terms of the conditional probability pr as the sum here like all subsequent sums over a is over the elements of awe restrict ourselves in this section to alignments like the one shown in figure 1 where each french word has exactly one connectionin this kind of alignment each cept is either a single english word or it is emptytherefore we can assign cepts to positions in the english string reserving position zero for the empty ceptif the english string e e1e2 e1 has 1 words and the french string f firn 1112 fm has m words then the alignment a can be represented by a series ay a1a2 am of m values each between 0 and 1 such that if the word in position j of the french string is connected to the word in position i of the english string then al i and if it is not connected to any english word then a 0without loss of generality we can write this is only one of many ways in which pr can be written as the product of a series of conditional probabilitiesit is important to realize that equation is not an approximationregardless of the form of pr it can always be analyzed into a product of terms in this waywe are simply asserting in this equation that when we generate a french string together with an alignment from an english string we can first choose the length of the french string given our knowledge of the english stringthen we can choose where to connect the first position in the french string given our knowledge of the english string and the length of the french stringthen we can choose the identity of the first word in the french string given our knowledge of the english string the length of the french string and the position in the english string to which the first position in the french string is connected and so onas we step through the french string at each point we make our next choice given our complete knowledge of the english string and of all our previous choices as to the details of the french string and its alignmentthe conditional probabilities on the righthand side of equation cannot all be taken as independent parameters because there are too many of themin model 1 we assume that pr is independent of e and m that pr depends only on 1the length of the english string and therefore must be 1 and that pr depends only on f and eajthe parameters then are pr i and t e pr which we call the translation probability of fl given eawe think of e as some small fixed numberthe distribution of m the length of the french string is unnormalized but this is a minor technical issue of no significance to our computationsif we wish we can think of m as having some finite rangeas long as this range encompasses everything that actually occurs in training data no problems arisewe turn now to the problem of estimating the translation probabilities for model 1the joint likelihood of a french string and an alignment given an english string is we wish to adjust the translation probabilities so as to maximize pr subject to the constraints that for each e following standard practice for constrained maximization we introduce lagrange multipliers a and seek an unconstrained extremum of the auxiliary function an extremum occurs when all of the partial derivatives of h with respect to the components of t and a are zerothat the partial 
derivatives with respect to the components of a be zero is simply a restatement of the constraints on the translation probabilitiesthe partial derivative of h with respect to vi e is where is the kronecker delta function equal to one when both of its arguments are the same and equal to zero otherwisethis partial derivative will be zero provided that superficially equation looks like a solution to the extremum problem but it is not because the translation probabilities appear on both sides of the equal signnonetheless it suggests an iterative procedure for finding a solution given an initial guess for the translation probabilities we can evaluate the righthand side of equation and use the result as a new estimate for tthis process when applied repeatedly is called the them algorithmthat it converges to a stationary point of h in situations like this was first shown by baum and later by others with the aid of equation we can reexpress equation as number of times e connects to f in a we call the expected number of times that e connects to f in the translation the count off given e for and denote it by cby definition where pr pr prif we replace a by a pr then equation can be written very compactly as in practice our training data consists of a set of translations ie le ie so this equation becomes here a serves only as a reminder that the translation probabilities must be normalizedusually it is not feasible to evaluate the expectation in equation exactlyeven when we exclude multiword cepts there are still m alignments possible for model 1 however is special because by recasting equation we arrive at an expression that can be evaluated efficientlythe righthand side of equation is a sum of terms each of which is a monomial in the translation probabilitieseach monomial contains in translation probabilities one for each of the words in idifferent monomials correspond to different ways of connecting words in f to cepts in e with every way appearing exactly onceby direct evaluation we see that an example may help to clarify thissuppose that m 3 and 1 1 and that we write as a shorthand for tthen the lefthand side of equation is t10 tzo t30 t10 t20 t31 quot 41 t21 t30 tll t21 t31 and the righthand side is it is routine to verify that these are the sametherefore we can interchange the sums in equation with the product to obtain if we use this expression in place of equation when we write the auxiliary function in equation we find that count of e in e thus the number of operations necessary to calculate a count is proportional to 1 m rather than to right now as equation might suggestusing equations and we can estimate the parameters t as followsthe details of our initial guesses for t are unimportant because pr has a unique local maximum for model 1 as is shown in appendix bwe start with all of the t equal but any other choice that avoids zeros would lead to the same final solutionin model 1 we take no cognizance of where words appear in either stringthe first word in the french string is just as likely to be connected to a word at the end of the english string as to one at the beginningin model 2 we make the same assumptions as in model 1 except that we assume that pr depends on j al and in as well as on 1we introduce a set of alignment probabilities therefore we seek an unconstrained extremum of the auxiliary function the reader will easily verify that equations and carry over from model 1 to model 2 unchangedwe need a new count c the expected number of times that the word in position j of f is 
connected to the word in position i of e clearly notice that if f does not have length m or if e does not have length then the corresponding count is zeroas with the as in earlier equations the its here serve simply to remind us that the alignment probabilities must be normalizedmodel 2 shares with model 1 the important property that the sums in equations and can be obtained efficientlywe can rewrite equation as equation has a double sum rather than the product of two single sums as in equation because in equation i and j are tied together through the alignment probabilitiesmodel 1 is the special case of model 2 in which a is held fixed at 1therefore any set of parameters for model 1 can be reinterpreted as a set of parameters for model 2taking as our initial estimates of the parameters for model 2 the parameter values that result from training model 1 is equivalent to computing the probabilities of all alignments as if we were dealing with model 1 but then collecting the counts as if we were dealing with model 2the idea of computing the probabilities of the alignments using one model but collecting the counts in a way appropriate to a second model is very general and can always be used to transfer a set of parameters from one model to anotherwe created models 1 and 2 by making various assumptions about the conditional probabilities that appear in equation as we have mentioned equation is an exact statement but it is only one of many ways in which the joint likelihood of f and a can be written as a product of conditional probabilitieseach such product corresponds in a natural way to a generative process for developing f and a from e in the process corresponding to equation we first choose a length for f next we decide which position in e is connected to fi and what the identity of fi isthen we decide which position in e is connected to 12 and so onfor models 3 4 and 5 we write the joint likelihood as a product of conditional probabilities in a different waycasual inspection of some translations quickly establishes that the is usually translated into a single word but is sometimes omitted or that only is often translated into one word but sometimes into two and sometimes into nonethe number of french words to which e is connected in a randomly selected alignment is a random variable e 0 for this random variablebut the relationship is remote just what change will be wrought in the distribution of 43the if say we adjust a is not immediately clearin models 3 4 and 5 we parameterize fertilities directlyas a prolegomenon to a detailed discussion of models 3 4 and 5 we describe the generative process upon which they are basedgiven an english string e we first decide the fertility of each word and a list of french words to connect to itwe call this list which may be empty a tabletthe collection of tablets is a random variable t that we call the tableau of e the tablet for the ith english word is a random variable ti and the kth french word in the ith tablet is a random variable tjkafter choosing the tableau we permute its words to produce f this permutation is a random variable h the position in f of the kth word in the ith tablet is yet another a random variable ilzk the joint likelihood for a tableau t and a permutation 7r is in this equation r1c1 represents the series of values 7 7rik1 represents the series of values 7ri1 7rik_i and 0 is shorthand for knowing t and 7r determines a french string and an alignment but in general several different pairs r may lead to the same pair f awe denote the 
set of such pairs by clearly then two tableaux for one alignmentthe number of elements in is 1110 oi because for each 7 there are oz arrangements that lead to the pair f afigure 4 shows the two tableaux for except for degenerate cases there is one alignment in 4 for which pr is greatestwe call this the viterbi alignment for and denote it by vwe know of no practical algorithm for finding v for a general modelindeed if someone were to claim that he had found v we know of no practical algorithm for demonstrating that he is correctbut for model 2 finding v is straightforwardfor each j we simply choose aj so as to make the product ta as large as possiblethe viterbi alignment depends on the model with respect to which it is computedwhen we need to distinguish between the viterbi alignments for different models we write v v and so onwe denote by 4___1 the set of alignments for which al iwe say that ij is pegged in these alignmentsby the pegged viterbi alignment for ij which we write vi_1 we mean that element of ai_1 for which pr is greatestobviously we can find v_ and 17_1 quickly with a straightforward modification of the algorithm described above for finding v and vmodel 3 is based on equation earlier we were unable to treat each of the conditional probabilities on the righthand side of equation as a separate parameterwith equation we are no better off and must again make assumptions to reduce the number of independent parametersthere are many different sets of assumptions that we might make each leading to a different model for the translation processin model 3 we assume that for i between 1 and pr depends only on oi and ei that for all i pr depends only on rik and ei and that for i between 1 and pr depends only on 7rik i m and the parameters of model 3 are thus a set of fertility probabilities n e pr a set of translation probabilities t pr and a set of distortion probabilities d a pr ewe treat the distortion and fertility probabilities for e0 differentlythe empty cept conventionally occupies position 0 but actually has no positionits purpose is to account for those words in the french string that cannot readily be accounted for by other cepts in the english stringbecause we expect these words to be spread uniformly throughout the french string and because they are placed only after all of the other peter f brown et al the mathematics of statistical machine translation words in the string have been placed we assume that pr equals 0 unless position j is vacant in which case it equals 1therefore the contribution of the distortion probabilities for all of the words in to is 100we expect q50 to depend on the length of the french string because longer strings should have more extraneous wordstherefore we assume that for some pair of auxiliary parameters po and pithe expression on the lefthand side of this equation depends on 01 only through the sum 01 0 and defines a probability distribution over o whenever po and pi are nonnegative and sum to 1we can interpret pr as followswe imagine that each of the words from ti requires an extraneous word with probability pi and that this extraneous word must be connected to the empty ceptthe probability that exactly cbo of the words from ti will require an extraneous word is just the expression given in equation as with models 1 and 2 an alignment of is determined by specifying al for each position in the french stringthe fertilities 00 through 0 are functions of the ais 0 is equal to the number of js for which aj equals i therefore with ef t 1 e1 d 1 e0 n 1 
and po pi 1the assumptions that we make for model 3 are such that each of the pairs in makes an identical contribution to the sum in equation the factorials in equation come from carrying out this sum explicitlythere is no factorial for the empty cept because it is exactly canceled by the contribution from the distortion probabilitiesby now the reader will be able to provide his or her own auxiliary function for seeking a constrained minimum of the likelihood of a translation with model 3 but for completeness and to establish notation we write h e a 1 eitina 1 _eve 1 following the trail blazed with models 1 and 2 we define the counts the counts in these last two equations correspond to the parameters po and p1 that determine the fertility of the empty cept in the english stringthe reestimation formulae for model 3 are equations and are identical to equations and and are repeated here only for convenienceequations and are similar to equations and but a differs from d in that the former sums to unity over all i for fixed j while the latter sums to unity over all j for fixed i equations and for the fertility parameters are newthe trick that allows us to evaluate the righthand sides of equations and efficiently for model 2 does not work for model 3because of the fertility parameters we cannot exchange the sums over al through am with the product over j in equation as we were able to for equations and we are not however entirely bereft of hopethe alignment is a useful device precisely because some alignments are much more probable than othersour strategy is to carry out the sums in equations and only over some of the more probable alignments ignoring the vast sea of much less probable onesspecifically we begin with the most probable alignment that we can find and then include all alignments that can be obtained from it by small changesto define unambiguously the subset s of the elements of a over which we evaluate the sums we need yet more terminologywe say that two alignments a and a differ by a move if there is exactly one value of j for which ai 0 we say that they differ by a swap if aj ai except at two values ii and j2 for which let b be that neighbor of a for which the likelihood prlf e is greatestsuppose that ij is pegged for aamong the neighbors of a for which ij is also pegged let b_1 be that for which the likelihood is greatestthe sequence of alignments a b b2 b converges in a finite number of steps to an alignment that we write as because similarly if ij is pegged for a the sequence of alignments a notice that op is the fertility of the word in position i for alignment athe fertility of this word in alignment a is 0 1similar equations can be easily derived when either i or i is zero or when a and a differ by a swapwe leave the details to the readerwith these preliminaries we define s by s ar you v in this equation we use b and lf j as handy approximations to v and vij neither of which we are able to compute efficientlyin one iteration of the them algorithm for model 3 we compute the counts in equations summing only over elements of s and then use these counts in equations to obtain a new set of parametersif the error made by including only some of the elements of 4 is not too great this iteration will lead to values of the parameters for which the likelihood of the training data is at least as large as for the first set of parameterswe make no initial guess of the parameters for model 3 but instead adapt the parameters from the final iteration of the them algorithm for model 2that is 
we compute the counts in equations using model 2 to evaluate prthe simple form of model 2 again makes exact calculation feasiblewe can readily adapt equations and to compute counts for the translation and distortion probabilities efficient calculation of the fertility counts is more involved and we defer a discussion of it to appendix bthe reader will have noticed a problem with our parameterization of the distortion probabilities in model 3 whereas we can see by inspection that the sum over all pairs y 7 of the expression on the righthand side of equation is unity it is equally clear that this can no longer be the case if we assume that pr depends only on i m and 1 for i 0because the distortion probabilities for assigning positions to later words do not depend on the positions assigned to earlier words model 3 wastes some of its probability on what we might call generalized strings ie strings that have some positions with several words and others with nonewhen a model has this property of not concentrating all of its probability on events of interest we say that it is deficientdeficiency is the price that we pay for the simplicity that allows us to write equation deficiency poses no serious problem herealthough models 1 and 2 are not technically deficient they are surely spiritually deficienteach assigns the same probability to the alignments do not have a pen and do not have a pen and therefore essentially the same probability to the translations and in each case not produces two words ne and pas and in each case one of these words ends up in the second position of the french string and the other in the fourth positionthe first translation should be much more probable than the second but this defect is of little concern because while we might have to translate the first string someday we will never have to translate the secondwe do not use our translation models to predict french given english but rather as a component of a system designed to predict english given frenchthey need only be accurate to within a constant factor over wellformed strings of french wordsoften the words in an english string constitute phrases that are translated as units into frenchsometimes a translated phrase may appear at a spot in the french string different from that at which the corresponding english phrase appears in the english stringthe distortion probabilities of model 3 do not account well for this tendency of phrases to move around as unitsmovement of a long phrase will be much less likely than movement of a short phrase because each word must be moved independentlyin model 4 we modify our treatment of pr e so as to alleviate this problemwords that are connected to the empty cept do not usually form phrases and so we continue to assume that these words are spread uniformly throughout the french stringas we have described an alignment resolves an english string into a ceptual scheme consisting of a set of possibly overlapping ceptseach of these cepts then accounts for one or more french wordsin model 3 the ceptual scheme for an alignment is determined by the fertilities of the words a word is a cept if its fertility is greater than zerothe empty cept is a part of the ceptual scheme if 00 is greater than zeroas before we exclude multiword ceptsamong the oneword cepts there is a natural order corresponding to the order in which they appear in the english stringlet ii denote the position in the english string of the ith oneword ceptwe define the center of this cept 0 to be the ceiling of the average value 
of the positions in the french string of the words from its tabletwe define its head to be that word in its tablet for which the position in the french string is smallestin model 4 we replace right now 1 by two sets of parameters one for placing the head of each cept and one for placing any remaining wordsfor i 0 we require that the head for cept i be ri and we assume that pr e oii ia 8 here a and 8 are functions of the english and french words that take on a small number of different values as their arguments range over their respective vocabulariesbrown et al describe an algorithm for dividing a vocabulary into classes so as to preserve mutual information between adjacent classes in running textwe construct a and b as functions with 50 distinct values by dividing the english and french vocabularies each into 50 classes according to this algorithmby assuming that the probability depends on the previous cept and on the identity of the french word being placed we can account for such facts as the appearance of adjectives before nouns in english but after them in frenchwe call j 011 the displacement for the head of cept iit may be either positive or negativewe expect di bm to be larger than d18 when e is an adjective and f is a nounindeed this is borne out in the trained distortion probabilities for model 4 where we find that di is 07986 while d1 b is 00168suppose now that we wish to place the kth word of cept i for i 0 k 1we assume that we require that irk be greater than rk_isome english words tend to produce a series of french words that belong together while others tend to produce a series of words that should be separatefor example implemented can produce mis en application which usually occurs as a unit but not can produce ne pas which often occurs with an intervening verbwe expect d1 to be relatively large compared with di after training we find that di is 06847 and d1 is 01533whereas we assume that 7ni can be placed either before or after any previously positioned words we require subsequent words from tn to be placed in orderthis does not mean that they must occupy consecutive positions but only that the second word from tn must lie to the right of the first the third to the right of the second and so onbecause of this only one of the om arrangements of rt is possiblewe leave the routine details of deriving the count and reestimation formulae for model 4 to the readerhe may find the general formulae in appendix b helpfulonce again the several counts for a translation are expectations of various quantities over the possible alignments with the probability of each alignment computed from an earlier estimate of the parametersas with model 3 we know of no trick for evaluating these expectations and must rely on sampling some small set s of alignmentsas described above the simple form that we assume for the distortion probabilities in model 3 makes it possible for us to find pc rapidly for any athe analog of equation for model 4 is complicated by the fact that when we move a french word from cept to cept we change the centers of two cepts and may affect the contribution of several wordsit is nonetheless possible to evaluate the adjusted likelihood incrementally although it is substantially more timeconsumingfaced with this unpleasant situation we proceed as followslet the neighbors of a be ranked so that the first is the neighbor for which pr is greatest the second the one for which pr is next greatest and so onwe define b to be the highestranking neighbor of a for which pr le f 4 is at 
least as large as prwe define 6_1 analogouslyhere pr means pr as computed with model 3 and pr means pr as computed with model 4we define s for model 4 by n you this equation is identical to equation except that b has been replaced by bmodels 3 and 4 are both deficientin model 4 not only can several words lie on top of one another but words can be placed before the first position or beyond the last position in the french stringwe remove this deficiency in model 5after we have placed the words for 411 and riik1 there will remain some vacant positions in the french stringobviously tik should be placed in one of these vacanciesmodels 3 and 4 are deficient precisely because we fail to enforce this constraint for the oneword ceptslet v be the number of vacancies up to and including position j just before we place ttipcin the interest of notational brevity a noble but elusive goal we write this simply as v1we retain two sets of distortion parameters as in model 4 and continue to refer to them as d1 and d1we assume that for ii 0 7 e voi_i vin 1 the number of vacancies up to j is the same as the number of vacancies up to j 1 only when j is not itself vacantthe last factor therefore is 1 when j is vacant and 0 otherwisein the final parameter of d1 um is the number of vacancies remaining in the french stringif on 1 then rto may be placed in any of these vacancies if oki 2 7ni may be placed in any but the last of these vacancies in general rii may be placed in any but the rightmost on 1 of the remaining vacanciesbecause rto must occupy the leftmost place of any of the words from tn we must take care to leave room at the end of the string for the remaining words from this tabletas with model 4 we allow d1 to depend on the center of the previous cept and on fj but we suppress the dependence on eu_ii since we should otherwise have too many parametersfor ii 0 and k 1 we assume again the final factor enforces the constraint that ttipc land in a vacant position and again we assume that the probability depends on 4 only through its classmodel 5 is described in more detail in appendix bas with model 4 we leave the details of the count and reestimation formulae to the readerno incremental evaluation of the likelihood of neighbors is possible with model 5 because a move or swap may require wholesale recomputation of the likelihood of an alignmenttherefore when we evaluate expectations for model 5 we include only the alignments in s as defined in equation we further trim these alignments by removing any alignment a for which pr is too much smaller than pr le f 4model 5 is a powerful but unwieldy ally in the battle to align translationsit must be led to the battlefield by its weaker but more agile brethren models 2 3 and 4in fact this is the raison dêtre of these modelsto keep them aware of the lay of the land we adjust their parameters as we carry out iterations of the them algorithm for model 5that is we collect counts for models 2 3 and 4 by summing over alignments as determined by the abbreviated s described above using model 5 to compute pralthough this appears to increase the storage necessary for maintaining counts as we proceed through the training data the extra burden is small because the overwhelming majority of the storage is devoted to counts for t and these are the same for models 2 3 4 and 5we have used a large collection of training data to estimate the parameters of the models described abovebrown lai and mercer have described an algorithm with which one can reliably extract french and english 
sentences that are translations of one another from parallel corporathey used the algorithm to extract a large number of translations from several years of the proceedings of the canadian parliamentfrom these translations we have chosen as our training data those for which both the english sentence and the french sentence are 30 or fewer words in lengththis is a collection of 1778620 translationsin an effort to eliminate some of the typographical errors that abound in the text we have chosen as our english vocabulary all of those words that appear at least twice in english sentences in our data and as our french vocabulary all of those words that appear at least twice in french sentences in our dataall other words we replace with a special unknown english word or unknown french word accordingly as they appear in an english sentence or a french sentencewe arrive in this way at an english vocabulary of 42005 words and a french vocabulary of 58016 wordssome typographical errors are quite frequent for example momento for memento and so our vocabularies are not completely free of themat the same time some words are truly rare and so we have in some cases snubbed legitimate wordsadding eo to the english vocabulary brings it to 42006 wordswe have carried out 12 iterations of the them algorithm for this datawe initialized the process by setting each of the 2 437 020 096 translation probabilities t to 158016that is we assume each of the 58016 words in the french vocabulary to be equally likely as a translation for each of the 42006 words in the english vocabularyfor t to be greater than zero at the maximum likelihood solution for one of our models f and e must occur together in at least one of the translations in our training datathis is the case for only 25 427 016 pairs or about one percent of all translation probabilitieson the average then each english word appears with about 605 french wordstable 1 summarizes our training computationat each iteration we compute the probabilities of the various alignments of each translation using one model and collect counts using a second modelthese are referred to in the table as the in model and the out model respectivelyafter each iteration we retain individual values only for those translation probabilities that surpass a threshold the remainder we set to a small value this value is so small that it does not affect the normalization conditions but is large enough that translation probabilities can be resurrected during later iterationswe see in columns 4 and 5 that even though we lower the threshold as iterations progress fewer and fewer probabilities surviveby the final iteration only 1 658 364 probabilities survive an average of about 39 french words for each english wordalthough the entire t array has 2 437 020 096 entries and we need to store it twice once as probabilities and once as counts it is clear from the preceeding remarks that we need never deal with more than about 25 million counts or about 12 million probabilitieswe store these two arrays using standard sparse matrix techniqueswe keep counts as pairs of bytes but allow for overflow into 4 bytes if necessaryin this way it is possible to run the training program in less than 100 megabytes of memorywhile this number would have seemed extravagant a few years ago today it is available at modest cost in a personal workstationas we have described when the in model is neither model 1 nor model 2 we evaluate the count sums over only some of the possible alignmentsmany of these alignments have a 
probability much smaller than that of the viterbi alignmentthe column headed alignments in table 1 shows the average number of alignments for which the probability is within a factor of 25 of the probability of the viterbi alignment in each iterationas this number drops the model concentrates more and more probability onto fewer and fewer alignments so that the viterbi alignment becomes ever more dominantthe last column in the table shows the perplexity of the french text given the english text for the in model of the iterationwe expect the likelihood of the training data to increase with each iterationwe can think of this likelihood as arising from a product of factors one for each french word in the training datawe have 28850 104 french words in our training data so the 28850 104th root of the likelihood is the average factor by which the likelihood is reduced for each additional french wordthe reciprocal of this root is the perplexity shown in the tableas the likelihood increases the perplexity decreaseswe see a steady decrease in perplexity as the iterations progress except when we switch from model 2 as the in model to model 3this sudden jump is not because model 3 is a poorer model than model 2 but because model 3 is deficient the great majority of its probability is squandered on objects that are not strings of french wordsas we have argued deficiency is not a problemin our description of model 1 we left pr unspecifiedin quoting perplexities for models 1 and 2 we have assumed that the length of the french string is poisson with a mean that is a linear function of the length of the english stringspecifically we have assumed that pr two interesting changes evolve over the course of the iterationsin the alignment for model 1 ii is correctly connected to he but in all later alignments il is incorrectly connected to itmodels 2 3 and 5 discount a connection of he to il because it is quite far awaywe do not yet have a model with sufficient linguistic sophistication to make this connection properlyon the other hand we see that nodding which in models 1 2 and 3 is connected only to signe and oui is correctly connected to the entire phrase faire signe que oui in model 5in the second example models 1 2 and 3 incorrectly connect profits4 to both profits3 and realises7 but with model 5 profits4 is correctly connected only to profits3 and made7 is connected to reaises7finally in promisesi is connected to both instances of promesses with model 1 promises3 is connected to most of the french sentence with model 2 the final punctuation of the english sentence is connected to both the exclamation point and curiously to de5 with model 3 and only with model 5 do we have a satisfying alignment of the two sentencesthe orthography for the french sentence in the second example is voyez les profits quils ont realises and in the third example is des promesses des promesseswe have restored the e to the end figure 5 the progress of alignments with iteration of qu and have twice analyzed des into its constituents de and leswe commit these and other petty pseudographic improprieties in the interest of regularizing the french textin all cases orthographic french can be recovered by rule from our corrupted versionsfigures 615 show the translation probabilities and fertilities after the final iteration of training for a number of english wordswe show all and only those probabilities that are greater than 001some words like nodding in figure 6 do not slip gracefully into frenchthus we have translations like or as a 
result nodding frequently has a large fertility and spreads its translation probability over a variety of wordsin french what is worth saying is worth saying in many different wayswe see another facet of this with words like should in figure 7 which rarely has a fertility greater than one but still produces many different words among them devrait devraient devrions doit doivent devons and devraisthese are forms of the french verb devoiradjectives fare a little better national in figure 8 almost never produces more than one word and confines itself to one of nationale national nationaux and nationales respectively the feminine the masculine the masculine plural and the feminine plural of the corresponding french adjectiveit is clear that our models would benefit from some kind of morphological processing to rein in the lexical exuberance of frenchwe see from the data for the in figure 9 that it produces le la les and l as we would expectits fertility is usually 1 but in some situations english prefers an article where french does not and so about 14 of the time its fertility is 0sometimes as with farmers in figure 10 it is french that prefers the articlewhen this happens the english noun trains to produce its translation together with an articlethus farmers translation and fertility probabilities for nodding typically has a fertility 2 and usually produces either agriculteurs or leswe include additional examples in figures 11 through 15 which show the translation and fertility probabilities for external answer oil former and notalthough we show the various probabilities to three decimal places one must realize that the specific numbers that appear are peculiar to the training data that we used in obtaining themthey are not constants of nature relating the platonic ideals of eternal english and eternal frenchhad we used different sentences as training data we might well have arrived at different numbersfor example in figure 9 we see that t 0497 while the corresponding number from figure 4 of brown et al is 0610the difference arises not from some instability in the training algorithms or some subtle shift in the languages in recent years but from the fact that we have used 1778620 pairs of sentences covering virtually the complete vocabulary of the hansard data for training while they used only 40000 pairs of sentences and restricted their attention to the 9000 most common words in each of the two vocabulariesfigures 16 17 and 18 show automatically derived alignments for three translationsin the terminology of section 46 each alignment is bquotwe stress that these alignments have been found by an algorithm that involves no explicit knowledge of either french or englishevery fact adduced to support them has been discovered algorithmically from the 1 778 620 translations that constitute our training datathis data in turn is the product of an algorithm the sole linguistic input of which is a set of rules explaining how to find sentence boundaries in the two languageswe may justifiably claim therefore that these alignments are inherent in the canadian hansard data itselfin the alignment shown in figure 16 all but one of the english words has fertility 1the final prepositional phrase has been moved to the front of the french sentence but otherwise the translation is almost verbatimnotice however that the new proposal has been translated into les nouvelles propositions demonstrating that number is not an invariant under translationthe empty cept has fertility 5 hereit generates eni de3 the comma 
de16, and dela. translation and fertility probabilities for the: t(f|the): le 0.497, la 0.207, les 0.155, l' 0.086, ce 0.018, cette 0.011; n(phi|the): phi=1 0.746, phi=0 0.254. translation and fertility probabilities for farmers: t(f|farmers): agriculteurs 0.442, les 0.418, cultivateurs 0.046, producteurs 0.021; n(phi|farmers): phi=2 0.731, phi=1 0.228, phi=0 0.039. (figure captions only: translation and fertility probabilities for oil; translation and fertility probabilities for not.) in figure 17, two of the english words have fertility 0, one has fertility 2, and one, embattled, has fertility 5. embattled is another word, like nodding, that eludes the french grasp and comes with a panoply of multiword translations. the final example, in figure 18, has several features that bear comment. the second word, speaker, is connected to the sequence l'orateur. like farmers above, it has trained to produce both the word that we naturally think of as its translation and the associated article. in our data, speaker always has fertility 2 and produces equally often l'orateur and le president. later in the sentence, starred is connected to the phrase marquees de un asterisque. from an initial situation in which each french word is equally probable as a translation of starred, we have arrived through training at a situation where it is possible to connect starred to just the right string of four words. near the end of the sentence, give is connected to donnerai, the first person singular future of donner, which means to give. we should be more comfortable if both will and give were connected to donnerai, but by limiting cepts to no more than one word we have precluded this possibility. finally, the last 12 words of the english sentence, i now have the answer and will give it to the house, clearly correspond to the last 7 words of the french sentence, je donnerai la reponse a la chambre, but literally the french is i will give the answer to the house. there is nothing about now, have, and, or it, and each of these words has fertility 0. translations that are as far as this from the literal are rather more the rule than the exception in our training data. one might cavil at the connection of la reponse to the answer rather than to it; we do not. models 1-5 provide an effective means for obtaining word-by-word alignments of translations, but as a means to achieve our real goal, which is translation, there is room for improvement. (figure captions for figures 16-18: the best of 1.9 x 10^25 alignments; the best of 8.4 x 10^29 alignments; the best of 5.6 x 10^31 alignments.) we have seen that by ignoring the morphological structure of the two languages we dilute the strength of our statistical model, explaining, for example, each of the several tens of forms of each french verb independently. we have seen that by ignoring multiword cepts we are forced to give a false, or at least an unsatisfactory, account of some features in many translations. and finally, we have seen that our models are deficient, either in fact, as with models 3 and 4, or in spirit, as with models 1, 2, and 5. we have argued in section 2 that neither spiritual nor actual deficiency poses a serious problem, but this is not entirely true. let w(e) be the sum of pr(f|e) over well-formed french strings and let i(e) be the sum over ill-formed french strings. in a deficient model, w(e) + i(e) < 1; if pr(failure|e) = 0 but i(e) > 0, then the model is spiritually deficient. if w(e) were independent of e, neither form of deficiency would pose a problem, but because our models have no long-term constraints, w(e) decreases exponentially with l. when computing alignments, even this creates no problem because e and f are known. if, however, we are given f and asked to discover e, then we will find that the product pr(e)pr(f|e) is too small for long english strings as
compared with short onesas a result we will improperly favor short english stringswe can counteract this tendency in part by replacing pr with c poi e for some empirically chosen constant c this is treatment of the symptom rather than treatment of the disease itself but it offers some temporary reliefthe cure lies in better modelingas we progress from model 1 to model 5 evaluating the expectations that give us counts becomes increasingly difficultfor models 1 and 2 we are able to include the contribution of each of the right now possible alignments exactlyfor later models we include the contributions of fewer and fewer alignmentsbecause most of the probability for each translation is concentrated by these models on a small number of alignments this suboptimal procedure mandated by the complexity of the models yields acceptable resultsin the limit we can contemplate evaluating the expectations using only a single probable alignment for each translationwhen that alignment is the viterbi alignment we call this viterbi trainingit is easy to see that viterbi training converges at each step we reestimate parameters so as to make the current set of viterbi alignments as probable as possible when we use these parameters to compute a new set of viterbi alignments we find either the old set or a set that is yet more probablesince the probability can never be greater than one this process must convergein fact unlike the them algorithm in general it must converge in a finite though impractically large number of steps because each translation has only a finite number of alignmentsin practice we are never sure that we have found the viterbi alignmentif we reinterpret the viterbi alignment to mean the most probable alignment that we can find rather than the most probable alignment that exists then a similarly reinterpreted viterbi training algorithm still convergeswe have already used this algorithm successfully as a part of a system to assign senses to english and french words on the basis of the context in which they appear we expect to use it in models that we develop beyond model 5in models 15 we restrict our attention to alignments with cepts containing no more than one word eachexcept in models 4 and 5 cepts play little role in our developmenteven in these models cepts are determined implicitly by the fertilities of the words in the alignment words for which the fertility is greater than zero make up oneword cepts those for which it is zero do notwe can easily extend the generative process upon which models 3 4 and 5 are based to encompass multiword ceptswe need only include a step for selecting the ceptual scheme and ascribe fertilities to cepts rather than to words requiring that the fertility of each cept be greater than zerothen in equation we can replace the products over words in an english string with products over cepts in the ceptual schemewhen we venture beyond oneword cepts however we must tread lightlyan english string can contain any of 42005 oneword cepts but there are more than 17 billion possible twoword cepts more than 74 trillion threeword cepts and so onclearly one must be discriminating in choosing potential multiword ceptsthe caution that we have displayed thus far in limiting ourselves to cepts with fewer than two words was motivated primarily by our respect for the featureless desert that multiword cepts offer a priorithe viterbi alignments that we have computed with model 5 give us a frame of reference from which to expand our horizons to multiword ceptsby inspecting them we 
can find translations for a given multiword sequencewe need only promote a multiword sequence to cepthood when these translations differ substantially from what we might expect on the basis of the individual words that it containsin english either a boat or a person can be left high and dry but in french un bateau is not left haut et sec nor une personne haute et secherather a boat is left echoue and a person en planhigh and dry therefore is a promising threeword cept because its translation is not compositionalwe treat each distinct sequence of letters as a distinct wordin english for example we recognize no kinship among the several forms of the verb to eat in french irregular verbs have many formsin figure 7 we have already seen 7 forms of devoiraltogether it has 41 different formsand there would be 42 if the french did not inexplicably drop the circumflex from the masculine plural past participle thereby causing it to collide with the first and second person singular in the passé simple no doubt a source of endless confusion for the beleaguered francophonethe french make do with fewer forms for the multitude of regular verbs that are the staple diet of everyday speechthus manger has only 39 forms models 15 must learn to connect the 5 forms of to eat to the 39 forms of mangerin the 28850 104 french words that make up our training data only 13 of the 39 forms of manger actually appearof course it is only natural that in the proceedings of a parliament forms of manger are less numerous than forms of parler but even for parler only 28 of the 39 forms occur in our dataif we were to encounter a rare form of one of these words say parlass ions or mangeassent we would have no inkling of its relationship to speak or eata similar predicament besets nouns and adjectives as wellfor example composition is the among the most common words in our english vocabulary but compositions is among the least common wordswe plan to ameliorate these problems with a simple inflectional analysis of verbs nouns adjectives and adverbs so that the relatedness of the several forms of the same word is manifest in our representation of the datafor example we wish to make evident the common pedigree of the different conjugations of a verb in french and in english of the singular and plural and singular possessive and plural possessive forms of a noun in english of the singular plural masculine and feminine forms of a noun or adjective in french and of the positive comparative and superlative forms of an adjective or adverb in englishthus our intention is to transform into eg here eat is analyzed into a root eat and an ending x3spres that indicates the present tense form used except in the third person singularsimilarly mange is analyzed into a root manger and an ending 13spres that indicates the present tense form used for the first and third persons singularthese transformations are invertible and should reduce the french vocabulary by about 50 and the english vocabulary by about 20we hope that this will significantly improve the statistics in our modelsthat interesting bilingual lexical correlations can be extracted automatically from a large bilingual corpus was pointed out by brown et al the algorithm that they describe is roughly speaking equivalent to carrying out the first iteration of the them algorithm for our model 1 starting from an initial guess in which each french word is equally probable as a translation for each english wordthey were unaware of a connection to the them algorithm but they did realize that 
their method is not entirely satisfactoryfor example once it is clearly established that in it is red that produces rouge one is uncomfortable using this sentence as support for red producing porte or door producing rougethey suggest removing words once a correlation between them has been clearly established and then reprocessing the resulting impoverished translations hoping to recover less obvious correlations now revealed by the departure of their more prominent relativesfrom our present perspective we see that the proper way to proceed is simply to carry out more iterations of the them algorithmthe likelihood for model 1 has a unique local maximum for any set of training dataas iterations proceed the count for porte as a translation of red will dwindle awayin a later paper brown et al describe a model that is essentially the same as our model 3they sketch the them algorithm and show that once trained their model can be used to extract wordbyword alignments for pairs of sentencesthey did not realize that the logarithm of the likelihood for model 1 is concave and hence has a unique local maximumthey were also unaware of the trick by which we are able to sum over all alignments when evaluating the counts for models 1 and 2 and of the trick by which we are able to sum over all alignments when transferring parameters from model 2 to model 3as a result they were unable to handle large vocabularies and so restricted themselves to vocabularies of only 9000 wordsnonetheless they were able to align phrases in french with the english words that produce them as illustrated in their figure 3more recently gale and church describe an algorithm similar to the one described in brown et al like brown et al they consider only the simultaneous appearance of words in pairs of sentences that are translations of one anotheralthough algorithms like these are extremely simple many of the correlations between english and french words are so pronounced as to fall prey to almost any effort to expose themthus the correlation of pairs like and many others simply cannot be missedthey shout from the data and any method that is not stone deaf will hear thembut many of the correlations speak in a softer voice to hear them clearly we must model the translation process as brown et al suggest and as brown et al and the current paper actually doonly in this way can one hope to hear the quiet call of or the whisper of the series of models that we have described constitutes a mathematical embodiment of the powerfully compelling intuitive feeling that a word in one language can be translated into a word or phrase in another languagein some cases there may be several or even several tens of translations depending on the context in which the word appears but we should be quite surprised to find a word with hundreds of mutually exclusive translationsalthough we use these models as part of an automatic system for translating french into english they provide as a byproduct very satisfying accounts of the wordbyword alignment of pairs of french and english stringsour work has been confined to french and english but we believe that this is purely adventitious had the early canadian trappers been manchurians later to be outnumbered by swarms of conquistadores and had the two cultures clung stubbornly each to its native tongue we should now be aligning spanish and chinesewe conjecture that local alignment of the component parts of any corpus of parallel texts is inherent in the corpus itself provided only that it be large 
enoughbetween any pair of languages where mutual translation is important enough that the rate of accumulation of translated examples sufficiently exceeds the rate of mutation of the languages involved there must eventually arise such a corpusthe linguistic content of our program thus far is scant indeedit is limited to one set of rules for analyzing a string of characters into a string of words and another set of rules for analyzing a string of words into a string of sentencesdoubtless even these can be recast in terms of some information theoretic objective functionbut it is not our intention to ignore linguistics neither to replace itrather we hope to enfold it in the embrace of a secure probabilistic framework so that the two together may draw strength from one another and guide us to better natural language processing systems in general and to better machine translation systems in particularwe would like to thank many of our colleagues who read and commented on early versions of the manuscript especially john laffertywe would also like to thank the reviewers who made a number of invaluable suggestions about the organization of the paper and pointed out many weaknesses in our original manuscriptif any weaknesses remain it is not because of their failure to point them out but because of our ineptness at responding adequately to their criticismsenglish vocabulary english word english string random english string length of e random length of e position in e i 0 1 1 word i of e the empty cept french vocabulary french word french string random french string length of f random length of f position in f j 1 2 m word j of f alignment cb length of ti position within a tablet k 1 2 tik word k of ti ir a permutation of the positions of a tableau ik position in f for word k of ti for permutation 7r n neighboring alignments of a neighboring alignments of a with ij pegged b alignment in jv with greatest probability b alignment obtained by applying b repeatedly to a bi_1 alignment in mi with greatest probability biquot i alignment obtained by applying bi1 repeatedly to a a class of english word e b class of french word f a displacement of a word in f vacancies in f pt first position in e to the left of i that has nonzero fertility c average position in f of the words connected to position i of e i position in e of the ith one word cept ci po translation model p with parameter values string length probabilities fertility probabilities fertility probabilities for eo alignment probabilities distortion probabilities distortion probabilities for the first word of a tablet distortion probabilities for the other words of a tablet distortion probabilities for the first word of a tablet distortion probabilities for the other words of a tablet we collect here brief descriptions of our various translation models and the formulae needed for training theman englishtofrench translation model p with parameters 9 is a formula for calculating a conditional probability or likelihood p0 for any string f of french words and any string e of english wordsthese probabilities satisfy where the sum ranges over all french strings f and failure is a special symbol not in the french vocabularywe interpret po as the probability that a translator will produce f when given e and p0 as the probability that he will produce no translation when given e we call a model deficient if p is greater than zero for some e loglikelihood objective functionthe loglikelihood of a sample of translations e s 1 2 s is here c is the empirical 
distribution of the sample so that c is 1s times the number of times that the pair occurs in the samplewe determine values for the parameters 9 so as to maximize this loglikelihood for a large training sample of translationsfor our models the only alignments that have positive probability are those for which each word of f is connected to at most one word of e relative objective functionwe can compare hidden alignment models po and po using the relative objective function where p 0 p0note that are 0r is related to by jensen inequality summing over e and f and using the definitions and we arrive at equation we cannot create a good model or find good parameter values at a strokerather we employ a process of iterative improvementfor a given model we use current parameter values to find better ones and in this way from initial values we find locally optimal onesthen given good parameter values for one model we use them to find initial parameter values for another modelby alternating between these two steps we proceed through a sequence of gradually more sophisticated modelsimproving parameter valuesfrom jensen inequality we see that 0 is greater than 0 if r is positivewith p p this suggests the following between probability distributions p and qhowever whereas the relative entropy is never negative r can take any valuethe inequality for r is the analog of the inequality d 0 for d iterative procedure known as the them algorithm for finding locally optimal parameter values 0 for a model p note that for any a r is nonnegative at its maximum in 0 since it is zero for othus 0 will not decrease from one iteration to the nextgoing from one model to anotherjensen inequality also suggests a method for using parameter values 0 for one model i to find initial parameter values 0 for another model p in contrast to the case where f p there may not be any 0 for which r is nonnegativethus it could be that even for the best 0 11 m the coefficient of x0 on the righthand side of equation must be zeroit follows that we can express zk as a polynomial in zk k 12 m using equation we can identify the coefficient of x4 in equation we obtain equation by combining equations and the definitions and b6 model 4 translation probabilities parameters fertility probabilities qt e fertility probabilities for e0 n distortion probabilities for the first word of a tablet po p1 distortion probabilities for the other words of a tablet cli di here 64 is an integer a is an english class and b is a french classwhere in equation p is the first position to the left of i for which ci 0 and cp is the ceiling of the average position of the words of rp note that equations and are identical to the corresponding formulae and for model 3generationequations describe the following process for producing f or failure from e 14choose a tableau t by following steps 14 for model 3 5for each i 12 l and each k 12 oi choose a position quotffik as followsif k 1 then choose 7r1 according to the distribution a bif k 1 then choose 7rik greater than rk1 according to the distribution 68finish generating f by following steps 68 for model 3b7 model 5 translation probabilities parameters fertility probabilities t fertility probabilities for e0 n distortion probabilities for the first word of a tablet popi distortion probabilities for the other words of a tablet di d1 here v 12 m
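The training regime described above for model 1 -- replace words seen fewer than twice with an unknown token, initialise each t(f|e) uniformly over the french vocabulary for co-occurring pairs only, iterate EM, and prune translation probabilities that drop below a threshold after each iteration so that they can be resurrected later -- can be sketched as follows. This is a minimal in-memory illustration under those assumptions, not the authors' sparse-matrix implementation with byte-packed counts; the function names and the `floor` and `threshold` values are illustrative.

```python
from collections import Counter, defaultdict

def replace_rare_words(sentences, min_count=2, unk="<unk>"):
    # Keep words that appear at least `min_count` times; map the rest to a
    # single unknown token, as done for both vocabularies in the text.
    counts = Counter(w for s in sentences for w in s)
    keep = {w for w, c in counts.items() if c >= min_count}
    return [[w if w in keep else unk for w in s] for s in sentences]

def train_model1(bitext, iterations=12, floor=1e-12, threshold=1e-6):
    """EM for Model 1.  `bitext` is a list of (english, french) token-list
    pairs; the returned dict approximates t(f | e)."""
    french_vocab = {f for _, fs in bitext for f in fs}
    null = "<null>"  # the empty cept e_0
    # Uniform initialisation, but only for (e, f) pairs that co-occur in at
    # least one sentence pair -- the only pairs that can be non-zero at a
    # maximum-likelihood solution.
    t = {}
    for es, fs in bitext:
        for e in es + [null]:
            for f in fs:
                t[(e, f)] = 1.0 / len(french_vocab)
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f; e)
        total = defaultdict(float)   # expected counts c(e)
        for es, fs in bitext:
            es0 = es + [null]
            for f in fs:
                # Posterior over which English position generated f.
                z = sum(t.get((e, f), floor) for e in es0)
                for e in es0:
                    p = t.get((e, f), floor) / z
                    count[(e, f)] += p
                    total[e] += p
        # Re-estimate and prune: pairs below the threshold fall back to
        # `floor`, small enough to leave normalisation essentially intact
        # but large enough for a probability to be resurrected later.
        t = {(e, f): c / total[e]
             for (e, f), c in count.items()
             if c / total[e] > threshold}
    return t
```

In practice one would first pass both sides of the bitext through `replace_rare_words` and then train; the twelve-iteration schedule of table 1 additionally switches the in and out models from model 1 through model 5, which this single-model sketch does not attempt.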
J93-2003
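Two quantities from the training discussion above are easy to make concrete: the perplexity of the french text given the english text, defined as the reciprocal of the M-th root of the training likelihood with M the number of french words (28,850,104 here), and the length distribution Pr(m | e) assumed for models 1 and 2, which the text describes as poisson with a mean that is a linear function of the english length. The sketch below uses mean = lam * l as one simple linear choice; `lam` is an illustrative constant, not a value taken from the paper.

```python
import math

def perplexity(total_log_prob, num_french_words):
    """Reciprocal of the M-th root of the likelihood: exp(-log L / M), where
    `total_log_prob` is the summed natural-log Pr(f | e) over the training
    pairs and M is the total number of French words."""
    return math.exp(-total_log_prob / num_french_words)

def poisson_length_prob(m, l, lam=1.0):
    """Assumed Pr(m | e) for Models 1 and 2: the French length m is Poisson
    distributed with a mean linear in the English length l (here lam * l)."""
    mean = lam * l
    return math.exp(-mean) * mean ** m / math.factorial(m)
```

As the likelihood of the training data rises from one iteration to the next, `total_log_prob` increases and the perplexity in the last column of table 1 falls accordingly.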
the mathematics of statistical machine translation: parameter estimation. we describe a series of five statistical models of the translation process and give algorithms for estimating the parameters of these models given a set of pairs of sentences that are translations of one another. we define a concept of word-by-word alignment between such pairs of sentences. for any given pair of such sentences, each of our models assigns a probability to each of the possible word-by-word alignments. we give an algorithm for seeking the most probable of these alignments; although the algorithm is suboptimal, the alignment thus obtained accounts well for the word-by-word relationships in the pair of sentences. we have a great deal of data in french and english from the proceedings of the canadian parliament; accordingly, we have restricted our work to these two languages, but we feel that because our algorithms have minimal linguistic content they would work well on other pairs of languages. we also feel, again because of the minimal linguistic content of our algorithms, that it is reasonable to argue that word-by-word alignments are inherent in any sufficiently large bilingual corpus. our model for statistical machine translation focuses on word-to-word translation and is based on the noisy-channel approach.
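The summary above refers to a suboptimal algorithm for seeking the most probable word-by-word alignment, and the notation listed earlier (neighbouring alignments, b(a), and its repeated application b-infinity(a)) points to greedy hill-climbing over single-link moves and pairwise swaps. The sketch below is an illustration under those assumptions: an alignment is a tuple `a` with `a[j]` in 0..l (0 standing for the empty cept), and `score(a, e, f)` is a caller-supplied stand-in for the model's Pr(a, f | e); the names and representation are not the authors' implementation.

```python
import itertools

def neighbors(a, l):
    """Alignments differing from `a` by a move (reassign one French position
    to any English position 0..l) or a swap (exchange two links)."""
    m = len(a)
    for j in range(m):
        for i in range(l + 1):
            if i != a[j]:
                b = list(a); b[j] = i
                yield tuple(b)
    for j, k in itertools.combinations(range(m), 2):
        if a[j] != a[k]:
            b = list(a); b[j], b[k] = b[k], b[j]
            yield tuple(b)

def hillclimb(a, e, f, score):
    """Repeatedly jump to the most probable neighbouring alignment until no
    neighbour improves on the current one (the b-infinity operator)."""
    current = tuple(a)
    best = score(current, e, f)
    while True:
        cands = list(neighbors(current, len(e)))
        if not cands:
            return current, best
        cand = max(cands, key=lambda b: score(b, e, f))
        s = score(cand, e, f)
        if s <= best:
            return current, best
        current, best = cand, s
```

Starting the climb from the Viterbi alignment of a simpler model, as the transfer of parameters from model 2 to model 3 suggests, is one natural choice of the initial `a`.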
lexical semantic techniques for corpus analysis in this paper we outline a research program for computational linguistics making extensive use of text corpora we demonstrate how a semantic framework for lexical knowledge can suggest richer relationships among words in text beyond that of simple cooccurrence the work suggests how linguistic phenomena such as metonymy and polysemy might be exploitable for semantic tagging of lexical items unlike with purely statistical collocational analyses the framework of a semantic theory allows the automatic construction of predictions about deeper semantic among words appearing in systems illustrate the approach for the acquisition of lexical information for several classes of nominals and how such techniques can finetune the lexical structures acquired from an initial seeding of a machinereadable dictionary in addition to conventional lexical semantic relations we show how information concerning lexical presuppositions and preference relations can also be acquired from corpora when analyzed with the appropriate semantic tools finally we discuss the potential that corpus studies have for enriching the data set for theoretical linguistic research as well as helping to confirm or disconfirm linguistic hypotheses in this paper we outline a research program for computational linguistics making extensive use of text corporawe demonstrate how a semantic framework for lexical knowledge can suggest richer relationships among words in text beyond that of simple cooccurrencethe work suggests how linguistic phenomena such as metonymy and polysemy might be exploitable for semantic tagging of lexical itemsunlike with purely statistical collocational analyses the framework of a semantic theory allows the automatic construction of predictions about deeper semantic relationships among words appearing in collocational systemswe illustrate the approach for the acquisition of lexical information for several classes of nominals and how such techniques can finetune the lexical structures acquired from an initial seeding of a machinereadable dictionaryin addition to conventional lexical semantic relations we show how information concerning lexical presuppositions and preference relations can also be acquired from corpora when analyzed with the appropriate semantic toolsfinally we discuss the potential that corpus studies have for enriching the data set for theoretical linguistic research as well as helping to confirm or disconfirm linguistic hypothesesthe proliferation of online textual information poses an interesting challenge to linguistic researchers for several reasonsfirst it provides the linguist with sentence and word usage information that has been difficult to collect and consequently largely ignored by linguistssecond it has intensified the search for efficient automated indexing and retrieval techniquesfulltext indexing in which all the content words in a document are used as keywords is one of the most promising of recent automated approaches yet its mediocre precision and recall characteristics indicate that there is much room for improvement the use of domain knowledge can enhance the effectiveness of a fulltext system by providing related terms that can be used to broaden narrow or refocus a query at retrieval time or content analysis unfortunately for many domains such knowledge even in the form of a thesaurus is either not available or is incomplete with respect to the vocabulary of the texts indexedin this paper we examine how linguistic phenomena such as 
metonymy and polysemy might be exploited for the semantic tagging of lexical itemsunlike purely statistical collocational analyses employing a semantic theory allows for the automatic construction of deeper semantic relationships among words appearing in collocational systemswe illustrate the approach for the acquisition of lexical information for several classes of nominals and how such techniques can finetune the lexical structures acquired from an initial seeding of a machinereadable dictionaryin addition to conventional lexical semantic relations we show how information concerning lexical presuppositions and preference relations can also be acquired from corpora when analyzed with the appropriate semantic toolsfinally we discuss the potential that corpus studies have for enriching the data set for theoretical linguistic research as well as helping to confirm or disconfirm linguistic hypothesesthe aim of our research is to discover what kinds of knowledge can be reliably acquired through the use of these methods exploiting as they do general linguistic knowledge rather than domain knowledgein this respect our program is similar to zernik work on extracting verb semantics from corpora using lexical categoriesour research however differs in two respects first we employ a more expressive lexical semantics second our focus is on all major categories in the language and not just verbsthis is important since for fulltext information retrieval information about nominals is paramount as most queries tend to be expressed as conjunctions of nounsfrom a theoretical perspective we believe that the contribution of the lexical semantics of nominals to the overall structure of the lexicon has been somewhat neglected relative to that of verbswhile zernik presents ambiguity and metonymy as a potential obstacle to effective corpus analysis we believe that the existence of motivated metonymic structures actually provides valuable clues for semantic analysis of nouns in a corpuswe will assume for this paper the general framework of a generative lexicon as outlined in pustejovsky in particular we make use of the principles of type coercion and qualia structurethis model of semantic knowledge associated with words is based on a system of generative devices that is able to recursively define new word senses for lexical items in the languagethese devices and the associated dictionary make up a generative lexicon where semantic information is distributed throughout the lexicon to all categoriesthe general framework assumes four basic levels of semantic description argument structure qualia structure lexical inheritance structure and event structureconnecting these different levels is a set of generative devices that provide for the compositional interpretation of words in contextthe most important of these devices is a semantic transformation called type coercionanalogous to coercion in programming languageswhich captures the semantic relatedness between syntactically distinct expressionsas an operation on types within a acalculus type coercion can be seen as transforming a monomorphic language into one with polymorphic types argument event and qualia types must conform to the wellformedness conditions defined by the type system defined by the lexical inheritance structure when undergoing operations of semantic compositionone component of this approach the qualia structure specifies the different aspects of a word meaning through the use of subtypingthese include the subtypes constitutive formal telic and 
agentiveto illustrate how these are used the qualia structure for book is given below2 this structured representation allows one to use the same lexical entry in different contexts where the word refers to different qualia of the noun denotationfor example the sentences in below refer to different aspects of the general meaning of book3 example 1 this book weighs four ouncesexample 2 john finished a bookthis is an interesting bookexample 1 makes reference to the formal role while 3 refers to the constitutive roleexample 2 however can refer to either the telic or the agentive aspects given abovethe utility of such knowledge for information retrieval is readily apparentthis theory claims that noun meanings should make reference to related concepts and the relations into which they enterthe qualia structure thus can be viewed as a kind of generic template for structuring this knowledgesuch information about how nouns relate to other lexical items and their concepts might prove to be much more useful in fulltext information retrieval than what has come from standard statistical techniquesto illustrate how such semantic structuring might be useful consider the general class of artifact nounsa generative view of the lexicon predicts that by classifying an element into a particular category we can generate many aspects of its semantic structure and hence its syntactic behaviorfor example the representation above for book refers to several word senses all of which are logically related by the semantic template for an artifactual objectthat is it contains information it has a material extension it serves some function and it is created by some particular act or eventin the qualia structures given below we adopt the convention that a 0 denotes conjunction of formulas within the feature structure while a 0 will denote disjunctionsuch an analysis allows us to minimally structure objects according to these four qualiaas an example of how objects cluster according to these dimensions we will briefly consider three object types containers eg book tape record instruments eg gun hammer paintbrush and figureground objects eg door room fireplacebecause of how their qualia structures differ these classes appear in vastly different grammatical contextsas with containers in general information containers permit metonymic extensions between the container and the material contained within itcollocations such as those in examples 4 through 7 indicate that this metonymy is grammaticalized through specific and systematic headpp constructions read the information on the tape instruments on the other hand display classic agentinstrument causative alternations such as those in examples 8 through 11 smash the vase with the hammer the hammer smashed the vase kill him with a gun the gun killed himfinally figureground nominals permit perspective shifts such as those in examples 12 through 15these are nouns that refer to physical objects as well as the specific enclosure or aperture associated with itjohn painted the doorjohn walked through the doorjohn is scrubbing the fireplacethe smoke filled the fireplacethat is paint and scrub are actions on physical objects while walk through and fi are processes in spacesthese collocational patterns we argue are systematically predictable from the lexical semantics of the noun and we term such sets of collocated structures lexical conceptual paradigms 4 to make this point clearer let us consider a specific example of an lcp from the computer science domain namely for the noun 
tapebecause of the particular metonymy observed for a noun like tape we will classify it as belonging to the containercontainee lcpthis general class is represented as follows where p and q are predicate variables5 the lcp is a generic qualia structure that captures not only the semantic relationship between arguments types of a relation but also through corpustuning the collocation relations that realize these rolesthe telic function of a container for example is the relation hold but this underspecifies which spatial prepositions would adequately satisfy this semantic relation in this view a noun such as tape would have the following qualia structure this states that a tape is an quotinformation containerquot that is also a twodimensional physical object where the information is written onto the objectwith such nouns a logical metonymy exists when the logical argument of a semantic type which is selected by a function of some sort denotes the semantic type itselfthus in this example the type selected for by a verb such as read refers to the quotinformationquot argument for tape while a verb such as carry would select for the quotphysical objectquot argumentthey are however logically related since the noun itself denotes a relationthe representation above simply states that any semantics for tape must logically make reference to the object itself what it can contain what purpose it serves and how it arises this provides us with a semantic representation that can capture the multiple perspectives a single lexical item may assume in different contextsyet the qualia for a lexical item such as tape are not isolated values for that one word but are integrated into a global knowledge base indicating how these senses relate to other lexical items and their sensesthis is the contribution of inheritance and the hierarchical structuring of knowledge in pustejovsky it is suggested that there are two types of relational structures for lexical knowledge a fixed inheritance similar to that of an isa hierarchy and a dynamic structure that operates generatively from the qualia structure of a lexical item to create a relational structure for ad hoc categoriesreviewing briefly the basic idea is that semantics allows for the dynamic creation of arbitrary concepts through the application of certain transformations to lexical meaningsthus for every predicate q we can generate its opposition qsimilarly these two predicates can be related temporally to generate the transition events defining this oppositionthese operations include but may not be limited to negation temporal succession temporal equivalence and act an operator adding agency to an argumentwe will call the concept space generated by these operations the projective conclusion space of a specific quale for a lexical itemto return to the example of tape above the predicates read and copy are related to the telic value by just such an operation while predicates such as mount and dismountie unmountare related to the formal rolefollowing the previous discussion with mounted as the predicate q successive applications of the negation and temporal precedence operators derives the transition verbs mount and dismountwe return to a discussion of this in section 3 and to how this space relates to statistically significant collocations in textit is our view that the approach outlined above for representing lexical knowledge can be put to use in the service of information retrieval tasksin this respect our proposal can be compared to attempts at object 
classification in information scienceone approach known as faceted classification proceeds roughly as follows collect all terms lying within a field then group the terms into facets by assigning them to categoriestypical examples of this are state property reaction and devicehowever each subject area is likely to have its own sets of categories which makes it difficult to reuse a set of facet classifications9 even if the relational information provided by the qualia structure and inheritance would improve performance in information retrieval tasks one problem still remains namely that it would be very timeconsuming to handcode such structures for all nouns in a domainsince it is our belief that such representations are generic structures across all domains it is our longterm goal to develop methods for automatically extracting these relations and values from online corporain the sections that follow we describe several experiments indicating that the qualia structures do in fact correlate with wellbehaved collocational patterns thereby allowing us to perform structurematching operations over corpora to find these relationsin this section we discuss briefly how a lexical semantic theory can help in extracting information from machinereadable dictionaries we describe research on conversion of a machinetractable dictionary into a usable lexical knowledge base although the results here are preliminary it is important to mention the process of converting an mrd into a lexical knowledge base so that the process of corpustuning is put into the proper perspectivethe initial seeding of lexical structures is being done independently both from the oxford advanced learners dictionary and from lexical entries in the longman dictionary of contemporary english these are then automatically adapted to the format of generative lexical structuresit is these lexical structures that are then statistically tuned against the corpus following the methods outlined in anick and pustejovsky and pustejovsky previous work by amsler calzolari chodorow byrd and heidorn byrd et al markowitz ahlswede and evens and nakamura and nagao showed that taxonomic information and certain semantic relations can be extracted from mrds using fairly simple techniqueslater work by veronis and ide klavans chodorow and wacholder and wilks et al provides us with a number of techniques for transfering information from mrds to a representation language such as that described in the previous sectionour goal is to automate to the extent possible the initial construction of these structuresextensive research has been done on the kind of information needed by natural language programs and on the representation of that information following boguraev et al and wilks et al of 1989 we believe that much of what is needed for nlp lexicons can be found either explicitly or implicitly in a dictionary and empirical evidence suggests that this information gives rise to a sufficiently rich lexical representation for use in extracting information from textstechniques for identifying explicit information in machinereadable dictionaries have been developed by many researchers and are well understoodmany properties of a word sense or the semantic relationships between word senses are available in mrds but this information can only be identified computationally through some analysis of the definition text of an entry some research has already been done in this areaalshawi boguraev et al vossen meijs and den broeder and the work described in wilks et al have 
made explicit some kinds of implicit information found in mrdshere we propose to refine and merge some of the previous techniques to make explicit the implicit information specified by a theory of generative lexiconsgiven what we described above for the lexical structures for nominals we can identify these semantic relations in the oald and ldoce by pattern matching on the parse trees of definitionsto illustrate what specific information can be derived by automatic seeding from machinereadable dictionaries consider the following examples1 for example the ldoce definition for book is quota collection of sheets of paper fastened together as a thing to be read or to be written inquot while the oald provides a somewhat different definition quotnumber of sheet of papers either printed or blank fastened together in a coverquot note that both definitions are close to but not identical to the information structure suggested in the previous section using a qualia structure for nominalsldoce suggests write in rather than write as the value for the telic role while the oald suggests nothing for this rolefurthermore although the physical contents of a book as quota collection of sheets of paperquot is mentioned nowhere is information made reference to in the definitionwhen the dictionary fails to provide the value for a semantic role the information must be either handentered or the lexical structure must be tuned against a large corpus in the hope of extracting such features automaticallywe turn to this issue in the next two sectionsalthough the two dictionaries differ in substantial respects it is remarkable how systematic the definition structures are for extracting semantic information if there is a clear idea how this information should be structuredfor example from the following oald definition for cigarette cigarette n roll of shredded tobacco enclosed in thin paper for smoking the initial lexical structure below is generatedparsing the ldoce entry for the same noun results in a different lexical structure cigarette n finely cut shredded tobacco rolled in a narrow tube of thin paper for smoking gls one obvious problem with the above representation is that there is no information indicating how the word being defined binds to the relations in the qualiacurrently subsequent routines providing for argument binding analyze the relational structure for particular aspects of noun meaning giving us a lexical structure fairly close to what we need for representation and retrieval purposes although the result is in no way ideal or uniform over all nominal formsquot in a related set of experiments performed while constructing a large lexical database for data extraction purposes we seeded a lexicon with 6000 verbs from ldocethis process and the corpus tuning for both argument typing and subcategorization acquisition are described in cowie guthrie and pustejovsky and pustejovsky et al in summary based on a theory of lexical semantics we have discussed how an mrd can be useful as a corpus for automatically seeding lexical structuresrather than addressing the specific problems inherent in converting mrds into useful lexicons we have emphasized how it provides us in a sense with a generic vocabulary from which to begin lexical acquisition over corporain the next section we will address the problem of taking these initial and often very incomplete lexical structures and enriching them with information acquired from corpus analysisas mentioned in the previous section the power of a generative lexicon is that 
it takes much of the burden of semantic interpretation off of the verbal system by supplying a much richer semantics for nouns and adjectivesthis makes the lexical structures ideal as an initial representation for knowledge acquisition and subsequent information retrieval tasksa machinereadable dictionary provides the raw material from which to construct computationally useful representations of the generic vocabulary contained within itthe lexical structures discussed in the previous section are one example of how such information can be exploitedmany sublanguages however are poorly represented in online dictionaries if represented at allvocabularies geared to specialized domains will be necessary for many applications such as text categorization and information retrievalthe second area of our research program that we discuss is aimed at developing techniques for building sublanguage lexicons via syntactic and statistical corpus analysis coupled with analytic techniques based on the tenets of generative lexicon theoryto understand fully the experiments described in the next two sections we will refer to several semantic notions introduced in previous sectionsthese include type coercion where a lexical item requires a specific type specification for its argument and 11 as one reviewer correctly pointed out more than simple argument binding is involved herefor example the model must know that paper can enclose shredded tobacco but not the reversesuch information typically part of commonsense knowledge is well outside the domain of lexical semantics as envisioned hereone approach to this problem consistent with our methodology is to examine the corpus and the collocations that result from training on specific qualia relationsfurther work will hopefully clarify the nature of this problem and whether it is best treated lexically or not the argument is able to change type accordinglythis explains the behavior of logical metonymy and the syntactic variation seen in complements to verbs and nominals and cospecification a semantic tagging of what collocational patterns the lexical item may enter intometonymy in this view can be seen as a case of the quotlicensed violationquot of selectional restrictionsfor example while the verb announce selects for a human subject sentences like the dow corporation announced third quarter losses are not only an acceptable paraphrase of the selectionally correct form mr dow jr announced third quarter losses for dow corp but they are the preferred form in the corpora being examinedthis is an example of subject type coercion where the semantics for dow corp as a company must specify that there is a human typically associated with such official pronouncements 12 for one set of experiments we used a corpus of approximately 3000 articles written by digital equipment corporation customer support specialists for an online computer troubleshooting librarythe articles each one to twopage long descriptions of a problem and its solution comprise about 1 million wordsour analysis proceeds in two phasesin the first phase we preprocess the corpus to build a database of phrasal relationshipsthis consists briefly of the following steps indicatorsany words that are ambiguous with respect to category are disambiguated according to a set of several dozen ordered disambiguation heuristics which choose a category based on the categories of the words immediately preceding and following the ambiguous term transitions to indicate likely phrase boundariesno attempt is made to construct a 
full parse tree or resolve prepositional phrase attachment conjunction scoping etca concordance is constructed identifying for each word appearing in the corpus the set of sentences phrases and phrase locations in which the word appears12 within the current framework a distinction is made between logical metonymy where the metonymic extension or relation is transparent from the lexical semantics of the coerced phrase and conventional metonymy where the relation may not be directly calculated from information provided grammaticallyfor example in the sentence quotthe boston office called todayquot it is not clear from logical metonymy what relation boston bears to office other than location ie it is not obvious that it is a branch officethis is well beyond lexical semantics the database of partially parsed sentences provides the raw material for a number of sublanguage analysesthis begins the second phase of analysis querying and thesaurus browsingwe construct bracketed noun compounds from our database of partial parses in a twostep processthe first simply searches the corpus for contiguous sequences of nounsthen to bracket each compound that includes more than two nouns we test whether possible subcomponents of the phrase exist on their own elsewhere in the corpussample bracketed compounds derived from the computer troubleshooting database include syst them management utility tk50 tape drive database management system2generation of taxonomic relationships on the basis of collocational informationtechnical sublanguages often express subclass relationships in noun compounds of the form as in quotunix operating systemquot and quotc languagequot unfortunately noun compounds are also employed to express numerous other relationships as in quotunix kernelquot and quotc debuggerquot we have found however that collocational evidence can be employed to suggest which noun compounds reflect taxonomic relationships using a strategy similar to that employed by hindle for detecting synonymsgiven a term t we extract from the phrase database those nouns n that appear as the head of any phrase in which t is the immediately preceding termthese nouns represent candidate classes of which t may be a memberwe then generate the set of verbs that take t as direct object and calculate the mutual information value for each verbt collocation we do the same for each noun n under the assumption that instance and class nouns are likely to cooccur with the same verbs we compute a similarity score between t and each noun n by summing the product of the mutual information values for those verbs occurring with both nounsthe noun with the highest similarity score is often the class of which t is an instance as illustrated by the sample results in figure 1for each word displayed in figure 1 its quotclassquot is the head noun with the highest similarity scoreother head nouns occurring with the word as modifier are listed as wellas with all the automated procedures described here this algorithm yields useful but imperfect resultsthe class chosen for quotvmsquot for example is incorrect and may reflect the fact that in a dec troubleshooting database authors see no need to further specify vms as quotvms operating systemquot a more interesting observation is that among the collocations associated with the terms there are often several that might qualify as classes of which the term is an instance eg decwindows could also be classified as quotsoftwarequot tk50 might also qualify as quottapequot from a generative lexicon perspective 
these alternative classifications reflect multiple inheritance through the noun qualiathat is quotcartridgequot is further specifying the formal role of tape for tk50decwindows is functionally an quotenvironmentquot its telic role while quotsoftwarequot characterizes its formal quale3extraction of information relating to noun qualiaunder certain circumstances it may be possible to elicit information about a noun qualia from automated procedures on a corpusin this line of research we hayed employed the notion of quotlexical conceptual paradigmquot described abovean lcp relates a set of syntactic behaviors to the lexical semantic structures of the participating lexical itemsfor example the set of expressions involving the word quottapequot in the context of its use as a secondary storage device suggests that it fits the container artifact schema of the qualia structure with quotinformationquot and quotfilequot as its containees as mentioned in section 1 containers tend to appear as objects of the prepositions to from in and on as well as in direct object position in which case they are typically serving metonymically for the containeethus the container lcp relates the set of generalized syntactic patterns v ni to from on nk vi n this lcp includes a nominal alternation between the container and containee in the object position of verbsfor tape this alternation is manifested for verbs that predicate the telic role of data storage but not the formal role of physical object which refers to the object as a whole regardless of its contents we have explored the use of heuristics to distinguish those predicates that relate to the telic quale of the nounconsider the word tape which occurs as the direct object in 107 sentences in our corpusit appears with a total of 34 different verbsby applying the mutual information metric to the verbobject pairs we can sort the verbs accordingly giving us the table of verbs most highly associated with tape shown in figure 2while the mutual information statistic does a good job of identifying verbs that semantically relate to the word tape it provides no information about how the verbs relate to the noun qualia structurethat is verbs such as unload position and mount are selecting for the formal quale of tape a physical object that can be physically manipulated with respect to a tape driveread write and copy on the other hand relate to the telic role the function of a tape as a medium for storing informationour hypothesis was that the nominal alternation can help to distinguish the two sets of verbswe reasoned that if the alternation is based on the containercontainee metonymy then it will be those verbs that apply to the telic role of the direct object that participate in the alternationwe tested this hypothesis as followswe generated a candidate set of containees for tape by identifying all the nouns that appeared in the corpus to the left of the adjunct on tapeintersection and set difference for three container nounsthen we took the set of verbs that had one of these containee nouns as a direct object and compared this set to the set of verbs that had the container noun tape as a direct object in the corpusaccording to our hypothesis verbs applying to the telic role should appear in the intersection of these two sets while those applying to the formal role will appear in the set difference verbs with containers as direct objectverbs with containees as direct objectthe difference operation should serve to remove any verbs that cooccur with containee objectsfigure 
3 shows the results of intersection and set difference for three container nouns tape disk and directorythe results indicate that the container lcp is able to differentiate nouns with respect to their telic and formal qualia for the nouns tape and disk but not for directorythe poor discrimination in the latter case can be attributed to the fact that a directory is a recursive containera directory contains files and a directory is itself a filetherefore verbs that apply to the formal role of directory are likely to apply to the formal role of objects contained in directories this can be seen as a shortcoming of the container lcp for the task at hand but may be a useful way of diagnosing when containers contain objects functionally similar to themselvesthe result of this corpus acquisition procedure is a kind of minimal faceted analysis for the noun tape as illustrated below showing only the qualia that are relevant to the discussion13 because the technique was sensitive to grammatical position of the object np the argument can be bound to the appropriate variable in the relation expressed in the qualiait should be pointed out that these qualia values do not carry event place variables since such discrimination was beyond the scope of this experimentwhat is interesting about the qualia values is how close they are to the concepts in the projective conclusion space of tape as mentioned in section 1to illustrate this procedure on another semantic category consider the term mouse in its computer artifact sensein our corpus it appears in the object position of the verb use in a quotuse np toquot construction as well as the object of the preposition with following a transitive verb and its object these constructions are symptomatic of its role as an instrument and the vp complement of to as well as the vp dominating the withpp identify the telic predicates for the nounother verbs for which mouse appears as a direct object are currently defaulted into the formal role resulting in an entry for mouse as follows the above experiments have met with limited success enough to warrant continuing our application of lexical semantic theory to knowledge acquisition from corpora but not enough to remove the human from the loopas they currently exist the algorithms described here can be used as tools to help the knowledge engineer extract useful information from online textual sources and in some applications may provide a useful way to heuristically organize sublanguage terminology when human resources are unavailablethe purpose of the research described in this section is to experiment with the automatic acquisition of semantic tags for words in a sublanguage tags well beyond that available from the seeding of mrdsthe identification of semantic tags is the result of type coercion on known syntactic forms to induce a semantic feature such as event or objecta pervasive example of type coercion is seen in the complements of aspectual verbs such as begin and finish and verbs such as enjoythat is in sentences such as quotjohn began the bookquot the normal complement expected is an action or event of some sort most often expressed by a gerundive or infinitival phrase quotjohn began reading the bookquot quotjohn began to read the bookquot in pustejovsky it was argued that in such cases the verb need not have multiple subcategorizations but only one deep semantic type in this case an eventthus the verb coerces its complement into an event related to that objectsuch information can be represented by means of a 
representational schema called qualia structure which among other things specifies the relations associated with objectscounts for objects of beginvin related work being carried out with mats rooth of the university of stuttgart we are exploring what the range of coercion types is and what environments they may appear in as discovered in corporasome of our initial data suggest that the hypothesis of deep semantic selection may in fact be correct as well as indicating what the nature of the coercion rules may beusing techniques described in church and hindle church and hanks and hindle and rooth figure 4 shows some examples of the most frequent v0 pairs from the ap corpuscorpus studies confirm similar results for quotweakly intensional contextsquot such as the complement of coercive verbs such as vetothese are interesting because regardless of the noun type appearing as complement it is embedded within a semantic interpretation of quotthe proposal toquot thereby clothing the complement within an intensional contextthe examples in figure 5 with the verb veto indicate two things first that such coercions are regular and pervasive in corpora second that almost anything can be vetoed but that the most frequently occurring objects are closest to the type selected by the verbwhat these data show is that the highest count complement types match the type required by the verb namely that one vetoes a bill or proposal to do something not the thing itselfthese nouns can therefore be used with some predictive certainty for inducing the semantic type in coercive environments such as quotveto the expeditionquot this work is still preliminary however and requires further examination in this section we present another experiment indicating the feasibility of inducing semantic tags for lexical items from corporaimagine being able to take the v0 pairs counts for objects of vetov such as those given in section 41 and then applying semantic tags to the verbs that are appropriate to the role they play for that object this is similar to the experiment reported on in section 3here we apply a similar technique to a much larger corpus in order to induce the agentive role for nouns that is the semantic predicate associated with bringing about the objectin this example we look at the behavior of noun phrases and the prepositional phrases that follow themin particular we look at the cooccurrence of nominals with between with and totable 1 shows results of the conflating noun plus preposition patternsthe percentage shown indicates the ratio of the particular collocation to the key wordmutual information statistics for the two words in collocation are also shownwhat these results indicate is that induction of semantic type from conflating syntactic patterns is possiblebased on the semantic types for these prepositions the syntactic evidence suggests that there is an equivalence class where each preposition makes reference to a symmetric relation between the arguments in the following two patterns we then take these results and for those nouns where the association ratios for n with and n between are similar we pair them with the set of verbs governing these quotnp ppquot combinations in corpus effectively partitioning the original v0 set into agentive predicates and agentive predicatesthese are semantic ngrams rather than direct interpretations of the prepositionswhat these expressions in effect indicate is the range of semantic environments they will appear inthat is in sentences like those in example 16 the force of 
the relational nouns agreement and talks is that they are unsaturated for the predicate bringing about this relationin 17 on the other hand the nps headed by agreement and talks are saturated in this respectif our hypothesis is correct we expect that verbs governing nominals collocated with a withphrase will be mostly those predicates referring to the agentive quale of the nominalthis is because the withphrase is unsaturated as a predicate and acts to identify the agent of the verb as its argument this is confirmed by our data shown in figure 6conversely verbs governing nominals collocating with a betweenphrase will not refer to the agentive since the phrase is saturated alreadyindeed the only verb occurring in this position with any frequency is the copula be namely with the following counts 12 bev ventureothus weak semantic types can be induced on the basis of syntactic behaviorthere is a growing literature on corpusbased acquisition and tuning we share with these researchers a general dependence on wellbehaved collocational patterns and distributional structuresprobably the main distinguishing feature of our approach is its reliance on a fairly well studied semantic framework to aid and guide the semantic induction process itself whether it involves selectional restrictions or semantic typesin the previous section we presented algorithms for extracting collocational information from corpora in order to supplement and finetune the lexical structures seeded by a machinereadable dictionaryin this section we demonstrate that in addition to conventional lexical semantic relations it is also possible to acquire information concerning lexical presuppositions and preferences from corpora when analyzed with the appropriate semantic toolsin particular we will discuss a phenomenon we call discourse polarity and how corpusbased experiments provide clues toward the representation of this phenomenon as well as information on preference relationsas we have seen providing a representational system for lexical semantic relations is a nontrivial taskrepresenting presuppositional information however is even more dauntingnevertheless there are some systematic semantic generalizations associated with such subtle lexical inferencesto illustrate this consider the following examples taken from the wall street journal corpus involving the verb insistbut the bnl sources yesterday insisted that the head office was aware of only a small portion of the credits to iraq made by atlantamr smale who ordinarily insists on a test market before a national rollout told the team to go aheadalthough he said he was skeptical that pringle could survive mr tucker saysthe cantonese insist that their fish be quotfreshquot though one whiff of hong kong harbor and the visitor may yearn for something shipped from distant seasexample 25 money is not the issue mr bush insistsfrom analyzing these and similar data a pattern emerges concerning the use of verbs like insist in discourse namely the cooccurrence with discourse markers denoting negative affect such as although and but as well as literal negatives eg no and notthis is reminiscent of the behavior of negative polarity items such as any more and at allsuch lexical items occur only in the context of negatives within a certain structural configurationin a similar way verbs such as insist seem to require an overt or implicit negation within the immediate discourse context rather than within the clausefor this reason we will call such verbs discourse polarity itemsfor our purposes 
the significance of such data is twofold first experiments on corpora can test and confirm linguistic intuitions concerning a subtle semantic judgment second if such knowledge is in fact so systematic then it must be at least partially represented in the lexical semantics of the verbto test whether the intuitions supported by the above data could be confirmed in corpora bergler derived the statistical cooccurrence of insist with discourse polarity markers in the 7 millionword corpus of wall street journal articlesshe derived the statistics reported in figure 7let us assume on the basis of this preliminary date presented in bergler that these verbs in fact do behave as discourse polarity itemsthe question theninsist 586 occurrences throughout the corpus insist on 109 these have been cleaned by hand and are actually occurrences of the idiom insist on rather than accidental cooccurrences insist but 117 occurrences of both insist and but in the same sentence insist negation 186 includes not and nt insist sr subjunctive 159 includes would could should and be negative markers with insist in wsjc immediately arises as to how we represent this type of knowledgeusing the language of the qualia structure discussed above we can make explicit reference to the polarity behavior in the following informal but intuitive representation for the verb insistthis entry states that in the reportingverb sense of the word insist is a relation between an individual and a statement that is the negation of a proposition p presupposed in the context of the utteranceas argued in pustejovsky and miller and fellbaum such simple oppositional predicates form a central part of our lexicalization of conceptssemantically motivated collocations such as these extracted from large corpora can provide presuppositional information for words that would otherwise be missing from the lexical semantics of an entrywhile full automatic extraction of semantic collocations is not yet feasible some recent research in related areas is promisinghindle reports interesting results of this kind based on literal collocations where he parses the corpus into predicateargument structures and applies a mutual information measure to weigh the association between the predicate and each of its argumentsfor example as a list of the most frequent objects for the verb drink in his corpus hindle found beer tea pepsi and champagnebased on the distributional hypothesis that the degree of shared contexts is a similarity measure for words he develops a similarity metric for nouns based on their substitutability in certain verb contextshindle thus finds sets of semantically similar nouns based on syntactic cooccurrence datathe sets he extracts are promising for example the ten most similar nouns to treaty in his corpus are agreement plan constitution contract proposal accord amendment rule law and legislationthis work is very close in spirit to our own investigation here the emphasis on syntactic cooccurrence enables hindle to extract his similarity lists automatically they are therefore easy to compile for different corpora different sublanguages etchere we are attempting to use these techniques together with a model of lexical meaning to capture deeper lexical semantic collocations eg the generalization that the list of objects occurring for the word drink contains only liquidsin the final part of this section we turn to how the analysis of corpora can provide lexical semantic preferences for verb selectionas discussed above there is a growing body of 
research on deriving collocations from corpora here we employ the tools of semantic analysis from section 1 to examine the behavior of metonymy with reporting verbswe will show on the basis of corpus analysis how verbs display marked differences in the ability to license metonymic operations over their argumentssuch information we argue is part of the preference semantics for a sublanguage as automatically derived from corpusmetonymy can be seen as a case of quotlicensed violationquot of selectional restrictionsfor example while the verb announce selects for a human subject sentences like the phantasie corporation announced third quarter losses are not only an acceptable paraphrase of the selectionally correct form mr phantasie jr announced third quarter losses for phantasie corp but they are the preferred form in the wall street journalthis is an example of subject type coercion as discussed in section 1for example the qualia structure for a noun such as corporation might be represented as below the metonymic extension in this example is straightforward a spokesman executive or otherwise legitimate representative quotspeaking forquot a company or institution can be metonymically replaced by that company or institutionwe find that this type of metonymic extension for the subject is natural and indeed very frequent with reporting verbs bergler such as announce report release and claim while it is in general not possible with other verbs selecting human subjects eg the verbs of contemplation however there are subtle differences in the occurrence of such metonymies for the different members of the same semantic verb class that arise from corpus analysisa reporting verb is an utterance verb that is used to relate the words of a sourcein a careful study of seven reporting verbs on a 250000word corpus of time magazine articles from 1963 we found that the preference for different metonymic extensions varies considerably within this field figure 8 shows the findings for the words insist deny admit claim announce said and told for two metonymic extensions namely where a group stands for an individual and where a company or other institution stands for the individual 19 the difference in patterns of metonymic behavior is quite striking semantically similar verbs seem to pattern similarly over all three categories admit insist and deny show a closer resemblance to each other than to any of the others while said and preference for metonymies for said in a 160000word fragment of the wall street journal corpus told form a category by themselvesthere may be a purely semantic explanation why said and told seem not to prefer the metonymic use in subject position eg perhaps these verbs relate more closely to the act of uttering or perhaps they are too informal stylisticallyevidence from other corpora however suggests that such information is accurately characterized as lexical preferencean initial experiment on a subset of the wall street journal corpus for example shows that said has a quite different metonymic distribution there reported in figure 9in this corpus we discovered that subject selection for an individual person appeared in only 50 of the sentences while a companyinstitution appeared in 34 of the casesthis difference could either be attributed to a difference in style between time magazine and the wall street journal or perhaps to a difference in general usage between 1963 and 1989the statistics presented here can of course not determine the reason for the difference but rather help establish 
the lexical semantic preferences that exist in a certain corpus and sublanguagean important question related to the extraction of preference information is what the corpus should berecent effort has been spent constructing balanced corpora containing text from different styles and sources such as novels newspaper texts scientific journal articles etcthe assumption is of course that given a representative mix of samples of language use we can extract the general properties and usage of wordsbut if we gain access to sophisticated automatic corpus analysis tools such as those discussed above and indeed if we have specialized algorithms for sublanguage extraction then homogeneous corpora might provide better datathe few examples of lexical preference mentioned in this section might not tell us anything conclusive for the definitive usage of a word such as said if there even exists such a notionnevertheless the statistics provide an important tool for text analysis within the corpus from which they are derivedbecause we can systematically capture the violation of selectional restrictions there is no need for a text analysis system to perform extensive commonsense inferencingthus such presupposition and preference statistics are vital to efficient processing of real textin this paper we have presented a particularly directed program of research for how text corpora can contribute to linguistics and computational linguisticswe first presented a representation language for lexical knowledge the generative lexicon and demonstrated how it facilitates the structuring of lexical relations among words looking in particular at the problems of metonymy and polysemysuch a framework for lexical knowledge suggests that there are richer relationships among words in text beyond that of simple cooccurrence that can be extracted automaticallythe work suggests how linguistic phenomena such as metonymy and polysemy might be exploited for knowledge acquisition for lexical itemsunlike purely statistical collocational analyses the framework of a semantic theory allows the automatic construction of predictions about deeper semantic relationships among words appearing in collocational systemswe illustrated the approach for the acquisition of lexical information for several classes of nominals and how such techniques can finetune the lexical structures acquired from an initial seeding of a machinereadable dictionaryin addition to conventional lexical semantic relations we then showed how information concerning lexical presuppositions and preference relations can also be acquired from corpora when analyzed with the appropriate semantic toolsin conclusion we feel that the application of computational resources to the analysis of text corpora has and will continue to have a profound effect on the direction of linguistic and computational linguistic researchunlike previous attempts at corpus research the current focus is supported and guided by theoretical tools and not merely statistical techniqueswe should furthermore welcome the ability to expand the data set used for the confirmation of linguistic hypothesesat the same time we must remember that statistical results themselves reveal nothing and require careful and systematic interpretation by the investigator to become linguistic datathis research was supported by darpa contract mda90491c9328we would like to thank scott waterman for his assistance in preparing the statisticswe would also like to thank mats rooth scott waterman and four anonymous reviewers for useful 
comments and discussion
J93-2005
lexical semantic techniques for corpus analysisin this paper we outline a research program for computational linguistics making extensive use of text corporawe demonstrate how a semantic framework for lexical knowledge can suggest richer relationships among words in text beyond that of simple cooccurrencethe work suggests how linguistic phenomena such as metonymy and polysemy might be exploitable for semantic tagging of lexical itemsunlike with purely statistical collocational analyses the framework of a semantic theory allows the automatic construction of predictions about deeper semantic relationships among words appearing in collocational systemswe illustrate the approach for the acquisition of lexical information for several classes of nominals and how such techniques can finetune the lexical structures acquired from an initial seeding of a machinereadable dictionaryin addition to conventional lexical semantic relations we show how information concerning lexical presuppositions and preference relations can also be acquired from corpora when analyzed with the appropriate semantic toolsfinally we discuss the potential that corpus studies have for enriching the data set for theoretical linguistic research as well as helping to confirm or disconfirm linguistic hypotheseswe present an interesting framework for the acquisition of semantic relations from corpora not only relying on statistics but guided by theoretical lexicon principleswe show how statistical techniques such as mutual information measures can contribute to automatically acquire lexical information regarding the link between a noun and a predicatewe use generalized syntactic patterns for extracting qualia structures from a partially parsed corpus
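To make the two corpus heuristics described in the preceding paper concrete, the short Python sketch below shows (1) ranking verbs by pointwise mutual information with a target noun in direct-object position, and (2) splitting those verbs into telic and formal qualia candidates via the container/containee intersection and set difference. This is an illustrative sketch only, not the authors' implementation; the word lists, counts, and names (pairs, pmi, containees) are invented for the example.

```python
# A minimal sketch (assumptions, not the original system) of two heuristics from the
# paper above: PMI ranking of verb-object pairs, and the container-LCP set heuristic.
import math
from collections import Counter

# Hypothetical (verb, direct-object) pairs extracted from a parsed corpus.
pairs = [
    ("read", "tape"), ("write", "tape"), ("copy", "tape"), ("mount", "tape"),
    ("unload", "tape"), ("read", "file"), ("write", "file"), ("copy", "file"),
    ("delete", "file"), ("mount", "drive"),
]

verb_counts = Counter(v for v, _ in pairs)
noun_counts = Counter(n for _, n in pairs)
pair_counts = Counter(pairs)
total = len(pairs)

def pmi(verb: str, noun: str) -> float:
    """Pointwise mutual information of a verb-object pair: log2 p(v,n) / (p(v) p(n))."""
    p_vn = pair_counts[(verb, noun)] / total
    p_v = verb_counts[verb] / total
    p_n = noun_counts[noun] / total
    return math.log2(p_vn / (p_v * p_n))

# Verbs most highly associated with the container noun "tape" (cf. the figure 2 discussion).
tape_verbs = {v for v, n in pair_counts if n == "tape"}
ranked = sorted(tape_verbs, key=lambda v: pmi(v, "tape"), reverse=True)
print("verbs ranked by PMI with 'tape':", ranked)

# Container-LCP heuristic (cf. the figure 3 discussion): containees are nouns seen to the
# left of "on tape"; verbs shared with containee objects ~ telic, the remainder ~ formal.
containees = {"file", "data", "information"}            # hypothetical candidate containee set
container_verb_set = {v for v, n in pairs if n == "tape"}
containee_verb_set = {v for v, n in pairs if n in containees}
telic = container_verb_set & containee_verb_set          # intersection -> telic candidates
formal = container_verb_set - containee_verb_set         # set difference -> formal candidates
print("telic candidates:", sorted(telic))
print("formal candidates:", sorted(formal))
```

On these toy counts the set operations reproduce the paper's intuition (read, write, copy come out as telic candidates; mount and unload as formal), but a real run of course depends on parsed verb-object extraction and much larger frequency counts.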
coping with ambiguity and unknown words through probabilistic models from spring 1990 through fall 1991 we performed a battery of small experiments to test the effectiveness of supplementing knowledgebased techniques with probabilistic modelsthis paper reports our experiments in predicting parts of speech of highly ambiguous words predicting the intended interpretation of an utterance when more than one interpretation satisfies all known syntactic and semantic constraints and learning case frame information for verbs from example usesfrom these experiments we are convinced that probabilistic models based on annotated corpora can effectively reduce the ambiguity in processing text and can be used to acquire lexical information from a corpus by supplementing knowledgebased techniquesbased on the results of those experiments we have constructed a new natural language system for extracting data from text eg newswire textnatural language processing and ai in general have focused mainly on building rulebased systems with carefully handcrafted rules and domain knowledgeour own natural language database query systems janus parlancetm1 and delphi have used these techniques quite successfullyhowever as we move from the application of understanding database queries in limited domains to applications of processing openended text we found challenges that questioned our previous assumptions and suggested probabilistic models instead1we could no longer assume a limited vocabularyrather in the domain of terrorist incidents of the third message understanding conference roughly 20000 vocabulary items appear in a corpus 430000 words longadditional text from that domain would undoubtedly contain new wordsprobabilistic models offer a mathematically grounded empirically based means of predicting the most likely interpretationto see whether our four hypotheses effectively addressed the four concerns above we chose to test the hypotheses on two wellknown problems ambiguity and inferring syntactic and semantic information about unknown wordsguided by the past success of probabilistic models in speech processing we have integrated probabilistic models into our language processing systemsearly speech research used purely knowledgebased approaches analogous to knowledgebased approaches in nlp systems todaythese required much detailed handcrafted knowledge from several sources however when it became clear that these techniques were too brittle and not scalable speech researchers turned to probabilistic modelsthese provided a flexible control structure for combining multiple sources of knowledge and algorithms for training the system on large bodies of data since probability theory offers a general mathematical modeling tool for estimating how likely an event is probability theory may be applied at all levels in natural language processing because some set of events can be associated with each algorithmfor example in morphological processing in english the events are the use of a word with a particular part of speech in a string of wordsat the level of syntax an event is the
use of a particular structure the model predicts what the most likely rule is given a particular situationone can similarly use probabilities for assigning semantic structure we report in section 2 on our experiments on the assignment of part of speech to words in textthe effectiveness of such models is well known and they are currently in use in parsers our work is an incremental improvement on these models in three ways much less training data than theoretically required proved adequate we integrated a probabilistic model of word features to handle unknown words uniformly within the probabilistic model and measured its contribution and we have applied the forwardbackward algorithm to accurately compute the most likely tag setin section 3 we demonstrate that probability models can improve the performance of knowledgebased syntactic and semantic processing in dealing with structural ambiguity and with unknown wordsthough the probability model employed is not new our empirical findings are novelwhen a choice among alternative interpretations produced by a unificationbased parser and semantic interpreter must be made a simple contextfree probability model reduced the error rate by a factor of two compared with using no modelit is well known that a unification parser can process an unknown word by collecting the assumptions it makes while trying to find an interpretation for a sentenceas a second result we found that adding a contextfree probability model improved the unification predictions of syntactic and semantic properties of an unknown word reducing the error rate by a factor of two compared with no modelin section 4 we report an experiment in learning case frame information of unknown verbs from examplesthe probabilistic algorithm is critical to selecting the appropriate generalizations to make from a set of examplesthe effectiveness of the semantic case frames inferred is measured by testing how well those case frames predict the correct attachment point for prepositional phrasesin this case a significant new model synthesizing both semantic and syntactic knowledge is employedidentifying the part of speech of a word illustrates both the problem of ambiguity and the problem of unknown wordsmany words are ambiguous in several ways as in the following a round table adjective a round of cheese noun to round out your interests verb to work the year round adverb even in context part of speech can be ambiguous as in the famous example quottime flies like an arrowquot where the first three words are ambiguous in two ways resulting in four grammatical interpretations of the sentencein processing text such as newswire ambiguity at the word level is highin an analysis of texts from the wall street journal we found that the average number of parts of speech per word was approximately twodetermining the part of speech of an unknown word can help the system to know how the word functions in the sentence for instance that it is a verb stating an action or state of affairs that it is a common noun stating a class of persons places or things that it is a proper noun naming a particular person place or thing etcif it can do that well then more precise classification and understanding is feasiblethe most critical feature to us is to have local criteria for ranking the alternative parts of speech rather than relying solely on a globally correct parsethe probability model we selected offers these featuresthe name of our component for part of speech is post in our work we have used wellknown probability 
models known as hidden markov models therefore none of the background in section 21 is novelif we want to determine the most likely syntactic part of speech or tag for each word in a sentence we can formulate a probabilistic tagging modellet us assume that we want to know the most likely tag sequence t ft1 t2 trl given a particular word sequence where p is the a priori probability of tag sequence t p is the conditional probability of word sequence w occurring given that a sequence of tags t occurred and p is the unconditioned probability of word sequence w then in principle we can consider all possible tag sequences evaluate p of each and choose the tag sequence t that is most likely ie the sequence that maximizes psince w is the same for all hypothesized tag sequences we can disregard pwe can rewrite the probability of each sequence as a product of the conditional probabilities of each word or tag given all of the previous tagstypically one makes two simplifying assumptions to cut down on the number of probabilities to be estimatedrather than assuming w depends on all previous words and all previous tags one assumes w depends only on tthis independence assumption of course is not correctyet it so reduces the number of probabilities that must be estimated and therefore so reduces the amount of data needed to estimate probabilities that it is a worthwhile simplifying assumptionit is an empirical issue whether alternative assumptions would yield significantly better performancesecond rather than assuming the tag t depends on the full sequence of previous tags we can assume that local context is sufficienttypically individuals have assumed tag t depends only on t_i and t1_2 or only on tz_i this assumed locality is termed a markov independence assumptionusing a tritag model we then have the following if we have sufficient training data we can estimate the tag ngram sequence of probabilities and the probability of each word given a tag using a tagged corpus to train the model is called quotsupervised trainingquot since a human has prepared the correct training datawe conducted supervised training to derive both a bitag and a tritag model based on a corpus from the university of pennsylvania which was created as part of the treebank project consisting of wall street journal articles texts from the library of america transcribed radio broadcasts and transcribed dialoguesthe full treebank consists of approximately 4 million words of textof the 47 parts of speech 36 are word tags and 11 are punctuation tagsof the word tags 22 are tags for open class words and 14 for closed class wordseach word or punctuation mark has been tagged as shown in the following example where nns is plural noun vbd is past tense verb rb is adverbial vbn is past participle verba bitag model predicts the relative likelihood of a particular tag given the preceding tag eg how likely is the tag vbd on the second word in the above example given that the previous word was tagged nnsa tritag model predicts the relative likelihood of a particular tag given the two preceding tags eg how likely is the tag rb on the third word in the above example given that the two previous words were tagged nns and vbdwhile the bitag model is faster at processing time the tritag model has a lower error ratethe algorithm for supervised training is straightforwardone counts for each possible pair of tags the number of times that the pair was followed by each possible third tagthe number of times a given third tag t occurs after tags 11 and t2 divided 
by the number of times t1 and t2 are followed by any third tag is an estimate of the probability of pone also estimates from the training data the conditional probability of each particular word given a known tag this is called the quotword emitquot probabilitythis is simply the number of times a particular word appears as part of speech t divided by the number of times part of speech t appears in the corpusno matter how large the training corpus one may not see all pairs or triples of tags nor all words used in each part of speech possible in the language nor all wordsit seems unwise to assume that the probability of an unseen event is zeroto deal with the previously unseen one employs one of several estimation techniques called quotpaddingquot thus far we have employed the simplest of these techniques for estimating p if t1t2t3 was not present in the training corpussuppose triples beginning with ti t2 appear m times in the corpussuppose further that for j distinct tags t tit2t was not present in the corpusthen we estimate p 1m so that the probability of tags given ti t2 sum to one we subtract 1jm from the probability of each triple that actually was observed in the corpus ie if ti t2ti was observed k times in the corpus then we estimate p km 1 jmgiven these probabilities one can then find the most likely tag sequence for a given word sequenceusing the viterbi algorithm we selected the path whose overall probability was highest and then took the tag predictions from that pathwe replicated the earlier results that this process is able to predict the parts of speech with only a 34 error rate when the possible parts of speech of each of the words in the corpus are knownthis is in fact about the rate of discrepancies among human taggers on the treebank project while supervised training is shown here to be very effective it requires a correctly tagged corpushow much manually annotated data is requiredin our experiments we demonstrated that the training set can in fact be much smaller than might have been expectedone rule of thumb suggests that the training set needs to be large enough to contain on average ten instances of each type of tag sequence that occursthis would imply that a tritag model using 47 possible parts of speech would need a bit more than 1 million words of training if all possible tag sequences occurhowever we found that much less training data is necessary since many possible sequences do not occurit can be shown that if the average number of tokens of each trigram that has been observed is ten then the lower bound on the probability of new trigrams is 110thus the likelihood of a new trigram is fairly lowwhile theoretically the set of possible events is all permutations of the tags in practice only a relatively small number of tritag sequences actually occurout of about 97000 possible triples we found only 6170 unique triples when we trained on 64000 words and about 10000 when we trained on 1000000 wordsthus even size of trkag training sets though an additional 4000 sequences are observed in the full training set they are so rare that they do not significantly affect the overall accuracyin our initial experiments which were limited to known words the error rate for a supervised tritag model increased only from 330 to 387 when the size of the training set was reduced from 1 million words to 64000 words all that is really necessary recalling the rule of thumb is enough training to allow for ten of each of the tag sequences that do occurthis result is applicable to new tag sets 
subdomains or languageswe simply continue to increase the amount of training data until the number of training tokens is at least ten times the number of different sequences observed so faralternatively we can stop when the singleton events account for a small enough percentage of the total datathus in applications such as tagging where a significant number of the theoretically possible events do not occur in practice we can use supervised training of probabilistic models without needing prohibitively large corporaof course performance of post is also affected by the estimates of p for known words and unknown wordshow to estimate p for unknown words is covered in the next sectionfor an observed word a small training set of 64000 words may still be adequate for estimates of pwe found that by treating words observed only once as if they had not been observed at all that performance actually increased slightlythis suggests that adequate performance can be obtained from a relatively small training setwe are not aware of any other published studies documenting empirically the impact of training set size on performancesources of openended text such as a newswire present natural language processing technology with a major challenge what to do with words the system has never seen beforecurrent technology depends on handcrafted linguistic and domain knowledgefor instance the system that performed most successfully in the evaluation of software to extract data from text at the second message understanding conference held at the naval ocean systems center june 1989 would simply halt processing a sentence when a new word was encounteredusing the upenn set of parts of speech unknown words can be in any of 22 categoriesa tritag model can be used to estimate the most probable onerandom choice among the 22 open classes would be expected to show an error rate for new words of 95the best previously reported error rate based on probabilistic models was 75 in our first tests using the trkag model we showed an error rate of only 516however this model only took into account the context of the word and no information about the word itselfin many languages including english word endings give strong indicators of the part of speechfurthermore capitalization information when available can help to indicate whether a word is a proper nounwe have developed a novel probabilistic model that takes into account features of the word in determining the likelihood of the word given a part of speechthis was used instead of the quotword emitquot probabilities p for known wordsto estimate p for an unknown word we first determined the features we thought would distinguish parts of speechthere are four independent categories of features inflectional endings derivational endings hyphenation and capitalization these are not necessarily independent though we are treating them as such for our testsour initial test had 3 inflectional endings and 32 derivational endings capitalization has four values in our system in order to take into account the first word of a sentencewe can incorporate these features of the word into the probability that this particular word will occur given a particular tag using the following we estimate the probability of each ending for each tag directly from supervised training datawhile these probabilities are not strictly independent the approximation is good enough to make a marked difference in classification of unknown wordsas the results in figure 2 show the use of orthographic endings of words reduces 
the error rate on the unknown words by a factor of threewe tested capitalization separately since some data such as that in the third message understanding conference is uppercase onlytitles and bibliographies will cause similar distortions in a system trained on mixed case and using capitalization as a featurefurthermore some languages such as japanese have no explicit marking of proper nounsinterestingly the capitalization feature contributed very little to the reduction in error rates whereas using the word features contributed a great dealhowever it does undeniably reduce confusion with respect to the proper noun categorysome wellknown previous efforts have dealt with unknown words using various heuristicsfor instance church program parts has a prepass prior to applying the tritag probability model that predicts proper nouns based on capitalizationthe new aspects of our work are incorporating the treatment of unknown words uniformly within the probability model approximating the component probabilities for unknowns directly from the training data and measuring the contribution of the tritag model of the ending and of capitalizationin sum adding a probability model of typical endings of words to the trkag model has yielded an accuracy of 82 for unknown wordsadding a model of capitalization to the other two models further increased the accuracy to 85the total effect of bbn model has been a reduction of a factor of five in the error rate of the best previously reported performancedecreasing error rate with use of word featuresan alternative mode of running post is to return the set of most likely tags for each word rather than a single tag for eachin our first test the system returned the sequence of most likely tags for the sentencethis has the advantage of eliminating ambiguity however even with a rather low error rate of 37 there are cases in which the system returns the wrong tag which can be fatal for a parsing system trying to deal with sentences averaging more than 20 words in lengthde marcken developed an approximate method for finding multiple tags for each word given the preceding words and one following wordwe addressed this problem by adding the ability of the tagger to return for each word an ordered list of tags marked by their probability using the forward backward algorithm that yields a more precise method of determining the probability of each possible tag since it sums over all possible tag sequences taking into account the entire sentence and not just the preceding tagsthe forward backward algorithm is normally used in unsupervised training to estimate the model that finds the maximum likelihood of the parameters of that modelthe exact probability of a particular tag given a particular word is computed directly by the product of the quotforwardquot and quotbackwardquot probabilities to that tag divided by the probability of the word sequence given this modelfigure 3 shows kbest tagging output with the correct tag for each word marked in boldnote that the probabilities are in natural log base e thus for each difference of 1 there is a factor of 2718 in the probabilityin two of the words the first tag is not the kbest tags and probabilities correct onehowever in all instances the correct tag is included in the setnote the first word quotbaileyquot is unknown to the system therefore all of the open class tags are possiblein order to reduce the ambiguity further we tested various ways to limit how many tags were returned based on their probabilitiesoften one tag is very 
likely and the others while possible are given a low probability as in the word quotinquot abovetherefore we tried removing all tags whose probability was less than some arbitrary threshold for example removing all tags whose likelihood is more than e2 less likely than the most likely tagso only tags within the threshold 20 of the most likely would be included this reduced the ambiguity for known words from 193 tags per word to 123 and for unknown words from 152 to 20however the negative side of using cutoffs is that the correct tag may be excludednote that a threshold of 20 would exclude the correct tag for the word quotcontrolsquot aboveby changing the threshold to 40 we are sure to include all the correct tags in this example but the ambiguity for known words increases from 123 to 124 and for unknown words from 20 to 37 for an ambiguity rating of 157 overallwe are continuing experiments to determine the most effective way of limiting the number of tags returned and hence decreasing ambiguity while ensuring that the correct tag is likely to be in the setbalancing the tradeoff between ambiguity and accuracy is very dependent on the use the tagging will be put toit is dependent both on the component that the tagged text directly feeds into such as a parser that can efficiently follow many parses but cannot recover easily from errors versus one capable of returning a partial parse and on the application such as an application requiring high accuracy versus one requiring high speed in all of the tests discussed so far we both trained and tested on sets of articles in the same domain the wall street journal subset of the penn treebank projecthowever an important measure of the usefulness of the system is how well it performs in other domainswhile we would not expect high performance in radically different kinds of text such as transcriptions of conversations or technical manuals we would hope for similar performance on newspaper articles from different sources and on other topicswe tested this hypothesis using data from the third message understanding conference the goal of muc3 was to extract data from texts on terrorism in latin american countriesthe texts are a mixture of news interviews and speechesthe university of pennsylvania treebank project tagged 400 muc messages which we divided into 90 training and 10 testingfor our first test we used the original probability tables trained from the wall street journal articles but tested on muc messageswe then retrained the probabilities on the muc messages and ran a second test on muc messages with an average improvement of three percentage points in both bi and tri tagsthe full results are shown in figure 4 85 of the words in the test were unknown while the results using the new tables are an improvement in these firstbest tests we saw the best results using kbest mode which obtained a 7 error ratewe ran several tests using our kbest algorithm with various thresholdsas described in section 24 the threshold limits how many tags are returned based on their probabilitieswhile this reduces the ambiguity compared to considering all possibilities it also increases the error ratefigure 5 shows this tradeoff from effectively no threshold on the righthand side of the graph which has a 7 error rate and an ambiguity of 3 through a cutoff of 2 which has an error rate of 29 but an ambiguity of nearly zeroie one tag per wordin all of the results reported here we are using wordpartofspeech tables derived from training rather than online dictionaries to 
determine the possible tags for a given wordthe advantage of the tables is that the training provides the probability of a word given a tag whereas the dictionary makes no distinctions between common and uncommon uses of a wordthe disadvantage of this is that uses of a word that did not occur in the training set will be unknown to the systemfor example in the training portion of the wsj corpus the word quotputquot only occurred as a verbhowever in our test set it occurred as a noun in the compound quotput optionquot since for efficiency reasons we only consider those tags known to be possible for a word this will cause an errorwe have since integrated online dictionaries into the system so that alternative word senses will be considered while still not opening the set of tags considered for a known word to all open class tagsthis will not completely eliminate the problem since words are often used in novel ways as in this example from a public radio plea for funds quotyou can mastercard your pledgequot comparison of original and trained probabilitiesthe performance of today natural language understanding systems is hindered by the following three complementary problems our results on problems and above are presented in this sectionthe problem of partial interpretation when no complete interpretation can be found is touched upon in section 4probabilities can quantify the likelihood of alternative complete interpretations of a sentencein these experiments we used the grammar of the delphi component from bbn harc system which combines syntax and semantics in a unification formalismwe employed a contextfree model which estimates the probability of each rule in the grammar independently in the contextfree model we associate a probability with each rule of the grammarfor each distinct major category of the grammar there is a set of contextfree rules for each rule one estimates the probability of the righthand side given the lefthand side pwith supervised training where a set of correct parse trees is provided as training one estimates p i lhs by the number of times rule lhs rhsi appears in the training set divided by the number of times lhs appears in the treesthe probability of a syntactic structure s given the input string w is then modeled by the product of the probabilities of the rules used in s chitrao and grishman used a similar contextfree modelusing this model we explored the following issues probability of a parse tree given wordsour intention is to use the treebank corpus being developed at the university of pennsylvania as a source of correct structures for traininghowever in our first experiments we used small training sets taken from an existing questionanswering corpus of sentences about a personnel databaseto our surprise we found that as little as 80 sentences of supervised training are sufficient to improve the ranking of the interpretations foundin our tests the nlp system produces all interpretations satisfying all syntactic and semantic constraintsfrom that set the intended interpretation must be chosenthe contextfree probability model reduced the error rate on an independent test set predictions of probabilistic language model by a factor of two to four compared with no model ie random selection from the interpretations satisfying all knowledgebased constraintswe tested the predictive power of rule probabilities using this model both in unsupervised and in supervised modein the former case the input is all parse trees for the sentences in the training setin the latter case 
the training data included a specification of the correct parse as hand picked by the grammar author from among the parse trees produced by the systemthe detailed results from using a training set of 81 sentences appear in the histogram in figure 7the fact that so little data was adequate deserves further scrutinythe grammar had approximately 1050 rules one third of which are lexical eg a category goes to a wordestimating the lexical level is best handled via the partofspeech techniques covered in the previous sectiontherefore there were 700 nonlexical rulesthe training corpus consisted of 81 sentences whose parses averaged approximately 35 rules per sentencetherefore the corpus of trees included approximately 2850 rule occurrences or about 4 per rule on average over all ruleshowever as few as half of the rules were actually employed leading to an average of roughly 8 rule occurrences per rule observedtherefore there was close to the amount of data one would predict as desirableone further note about counting rule occurrences in the unification grammarrather than counting different unification bindings as different rules we counted the rule with unbound variables representing an equivalence class of rules with bound variablesthe quotbest possiblequot error rates for each test indicates the percentage of cases for which none of the interpretations produced by the system was judged correct so that no selection scheme could achieve a lower error rate than thatthe quotchancequot score gives the error rate that would be expected with random selection from all interpretations producedthe quottestquot column shows the error rate with the supervised or unsupervised probability model in questionthe first supervised test had an 814 improvement the second a 508 improvement and the third a 56 improvementthese results state how much better than chance the given model did as a percentage of the maximum possible improvementwe expect to improve the model performance by recording probabilities for other features in addition to just the set of rules involved in producing themfor example in the grammar used for this test two different attachments for a prepositional phrase produced trees with the same set of rules but differing in shapethus the simple contextfree model based on the product of rule probabilities could not capture preferences concerning such attachmentby adding to the model probabilities for such additional features we expect that the power of the probabilistic model to automatically select the correct parse can be substantially increasedsecond a much more reliable estimate of p can be estimated as described in section 2in fact one should be able to improve the estimate of a tree likelihood via p p pone purpose for probabilistic models is to contribute to handling new words or partially understood sentenceswe have done preliminary experiments that show that there is promise in learning lexical syntactic and semantic features from context when probabilistic tools are used to help control the ambiguityin our experiments we used a corpus of sentences each with one word that the system did not knowto create the corpus we began with a corpus of sentences known to parse from a personnel questionanswering domainwe then replaced one word in each sentence with an undefined wordfor example in the following sentence the word quotcontactquot is undefined in the system who in division four is the contact for mitthat word has both a noun and a verb part of speech however the pattern of parts of speech of 
the words surrounding quotcontactquot causes the tritag model to return a high probability that the word is a nounusing unification variables for all possible features of a noun the parser produces multiple parsesapplying the contextfree rule probabilities to select the most probable of the resulting parses allows the system to conclude both syntactic and semantic facts about quotcontactquot syntactically the system discovers that it is a count noun with third person singular agreementsemantically the system learns that quotcontactquot is in the semantic class personsfurthermore the partially specified semantic representation for the sentence as a whole also shows the semantic relation to schools which is expressed here by the for phrasethus even a single use of an unknown word in context can supply useful data about its syntactic and semantic featuresprobabilistic modeling plays a key role in this processwhile contextsensitive techniques for inferring lexical features can contribute a great deal they can still leave substantial ambiguityas a simple example suppose the word quotlistquot is undefined in the sentence list the employeesthe tritag model predicts both a noun and a verb part of speech in that positionusing an underspecified noun sense combined with the usual definitions for the rest of the words yields no parseshowever an underspecified verb sense yields three parses differing in the subcategorization frame of the verb quotlistquot for more complex sentences even with this very limited protocol the number of parses for the appropriate word sense can reach into the hundredsusing the rule probabilities acquired through supervised training the likelihood of the ambiguous interpretations resulting from a sentence with an unknown word was computedthen we tested whether the tree ranked most highly matched the tree previously selected by a person as the correct onethis tree equivalence test was based on the tree structure and on the rule applied at each node while an underspecified tree might have some lessspecified feature values than the chosen fully specified tree it would still be equivalent in the sense aboveof 160 inputs with an unknown word in 130 cases the most likely tree matched the correct one for an error rate of 1875 while picking at random would have resulted in an error rate of 37 for an improvement by a factor of 2this suggests that probabilistic modeling can be a powerful tool for controlling the high degree of ambiguity in efforts to automatically acquire lexical datawe have also begun to explore heuristics for combining lexical data for a single word acquired from a number of partial parsesthere are some cases in which the best approach is to unify the two learned sets of lexical features so that the derived sense becomes the sum of the information learned from the two examplesfor instance the verb subcategorization information learned from one example could be thus combined with agreement information learned from anotheron the other hand there are many cases including alternative subcategorization frames where each of the encountered options needs to be included as a separate alternativetraditionally natural language processing has focused on obtaining complete syntactic analyses of all input and on semantic analysis based on handcrafted knowledgehowever grammars are incomplete text often contains new words and there are errors in textfurthermore as research activities tackle broader domains if the research results are to scale up to realistic applications 
handcrafting knowledge must give way to automatic knowledge base constructionan alternative to traditional parsers is represented in fidditch mitfp and cass instead of requiring complete parses a forest is frequently produced each tree in the forest representing a nonoverlapping fragment of the inputhowever algorithms for finding the semantics of the whole from the disjoint fragments have not previously been developed or evaluatedwe have been comparing several differing algorithms from various sites to evaluate both the effectiveness of such a strategy in correctly predicting fragmentsthis is reported first the central experiment in this section tests the feasibility of learning case frame information for verbs from examplesin the method tested we assume that a body of fully parsed sentences such as those from treebank are availablewe furthermore assume that every head noun and head verb has a lexical link to a unary predicate in a taxonomic domain model that unary predicate is the most specific semantic class of entities denoted by the headwordfrom the parsed examples and the lexical links to the domain model an algorithm identifies case frame relations for the verbsif an algorithm is to learn case frame relations from text a basic concern is to reliably identify noun phrases and their semantic category even if neither full syntactic nor full semantic analysis is possiblefirst we discuss reliably finding them based on local syntactic informationin the next section we describe finding their semantic categorytwo of our experiments have focused on the identification of core noun phrases a primary way of expressing entities in texta core np is defined syntactically as the maximal simple noun phrase ie the largest one containing no postmodifiershere are some examples of core nps within their full noun phrases a joint venture with the chinese government to build an automobileparts assembly plant a 509 million loss from discontinued operations in the third quarter because of the proposed sale such complex full nps require too many linguistic decisions to be directly processed without detailed syntactic and semantic knowledge about each word an assumption that need not be true for openended textwe tested two differing algorithms on text from the wall street journal using bbn partofspeech tagger tagged text was parsed using the full unification grammar of delphi to find only core nps 695 in 100 sentenceshandscoring of the results indicated that 85 of the core nps were identified correctlysubsequent analysis suggested that half the errors could be removed with only a little additional work suggesting that over 90 performance is achievablein a related test we explored the bracketings produced by church parts program we extracted 200 sentences of wsj text by taking every tenth sentence from a collection of manually corrected parse trees we evaluated the np bracketings in these 200 sentences by hand and tried to classify the errorsof 1226 phrases in the 200 sentences 131 were errors for a 107 error ratethe errors were classified by hand as follows the 90 success rate in both tests suggests that identification of core nps can be achieved using only local information and with minimal knowledge of the wordsnext we consider the issue of what semantics should be assigned and how reliably that can be accomplishedin trying to extract prespecified data from openended text such as a newswire it is clear that full semantic interpretation of such texts is not on the horizonhowever our hypothesis is that it need 
not be for automatic data base updatethe type of information to be extracted permits some partial understandingfor semantic processing minimally for each noun phrase one would like to identify the class in the domain model that is the smallest predefined class containing the np denotationsince we have assumed that the lexicon has a pointer to the most specific class in the domain model the issue reduces to whether we can algorithmically predict the word if any in a noun phrase that denotes the np semantic classfor each clause one would like to identify the corresponding event class or state of affairs denotedour pilot experiment focused on the reliability of identifying the minimal class for each noun phraseassigning a semantic class to a core noun phrase can be handled via some structural rulesusually the semantic class of the headword is correct for the semantic class not only of the core noun phrase but also of the complete noun phrase it is part ofadditional rules cover exceptions such as quotset of quotthese heuristics correctly predicted the semantic class of the whole noun phrase 99 of the time in the sample of over 1000 noun phrases from the wsj that were correctly predicted by church parts programfurthermore even some of the nps whose left boundary was not predicted correctly by parts nevertheless were assigned the correct semantic classone consequence of this is that the correct semantic class of a complex noun phrase can be predicted even if some of the words in the noun phrase are unknown and even if its full structure is unknownthus fully correct identification of core noun phrase boundaries and of noun phrase boundaries may not be necessary to accurately produce database updatesthis result is crucial to our method of inferring case frames of verbs from examplessimple rules can predict which word designates the semantic clause of a noun phrase very reliablywe can use these simple rules plus lexical lookup to identify the basic semantic class of a noun phrasesemantic knowledge called selection restrictions or case frames governs what phrases make sense with a particular verb or noun traditionally such semantic knowledge is handcrafted though some software aids exist to enable greater productivity instead of handcrafting this semantic knowledge our goal is to learn that knowledge from examples using a threestep process noun verb and proper noun in the sample with the semantic class corresponding to it in the domain modelfor instance dawn would be annotated explode would be and yunguyo would be for our experiment 560 nouns and 170 verbs were defined in this waywe estimate that this semantic annotation proceeded at about 90 words per hour432 supervised trainingfrom the treebank project at the university of pennsylvania we used 20000 words of muc3 texts that had been bracketed according to major syntactic categorythe bracketed constituents for the sentence below appears in figure 8from the example one can clearly infer that bombs can explode or more properly that bomb can be the logical subject of explode that at dawn can modify explode etcnaturally good generalizations based on the instances are more valuable than the instances themselvessince we have a hierarchical domain model and since the manual semantic annotation states the relationship between lexical items and concepts in the domain model we can use the domain model hierarchy as a given set of categories for generalizationhowever the critical issue is selecting the right level of generalization given the set of examples in 
the supervised training setwe have chosen a known statistical procedure that selects the minimum level of generalization such that there is sufficient data in the training set to support discrimination of cases of attaching phrases to their headthis leads us to the next topic estimation of probabilities from the supervised training set433 estimation of probabilitiesthe case relation or selection restriction to be learned is of the form x p 0 where x is a headword or its semantic class p is a case eg logical subject logical object preposition etc and 0 is a head word or its semantic classone factor in the probability that 0 attaches to x with case p is p an estimate of the likelihood of attaching po to x given p and 0we chose to model a second multiplicative factor p the probability of an attachment where d words separate the headword x from the phrase to be attached for instance in the example previously discussed in the town is attached to the verb explode at a distance of four words of yunguyo is attached to the noun town at a distance of one word back etcthus we estimate the probability of attachment as p psince a 20000word corpus does not constitute enough data to estimate the probability of all triples we used an extension and generalization of an algorithm to automatically move up the hierarchical domain model from x to its parent and from 0 to its parentthe quotbackingoffquot that was originally proposed for the estimation of probabilities of ngram sequences of words starts with the most detailed modelin this case we start with the explicit probability of the phrase po attaching to the word xif we have no examples of x p 0 in the training set we consider with some penalty a class of x or 0thus the event becomes less specific but more likely to have been observedwe back off on the detail until we can estimate the probability from the training setthe katz algorithm gives a way to estimate the backoff penalty as the probability that we would not have observed the more detailed triple even though it was possible434 the experimentby examining the table of triples x p 0 that were learned it was clear that meaningful information was induced from the examplesfor instance and were learned which correspond to two cases of importance in the muc domainas a consequence useful semantic information was learned by the training algorithmhowever we ran a far more meaningful evaluation of what was learned by measuring how effective the learned information would be at predicting 166 prepositional phrase attachments that were not made by our partial parserfor example in the following sentence in the peruvian town can be attached syntactically at three places modifying dawn modifying today or modifying explodea bomb exploded today at dawn in the peruvian town of yunguyo near the lake very near where the presidential summit was to take placeclosest attachment a purely syntactic constraint worked quite effectively having a 25 error rateusing the semantic probabilities alone p had poorer performance a 34 error ratehowever the richer probability model pi p outperformed both the purely semantic model and the purely syntactic model yielding an 18 error ratehowever the degree of reduction of error rate should not be taken as the final word for the following reasons import in the muc3 domain their semantic type is vague ie etcin addition to the work discussed earlier on tools to increase the portability of natural language systems another recent paper is directly related to our goal of inferring case frame 
information from exampleshindle and rooth focused only on prepositional phrase attachment using a probabilistic model whereas our work applies to all case relationstheir work used an unsupervised training corpus of 13 million words to judge the strength of prepositional affinity to verbs eg how likely it is for to to attach to the word go for from to attach to the word leave or for to to attach to the word flightthis lexical affinity is measured independently of the object of the prepositionby contrast we are exploring induction of semantic relations from supervised training where very little training may be availablefurthermore we are looking at triples of headword syntactic case and headword in hindle and rooth test they evaluated their probability model in the limited case of verbnoun phraseprepositional phrasetherefore no model at all would be at least 50 accuratein our test many of the test cases involved three or more possible attachment points for the prepositional phrase which provided a more realistic testan interesting next step would be to combine these two probabilistic models in order to get the benefit of domainspecific knowledge as we have explored and the benefits of domainindependent knowledge as hindle and rooth have exploredthe experiments on the effectiveness of finding core nps using only local information were run by midsummer 1990in fall 1990 another alternative the fast partial parser which is a derivative of earlier work became available to usit finds fragments using a stochastic part of speech algorithm and a nearly deterministic parserit produces fragments averaging three to four words in lengthfigure 9 shows an example output for the sentencea bomb exploded today at dawn in the peruvian town of yunguyo near the lake very near where the presidential summit was to take placecertain sequences of fragments appear frequently as illustrated in tables 1 and 2one frequently occurring pair is an s followed by a pp since there is more than one way the parser could attach the pp and syntactic grounds alone for attaching the pp would yield poor performance semantic preferences applied by a postprocess that combines fragments are called forin our approach we propose using local syntactic and semantic information rather than assuming a global syntactic and semantic form will be foundthe first step is to compute a semantic interpretation for each fragment found without assuming that the meaning of each word is knownfor instance as described above the semantic class for any noun phrase can be computed provided the head noun has semantics in the domainbased on the data above a reasonable approach is an algorithm that moves lefttoright through the set of fragments produced by fpp deciding to attach fragments based on semantic criteriato avoid requiring a complete global analysis a window two constituents wide is used to find patterns of possible relations among phrasesfor example an s followed by a pp invokes an action of finding all points along the quotright edgequot of the s tree where a pp could attach applying the fragment combining patterns at each such spot and ranking the alternativesas is evident in table 2 fpp frequently does not attach punctuationthis is to be expected since punctuation is used in many ways and there is no deterministic basis for attaching the constituent following the punctuation to the constituent preceding ittherefore if the pair being examined by the combining algorithms ends in punctuation the algorithm looks at the constituent following it trying 
to combine it with the constituent to the left of the punctuationa similar case is when the pair ends in a conjunctionhere the algorithm tries to combine the constituent to the right of the conjunction with that on the left of the conjunction
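The semantic preference that drives this fragment-combining step can be made concrete with a small worked example. The sketch below is illustrative only, not the system described here: it propagates supervised (head, case, head) observations up a toy domain-model hierarchy, backs off from the specific triple toward more general classes with a fixed penalty (a crude stand-in for the Katz-style back-off weights described above), multiplies in the distance factor, and uses the resulting score to rank the candidate attachment points for the prepositional phrase in the example sentence above. The hierarchy, counts, names, and the penalty constant are all invented.

```python
"""A minimal, illustrative sketch (not the system described above) of the
attachment preference used when combining fragments: a backed-off estimate of
p(attach | head, case, dependent) multiplied by a distance factor p(attach | d).
The toy hierarchy, training triples, counts, and the fixed back-off penalty
are invented; the real system uses Katz-style back-off weights, not a constant."""

from collections import Counter

# Hypothetical domain-model hierarchy: word or class -> more general class.
PARENT = {
    "bomb": "EXPLOSIVE-DEVICE", "EXPLOSIVE-DEVICE": "WEAPON", "WEAPON": "THING",
    "town": "LOCATION", "dawn": "TIME", "LOCATION": "THING", "TIME": "THING",
    "explode": "EXPLOSION-EVENT", "EXPLOSION-EVENT": "EVENT",
}

def ancestors(item):
    """Yield the item itself, then successively more general classes."""
    while item is not None:
        yield item
        item = PARENT.get(item)

class AttachmentModel:
    BACKOFF_PENALTY = 0.4      # crude stand-in for a Katz-style back-off weight

    def __init__(self, training_triples, distance_counts):
        # training_triples: (head, case, dependent-head) attachments observed in
        # the supervised training set; each observation is propagated up the
        # hierarchy so that more general classes accumulate counts for back-off.
        self.attach = Counter()
        self.case_totals = Counter()
        for x, case, o in training_triples:
            for xc in ancestors(x):
                for oc in ancestors(o):
                    self.attach[(xc, case, oc)] += 1
                self.case_totals[(xc, case)] += 1
        # distance_counts[d] = (attachments seen at distance d, opportunities at d)
        self.distance_counts = distance_counts

    def p_semantic(self, x, case, o):
        """Backed-off p(attach | x, case, o): try the specific triple first,
        then climb toward more general classes, paying a penalty per step."""
        penalty = 1.0
        for xc in ancestors(x):
            for oc in ancestors(o):
                seen = self.attach[(xc, case, oc)]
                if seen:
                    return penalty * seen / self.case_totals[(xc, case)]
                penalty *= self.BACKOFF_PENALTY
        return penalty * 1e-3                    # floor for completely unseen events

    def p_distance(self, d):
        hits, opportunities = self.distance_counts.get(d, (1, 20))
        return hits / opportunities

    def score(self, x, case, o, d):
        return self.p_semantic(x, case, o) * self.p_distance(d)

if __name__ == "__main__":
    model = AttachmentModel(
        training_triples=[("explode", "in", "town"), ("explode", "at", "dawn")],
        distance_counts={1: (8, 20), 2: (5, 20), 4: (3, 20)},
    )
    # Candidate attachment points for "in the peruvian town" in the example
    # sentence: (head, case, dependent-head, distance back to the head).
    candidates = [("dawn", "in", "town", 1),
                  ("today", "in", "town", 2),
                  ("explode", "in", "town", 4)]
    best = max(candidates, key=lambda c: model.score(*c))
    print(best)   # closest attachment alone would pick "dawn"; with these toy
                  # counts the semantic factor makes attachment to "explode" win
```

In a fragment combiner of the kind sketched in the text, a score of this sort would be computed for each attachment point found along the right edge of the preceding constituent, and the highest-scoring combination kept.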
J93-2006
coping with ambiguity and unknown words through probabilistic models. from spring 1990 through fall 1991 we performed a battery of small experiments to test the effectiveness of supplementing knowledgebased techniques with probabilistic models. this paper reports our experiments in predicting parts of speech of highly ambiguous words, predicting the intended interpretation of an utterance when more than one interpretation satisfies all known syntactic and semantic constraints, and learning case frame information for verbs from example uses. from these experiments we are convinced that probabilistic models based on annotated corpora can effectively reduce the ambiguity in processing text and can be used to acquire lexical information from a corpus by supplementing knowledgebased techniques. based on the results of those experiments we have constructed a new natural language system for extracting data from text, eg newswire text. our model incorporates the treatment of unknown words within the probability model
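To make the parse-selection model mentioned in this summary concrete, the following is a minimal sketch under stated assumptions (invented rule names, toy trees, and add-k smoothing standing in for whatever estimator was actually used): rule probabilities are estimated from the hand-picked training parses, each candidate analysis is scored by the product of the probabilities of the rules in its derivation, and the most probable tree is kept. This is the same ranking the paper applies when choosing among the analyses licensed by an unknown word's alternative part-of-speech or subcategorization hypotheses.

```python
"""A minimal sketch of supervised parse selection: estimate rule probabilities
from hand-picked training parses, score each candidate analysis by the product
of its rule probabilities, and keep the highest-scoring tree.  Rule names and
trees are invented; lexical rules are left out, since the lexical level is
handled by the part-of-speech model."""

import math
from collections import Counter

# A tree is (rule_name, [subtrees]); preterminal nodes get an empty child list.

def rule_counts(training_trees):
    """Count rule occurrences over the hand-picked training parses."""
    counts = Counter()
    def walk(tree):
        rule, children = tree
        counts[rule] += 1
        for child in children:
            walk(child)
    for tree in training_trees:
        walk(tree)
    return counts

def rule_probs(counts, add=0.5):
    """Relative frequencies with a little add-k smoothing, so a rule unseen
    in a small training set does not zero out an entire candidate parse."""
    total = sum(counts.values()) + add * (len(counts) + 1)
    probs = {rule: (c + add) / total for rule, c in counts.items()}
    return probs, add / total            # second value: floor for unseen rules

def log_score(tree, probs, floor):
    """log P(tree), approximated as the sum of log rule probabilities."""
    rule, children = tree
    return math.log(probs.get(rule, floor)) + sum(
        log_score(child, probs, floor) for child in children)

def best_parse(candidates, probs, floor):
    return max(candidates, key=lambda t: log_score(t, probs, floor))

if __name__ == "__main__":
    training = [
        ("S->NP VP", [("NP->DET N", []), ("VP->V NP", [("NP->DET N", [])])]),
        ("S->NP VP", [("NP->N", []), ("VP->V", [])]),
    ]
    probs, floor = rule_probs(rule_counts(training))
    # Two candidate analyses of the same sentence, e.g. reflecting different
    # subcategorization hypotheses for an unknown verb.
    frame_seen = ("S->NP VP", [("NP->DET N", []),
                               ("VP->V NP", [("NP->DET N", [])])])
    frame_unseen = ("S->NP VP", [("NP->DET N", []),
                                 ("VP->V PP", [("PP->P NP", [])])])
    print(best_parse([frame_seen, frame_unseen], probs, floor))
    # with these toy counts, the analysis built only from rules seen in the
    # training parses scores higher and is selected
```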
empirical studies on the disambiguation of cue phrases phrases are linguistic expressions such as now and function as explicit indicators of structure of a discourse for example signal the beginning of a subtopic or a return a previous topic while mark subsequent material as a response to prior material or as an explanatory comment however while cue phrases may convey discourse structure each also one or more alternate uses while be used sententially as an adverbial for example the discourse use initiates a digression although distinguishing discourse and sentential uses of cue phrases is critical to the interpretation and generation of discourse the question of how speakers and hearers accomplish this disambiguation is rarely addressed this paper reports results of empirical studies on discourse and sentential uses of cue phrases in which both textbased and prosodic features were examined for disambiguating power based on these studies it is proposed that discourse versus sentential usage may be distinguished by intonational features specifically pitch accent and prosodic phrasing a prosodic model that characterizes these distinctions is identified this model is associated with features identifiable from text analysis including orthography and part of speech to permit the application of the results of the prosodic analysis to the generation of appropriate intonational features for discourse and sentential uses of cue phrases in synthetic speech phrases and phrases that directly signal the structure of a discourse been variously termed words discourse markers discourse connectives particles the computational linguistic and conversational analysis cue phrases are linguistic expressions such as now and well that function as explicit indicators of the structure of a discoursefor example now may signal the beginning of a subtopic or a return to a previous topic while well may mark subsequent material as a response to prior material or as an explanatory commenthowever while cue phrases may convey discourse structure each also has one or more alternate useswhile incidentally may be used sententially as an adverbial for example the discourse use initiates a digressionalthough distinguishing discourse and sentential uses of cue phrases is critical to the interpretation and generation of discourse the question of how speakers and hearers accomplish this disambiguation is rarely addressedthis paper reports results of empirical studies on discourse and sentential uses of cue phrases in which both textbased and prosodic features were examined for disambiguating powerbased on these studies it is proposed that discourse versus sentential usage may be distinguished by intonational features specifically pitch accent and prosodic phrasinga prosodic model that characterizes these distinctions is identifiedthis model is associated with features identifiable from text analysis including orthography and part of speech to permit the application of the results of the prosodic analysis to the generation of appropriate intonational features for discourse and sentential uses of cue phrases in synthetic speechcue phrases words and phrases that directly signal the structure of a discourse have been variously termed clue words discourse markers discourse connectives and discourse particles in the computational linguistic and conversational analysis literaturethese include items such as now which marks the introduction of a new subtopic or return to a previous one well which indicates a response to previous material or 
an explanatory comment incidentally by the way and that reminds me which indicate the beginning of a digression and anyway and in any case which indicate a return from a digressionthe recognition and appropriate generation of cue phrases is of particular interest to research in discourse structurethe structural information conveyed by these phrases is crucial to many tasks such as anaphora resolution the inference of speaker intention and the recognition of speaker plans and the generation of explanations and other text despite the crucial role that cue phrases play in theories of discourse and their implementation however many questions about how cue phrases are identified and defined remain to be examinedin particular the question of cue phrase polysemy has yet to receive a satisfactory solutioneach lexical item that has one or more discourse senses also has one or more alternate sentential senses which make a semantic contribution to the interpretation of an utteranceso sententially now may be used as a temporal adverbial incidentally may also function as an adverbial and well may be used with its adverbial or attributive meaningsdistinguishing between whether a discourse or a sentential usage is meant is obviously critical to the interpretation of discourseconsider the cue phrase nowroughly the sentential or deictic use of now makes reference to a span of time that minimally includes the utterance timethis time span may include little more than moment of utterance as in example 1 or it may be of indeterminate length as in example 2fred yeah i think we will look that up and possibly uh after one of your breaks harryharry ok we will take one nowjust hang on bill and we will be right back with youharry you know i see more coupons now than i have ever seen before and i will bet you have toothese examples are taken from a radio callin program quotthe harry gross show speaking of your moneyquot which we will refer to as this corpus will be described in more detail in section 4in contrast the discourse use of now signals a return to a previous topic as in the two examples of now in example 3 or introduces a subtopic as in example 4 harry fred whatta you have to say about this ira problemfred ok you see now unfortunately harry as we alluded to earlier when there is a distribution from an ira that is taxable discussion of caller beneficiary status now the five thousand that you are alluding to uh of the doris i have a couple quick questions about the income taxthe first one is my husband is retired and on social security and in 81 he few odd jobs for a friend uh around the property and uh he was reimbursed for that to the tune of about 640now where would he where would we put that on the formexample 5 nicely illustrates both the discourse and sentential uses of now in a single utterancenow now that we have all been welcomed here it is time to get on with the business of the conferencein particular the first now illustrates a discourse usage and the second a sentential usagethis example is taken from a keynote address given by ronald brachman to the first international conference on expert database systems in 1986we will refer to this corpus as rjb86the corpus will be described in more detail in section 5while the distinction between discourse and sentential usages sometimes seems quite clear from context in many cases it is notfrom the text alone example 6 is potentially ambiguous between a temporal reading of now and a discourse interpretationnow in at our approach is to look at a knowledge 
base as a set of symbolic items that represent somethingon the temporal reading example 6 would convey that at this moment the at approach to knowledge bases has changed on the discourse reading now simply initiates the topic of the at approach to knowledge basesin this paper we address the problem of disambiguating cue phrases in both text and speechwe present results of several studies of cue phrase usage in corpora of recorded transcribed speech in which we examined textbased and prosodic features to find which best predicted the discoursesentential distinctionbased on these analyses we present an intonational model for cue phrase disambiguation in speech based on prosodic phrasing and pitch accentwe associate this model with features identifiable from text analysis principally orthography and part of speech that can be automatically extracted from large corporaon a practical level this association permits the application of our findings to the identification and appropriate generation of cue phrases in synthetic speechon a more theoretical level our findings provide support for theories of discourse that rely upon the feasibility of cue phrase disambiguation to support the identification of discourse structureour results provide empirical evidence suggesting how hearers and readers may distinguish between discourse and sentential uses of cue phrasesmore generally our findings can be seen as a case study demonstrating the importance of intonational information to language understanding and generationin section 2 we review previous work on cue phrases and discuss the general problem of distinguishing between discourse and sentential usesin section 3 we introduce the theory of english intonation adopted for our prosodic analysis in section 4 we present our initial empirical studies which focus on the analysis of the cue phrases now and well in multispeaker spontaneous speechin section 5 we demonstrate that these results generalize to other cue phrases presenting results of a larger and more comprehensive study an examination of all cue phrases produced by a single speaker in a 75minute presentationfinally in section 6 we discuss the theoretical and practical applications of our findingsthe critical role that cue phrases play in understanding and generating discourse has often been noted in the computational linguistics literaturefor example it has been shown that cue phrases can assist in the resolution of anaphora by indicating the presence of a structural boundary or a relationship between parts of a discourse in example 7 interpretation of the anaphor it as coindexed with the system is facilitated by the presence of the cue phrases say and then marking potential antecedents in quotas an expert database for an expert systemquot as structurally unavailableif the system attempts to hold rules say as an expert database for an expert system then we expect it not only to hold the rules but to in fact apply them for us in appropriate situationshere say indicates the beginning of a discourse subtopic and then signals a return from that subtopicsince the potential but incorrect antecedents occur in the subtopic while the pronoun in question appears in the return to the major topic the incorrect potential antecedents can be ruled out on structural groundswithout such discourse segmentation the incorrect potential antecedents might have been preferred given their surface proximity and number agreement with the pronoun in questionnote that without cue phrases as explicit indicators of this topic 
structure one would have to infer the relationships among discourse segments by appeal to a more detailed analysis of the semantic content of the passagefor example in taskoriented dialogs planbased knowledge could be used to assist in the recognition of discourse structure however such analysis is often beyond the capabilities of current natural language processing systemsmany domains are also not taskorientedadditionally cue phrases are widely used in the identification of rhetorical relations among portions of a text or discourse and have been claimed in general to reduce the complexity of discourse processing and to increase textual coherence in natural language processing systems previous attempts to characterize the set of cue phrases in the linguistic and in the computational literature have typically been extensional with each cue phrase or set of phrases associated with one or more discourse or conversational functionsin the linguistic literature cue phrases have been the subject of a number of theoretical and descriptive corpusbased studies that emphasize the diversity of meanings associated with cue phrases as a class within an overarching framework of function such as discourse cohesiveness or conversational moves and the diversity of meanings that an individual item can convey in the computational literature the functions assigned to each cue phrase while often more specific than those identified in the linguistics literature are usually theory or domaindependentreichman and hobbs associate groups of cue phrases with the rhetorical relations among segments of text that they signal in these approaches the cue phrase taxonomy is dependent upon the set of rhetorical relations assumedalternatively cohen adopts a taxonomy of connectives based on quirk to assign each class of cue phrase a function in her model of argument understandinggrosz and sidner in their tripartite model of discourse structure classify cue phrases based on the changes they signal to the attentional and intentional stateszukerman presents a taxonomy of cue phrases based on three functions in the generation of tutorial explanations knowledge organization knowledge acquisition and affect maintenancetable 14 in the appendix compares the characterization of items classed as cue phrases in a number of these classification schemesthe question of cue phrase sense ambiguity has been noted in both the computational and the linguistic literature although only cursory attention has been paid to how disambiguation might take placea common assumption in the computational literature is that hearers can use surface position within a sentence or clause to distinguish discourse from sentential usesin fact most systems that recognize or generate cue phrases assume a canonical position for discourse cue phrases within the clause schiffrin also assumes that discourse uses of cue phrases are utterance initialhowever discourse uses of cue phrases can in fact appear noninitially in a clause as illustrated by the item say in example 8 however if we took that language and added one simple operator which we called restriction which allowed us for example to form relational concepts like say son and daughter that is a child who is always male or is always femalealso sentential usages can appear clause initially as in example 9 we have got to get to some inferential capabilityfurther meaning of the structures is crucially importantfurthermore surface clausal position itself may be ambiguous in the absence of orthographic 
disambiguationconsider example 10 evelyn i seeso in other words i will have to pay the full amount of the uh of the tax now what about pennsylvania state taxcan you give me any information on thathere now would be assigned a sentential interpretation if associated with the preceding clause i will have to pay the full amount of the tax now but a discourse interpretation if associated with the succeeding clause now what about pennsylvania state taxthus surface position alone appears inadequate to distinguish between discourse and sentential usagehowever when we listen to examples such as example 10 we have little difficulty in identifying a discourse meaning for nowsimilarly the potentially troublesome case cited in example 6 is easily disambiguated when one listens to the recording itselfwhat is missing from transcription that helps listeners to make such distinctions easilyhalliday and hassan note that their class of continuatives which includes items such as now of course well anyway surely and after all vary intonationally with respect to cohesive functionin particular continuatives are often quotreducedquot intonationally when they function quotcohesivelyquot to relate one part of a text to another unless they are quotvery definitely contrastivequot that is continuatives are unaccented with reduced vowel forms unless they are marked as unusually prominent intonationallyfor example they note that if now is reduced it can indicate quotthe opening of a new stage in the communicationquot such as a new point in an argument or a new incident in a storyon the other hand noncohesive uses which we would characterize as sentential tend to be of nonreduced accented formsso perhaps it is the intonational information present in speech but missing generally in transcription which aids hearers in disambiguating between discourse and sentential uses of cue phrasesempirical evidence from more general studies of the intonational characteristics of word classes tends to support this possibilitystudies of portions of the londonlund corpus such as altenberg have provided intonational profiles of word classes including discourse items conjunctions and adverbials that are roughly compatible with the notion that cue phrases tend to be deaccented although the notion of discourse item used in this study is quite restrictive1 however while the instance of now in example 6 is in fact reduced as halliday and hassan propose that in example 10 while interpreted as a discourse use is nonetheless clearly intonationally prominentfurthermore both of the nows in example 5 are also prominentso it would seem that intonational prominence alone is insufficient to disambiguate between sentential and discourse usesin this paper we present a more complex model of intonational features and textbased features that can serve to disambiguate between sentential and discourse instances of cue phrasesour model is based on several empirical studies two studies of individual cue phrases in which we develop our model and a more comprehensive study of cue phrases as a class in which we confirm and expand our modelbefore describing these studies and their results we must first describe the intonational features examined in our analysesthe importance of intonational information to the communication of discourse structure has been recognized in a variety of studies however just which intonational features are important and how they communicate discourse information is not well understoodprerequisite however to addressing these issues is the 
adoption of a framework of intonational description to identify which intonational features will be examined and how they will be characterizedfor the studies discussed below we have adopted pierrehumbert theory of english intonation which we will describe briefly belowin pierrehumbert phonological description of english intonational contours or tunes are described as sequences of low and high tones in the fundamental frequency contour the physical correlate of pitchthese tunes have as their domain the intonational phrase and are defined in terms of the pitch accent phrase accent and boundary tone which together comprise an intonational phraseone of the intonational features we examine with respect to cue phrases is the accent status of each cue that is whether or not the cue phrase is accented or made intonationally prominent and if it is accented what type of pitch accent it bearspitch accents usually appear as peaks or valleys in the fo contourthey are aligned with the stressed syllables of lexical items making those items prominentnote that while every lexical item in english has a lexically stressable syllable which is the rhythmically most prominent syllable in the word not every stressable syllable is in fact accented so lexical stress is distinguished from pitch accentlexical items that do bear pitch accents are said to be accented while those not so marked are said to be deaccenteditems that are deaccented tend to be function words or items that are given in a discourse for example in figure 1 now is deaccented while cue is accentedcontrast figure 1 with figure 2for ease of comparison we present fo contours of synthetic speech where the xaxis represents time and the yaxis frequency in hz2 in figure 1 the first fo peak occurs on let us in figure 2 the first peak occurred on nowthe most prominent accent in a phrase is termed the nuclear stress or nuclear h accent on now accent of the phrasein both figures 1 and 2 cue bears nuclear stressin addition to the fo excursions illustrated in figures 15 accented syllables tend to be longer and louder than deaccented syllables so there are a number of acoustic correlates of this perceptual phenomenonin pierrehumbert description of english there are six types of pitch accent all composed of either a single low or high tone or an ordered pair of low and high tones such as lh or hlin each case the tone aligned with the stressed syllable of the accented lexical item is indicated by a star thus if telephone is uttered with a lh accent the low tone is aligned with the stressed syllable tell and the h tone falls on the remainder of the wordfor simple pitch accents of course the an lh accent single tone is aligned with the stressthe pitch accents in pierrehumbert description of english include two simple tonesh and land four complex oneslh lh hl and hlthe most common accent h comes out as a peak on the accented syllable l accents occur much lower in the speaker pitch range than h and are phonetically realized as local fo minimathe accent on now in figure 3 is a lfigure 4 shows a version of the sentence in figures 13 with a lh accent on the first instance of nownote that there is a peak on now as there was in figure 2but now a striking valley occurs just before this peakjulia hirschberg and diane litman disambiguation of cue phrases in pierrehumbert and hirschberg a compositional approach to intonational meaning is proposed in which pitch accents are viewed as conveying information status such as newness or salience about the denotation of the 
accented items and the relationship of denoted entities states or attributes to speaker and hearer mutual beliefs about the discoursein particular it is claimed that speakers use h accents to indicate that an item represents new information which should be added to their mutual belief spacefor example standard declarative utterances in english commonly involve h accentsl accents on the other hand are used to indicate that an item is salient in the discourse but for some reason should not be part of what is added to the mutual belief space standard yesno question contour in english employs l accentsthe meanings associated with the hl accents are explained in terms of the accented item ability to be inferred from the mutual belief space hl items are marked as inferable from the mutual belief space but nonetheless part of what is to be added to that space hl accents are inferable and not to be added to speaker and hearer mutual beliefslh accents are defined in terms of the evocation of a scale defined as a partially ordered set following lh accents often associated with the conveyance of uncertainty or of incredulity evoke a scale but predicate nothing of the accented item with respect to the mutual belief space lh accents commonly associated with contrastive stress also evoke a scale but do add information about the accented item to speaker and hearer mutual belief space another intonational feature that is considered in our study of cue phrases is prosodic phrasingthere are two levels of such phrasing in pierrehumbert theory the intonational phrase and the intermediate phrase a smaller subunita wellformed intermediate phrase consists of one or more pitch accents plus a high or low phrase accentthe phrase accent controls the pitch between the last pitch accent of the current intermediate phrase and the beginning of the nextor the end of the utterancean intonational phrase is composed of one of more intermediate phrases plus a boundary toneboundary tones may be high or low also and fall exactly at the edge of the intonational phraseso each intonational phrase ends with a phrase accent and a boundary tonea given sentence may be uttered with considerable variation in phrasingfor example the utterance in figure 2 was produced as a single intonational phrase whereas in figure 5 now is set off as a separate phraseintuitively prosodic phrases divide an utterance into meaningful quotchunksquot of information variation in phrasing can change the meaning hearers assign to tokens of a given sentencefor example the interpretation of a sentence like bill does not drink because he is unhappy is likely to change depending upon whether it is uttered as one phrase or twouttered as a single phrase this sentence is commonly interpreted as conveying that bill does indeed drinkbut the because of his drinking is not his unhappinessuttered as two phrases it is more likely to convey that bill does not drinkand the reason for his abstinence is his unhappinessin effect variation in phrasing appears to change the scope of negation in the sentencewhen the sentence is uttered as a single phrase the negative is interpreted as having wide scopeover the entire phrase and thus the entire sentencewhen bill does not drink is separated from the second clause by a phrase boundary the scope of negation is limited to just the first clausethe occurrence of phrase accents and boundary tones in the fo contour together with other phrasefinal characteristics such as pause decrease in amplitude glottalization of phrasefinal syllables 
and phrasefinal syllable lengthening enable us to identify intermediate and intonational phrases in natural speechidentification of pitch accents and phrase boundaries using a prosodic transcription system based on the one employed here has been found to be quite reliable between transcribers3 meaningful intonational variation has been found in studies of phrasing choice of accent type and location overall tune type and variation in pitch range where the pitch range of an intonational phrase is defined by its toplineroughly the highest peak in the fo contour of the phraseand the speaker baseline the lowest point the speaker realizes in normal speech measured across all utterancesin the studies described below we examined each of these features in addition to textbased features to see which best predicted cue phrase disambiguation and to look for associations among textbased and intonational featuresour first study of cue phrase disambiguation investigated multispeaker usage of the cue phrase now in a recorded transcribed radio callin program our corpus consisted of four days of the radio callin program quotthe harry gross show speaking of your moneyquot recorded during the week of february 1 1982 in this philadelphia program gross offered financial advice to callers for the february 3 show he was joined by an accountant friend fred levythe four shows provided approximately ten hours of conversation between expert and callersthe corpus was transcribed by martha pollack and julia hirschberg in 1982 in connection with another studywe chose now for this initial study for several reasonsfirst the corpus contained numerous instances of both discourse and sentential usages of now second now often appears in conjunction with other cue phrases eg well now ok now right nowthis allowed us to study how adjacent cue phrases interact julia hirschberg and diane litman disambiguation of cue phrases with one anotherthird now has a number of desirable phonetic characteristicsas it is monosyllabic possible variation in stress patterns do not arise to complicate the analysisbecause it is completely voiced and introduces no segmental effects into the fo contour it is also easier to analyze pitch tracks reliablyour model was initially developed from a sample consisting of 48 occurrences of nowall the instances from two sides of tapes of the show chosen at randomtwo instances were excluded since the phrasing was difficult to determine due to hesitation or interruptionto test the validity of our initial hypotheses we then replicated our study with a second sample from the same corpus the first 52 instances of now taken from another four randomly chosen sides of tapeswe excluded two tokens from these tapes because of lack of available information about phrasing or accent and five others because we were unable to decide whether the tokens were discourse or sententialour data analysis included the following stepsfirst the authors determined separately and by ear whether individual tokens were discourse or sentential usages and tagged the transcript of the corpus accordinglywe then digitized and pitchtracked the intonational phrase containing each token plus the preceding and succeeding intonational phrases if produced by the same speakerintonational features were determined by one of the authors from the speech and pitch tracks separately from the discoursesentential judgmentdiscourse and sentential uses were then compared along several dimensions of these comparisons the first three turned out to distinguish 
between discourse and sentential now quite reliablyin particular a combination of accent type phrasal composition and phrasal position reliably distinguished between the tokens in the corpusof the 100 tokens of now from the combined 48 and 52token corpora just over onethird of our samples were judged to be sentential and just under twothirds discoursethe first striking difference between the two appeared in the composition of the intermediate phrase containing the item as illustrated in table 1of all the 4 the pitch tracks in the first two studies were produced with a pitch tracker written by mark libermanfor the third study we used a pitch tracker written by david talkin and waves speech analysis software in our prosodic analysis sentential uses of now only one appeared as the only item in an intermediate phrase while 26 discourse nows represented entire intermediate phrasesof these 26 one half constituted the only lexical item in a full intonational phraseso our findings suggested that now set apart as a separate intermediate phrase is very likely to be interpreted as conveying a discourse meaning rather than a sentential oneanother clear distinction between discourse and sentential now emerged when we examined the surface position of now within its intermediate phraseas table 2 illustrates 62 of the 63 discourse nows were firstinphrase absolutely first or followed only another cue phrase in their intermediate phrase of these 59 were also absolutely first in their intonational phrase that is first in major prosodic phrase and not preceded by any other cue phrasesonly five sentential tokens were firstinphrasealso while 22 sentential nows were phrase final only one discourse token was so positionedso once intermediate phrases are identified discourse and sentential now appear to be generally distinguishable by position within the phrasefinally discourse and sentential occurrences were distinguishable in terms of presence or absence of pitch accentand by type of pitch accent where accentedbecause of the large number of possible accent types and since there are competing reasons to accent or deaccent items such as accenting to indicate contrastive stress or deaccenting to indicate an item is already given in the discourse we might expect these findings to be less clear than those for phrasingin fact although their interpretation is more complicated the results are equally strikingresults of an analysis of the 97 occurrences from this sample for which accent type could be precisely determined are presented in table 3of those tokens not included two discourse tokens were judged either l or h with a compressed pitch range and one discourse token was judged either deaccented or lnote first that large numbers of discourse and sentential tokens were uttered with a h or complex accent16 discourse and 32 sentential tokensthe chief similarity here lies in the use of the h accent type with 14 discourse uses and 14 sentential 7 other sentential tokens are ambiguous between h and complexnote also that discourse now was much more likely overall to be deaccented31 of the 60 discourse tokens accenting of now in larger intonational phrases n72deaccented h or complex l sentential 5 31 0 discourse 31 0 5 versus 5 of the 37 sentential nows no sentential now was uttered with a l accentalthough 13 discourse nows werean even sharper distinction in accent type is found if we exclude those nows that are alone in intermediate phrase from the analysisrecall from table 1 that all but one of these tokens represented a 
discourse usethese nows were always accented since it is generally the case that each intermediate phrase contains at least one pitch accentof the discourse tokens representing entire intermediate phrases for which we can distinguish accent type precisely 14 bore h accentsthis suggests that one similarity between discourse and sentential nowthe frequent h accentmight disappear if we limit our comparison to those tokens forming part of larger intonational phrasesin fact such is the case as is shown in table 4the majority 31 of sentential nows forming part of larger intonational phrases received a h or complex pitch accent while all 36 discourse nows forming part of larger intonational phrases were deaccented or bore a l accentin fact those discourse nows not distinguishable from sentential by being set apart as separate intonational phrases were generally so distinguishable with respect to pitch accentof the three discourse tokens whose pitch accent type was not identifiable which were omitted from table 3 two were set apart as separate intonational phrases and one was judged either to bear a l pitch accent or to be deaccentedthus all three could be distinguished from sentential tokens in terms of accent type and phrasingfurthermore of the five deaccented sentential nows in table 4 none was firstinphrase while only one of the deaccented discourse tokens was similarly noninitialin fact of the 100 tokens in our initial study of now all but two were distinguishable as discourse or sentential in terms of a combination of position in phrase phrasal composition and accentthus we were able to hypothesize from our study of now that discourse uses were either uttered as a single intermediate phrase or in a phrase containing only cue phrases or uttered at the beginning of a longer intermediate phrase or preceded only by other cue phrases in the phrase and with a l pitch accent or without a pitch accent 5 only one of the 37 cue phrases judged to be of sentential type was uttered as a single phraseif firstinphrase they were nearly always uttered with a h or complex pitch accent if not firstinphrase they could bear any type of pitch accent or be deaccented these results are summarized in figure 6since the preponderance of tokens in our sample from one professional speaker might well skew our results we compared characteristics of phrasing and accent for host and nonhost datathe results showed no significant differences between host and caller tokens in terms of the hypotheses proposed abovefirst host and callers produced discourse and sentential tokens in roughly similar proportions405 sentential for the host and 349 for his callerssimilarly there was no distinction between host and nonhost data in terms of choice of accent type or accenting versus deaccentingour findings for position within phrase also hold for both host and nonhost datahowever in tendency to set discourse now apart as a separate intonational or intermediate phrase there was an interesting distinctionwhile callers tended to choose from among the two options for discourse now in almost equal numbers the host chose this option only 273 of the timehowever although host and caller data differed in the proportion of occurrences of the two classes of discourse now that emerge from our data as a whole the existence of the classes themselves was confirmedwhere the host did not produce discourse nows set apart as separate intonational or intermediate phrases he always produced discourse nows that were deaccented or accented with a l accentwe 
hypothesize then that while individual speakers may choose different strategies to realize discourse now they appear to choose from among the same limited number of optionsour conclusion from this study that intonational features play a crucial role in the distinction between discourse and sentential usage in speech clearly poses problems for textdo readers use strategies different from hearers to make this distinction and if so what might they beare there perhaps orthographic correlates of the intonational features that we have found to be important in speechas a first step toward resolving these questions we examined the orthographic features of the transcripts of our corpus which as noted in section 3 had been prepared independently of this study and without regard for intonational analysiswe examined transcriptions of all tokens of now in our combined sample to determine whether prosodic phrasing was reliably associated with orthographic markingthere were no likely orthographic clues to accent type or placement such as capitalization in the transcriptsof all 60 instances of now that were absolutely first in their intonational phrase 34 were preceded by punctuationa comma dash or end punctuationand 17 were first in speaker turn and thus orthographically marked by indication of speaker nameso in 51 cases first position in intonational phrase coincided with orthographic indicators in the transcriptno now that was not absolutely first in its intonational phrasefor example none that was merely first in its intermediate phrasewas so markedof those 23 nows coming last in an intermediate or intonational phrase however only 14 were immediately followed by a similar orthographic cluefinally of the 13 instances of now that formed separate intonational phrases only two were distinguished orthographically by being both preceded and followed by some orthographic indicatorand none of the nows that formed complete intermediate phrases but not complete intonational phrases was so markedthese findings suggest that of the intonational features we found useful in disambiguating cue phrases in speech only the feature first in intonational phrase has any clear orthographic correlatethis correlation however seems potentially to be a useful oneof the 63 discourse nows in our corpus recall that 59 were first in their intonational phraseof these 59 48 were preceded by orthographic indicators in the transcription as described aboveof sentential cues 22 were last in their intermediate phrase and of these 13 were followed by some orthographic indicator in the transcriptionof 34 cue phrases that were neither preceded nor followed by orthographic markings in the transcription the majority were sentential usesif we predict sententialdiscourse usage based simply on the presence or absence of preceding and succeeding orthographic markings we would predict that cue phrases preceded by orthographic indicators represent discourse uses and that phrases either followed by orthographic indicators or neither preceded nor followed would be sentential uses for a total of 82 correct predictions for the 100 cue phrases in this studythus 82 of nows might be orthographically distinguishedwe will have more to say on the role of orthography in disambiguating cue phrases in connection with the study described in section 5based on the findings of our study of now we proposed that listeners may use prosodic information to disambiguate discourse from sentential uses of cue phrases however although we chose to study now for its ambiguity 
between discourse and sentential uses it may of course also be seen as representative of sense ambiguities between temporals and nontemporals or deictics and nondeicticsthus if indeed our findings generalize it might be to a class we had not intended to investigateto discover further evidence that our results did indeed apply to the discoursesentential use disambiguation we conducted another multispeaker study this time of the discourse and sentential uses of the single cue phrase wellagain our corpus consisted of recordings of the harry gross radio callin programin addition we used tokens from several other corpora of recorded transcribed speech including the corpus described in section 5this time we included no more than three tokens from any speaker to minimize the potential effect of speaker idiosyncracyour findings for this study of well were almost identical to results from the earlier study of now described abovebriefly of the 52 instances of well we examined all but one token fit the model constructed from the results of the now study depicted in figure 6in particular of the 25 sentential uses of well none constituted a single intermediate or intonational phraseonly two sentential tokens were firstinphrase and both of these bore h pitch accentshowever of the 27 discourse tokens of well 14 were indeed alone in their intonational or intermediate phrasesall of the remaining 13 occurred firstinphrase and of these 12 were deaccentedin all 51 of the tokens in this study fit our model the single counterexample was one discourse token which bore a h pitch accent and was part of a larger phraseour study of well thus appeared to confirm our earlier results and in particular to lend support to our hypothesis that cue phrases can be distinguished intonationallyhowever although we had shown that two cue phrases appeared to pattern similarly in this respect we had still not demonstrated that our model could be extended to cue phrases in generalto address this larger issue we next conducted a singlespeaker multicue phrase studyin this study we examined all cue phrases consisting of a single lexical item that were produced by one speaker during 75 minutes approximately 12500 words of recorded speechresults of a pilot study of this corpus are reported in litman and hirschberg we limited ourselves here to the examination of single lexical items since the hypothesis we had previously developed applies only to such items eg it would be meaningless to ask whether a larger phrase bears a pitch accent or notthe corpus consisted of a keynote address given from notes by ronald brachman at the first international conference on expert database systems in 1986this talk yielded 953 tokens based upon a set of possible cue phrases derived from cohen grosz and sidner litman and hirschberg reichman schiffrin warner and zuckerman and pearl the frequency distribution of the tokens is shown in table 5by far the most frequent cue phrase occurring in our corpus is the conjunction and representing 320 tokensthe next most frequent item is now with only 69 occurrencesother items occurring more than 50 times each in the corpus are but like or and sonote that there are 444 conjunctionsand but and orcomprising nearly half of the cue phrases in our corpusin addition to the items shown in table 5 we searched the corpus unsuccessfully for instances of the following cue phrases proposed in the literature accordingly alright alternately alternatively altogether anyway boy consequently conversely fine furthermore gee hence hey 
incidentally likewise listen meanwhile moreover namely nevertheless nonetheless nor oh though yethowever note that the set of items included in table 14 is not identical to the set we have considered in this paperin particular we do consider the items actually basically essentially except generally no right since and yes although they are not considered in the studies included in table 14we do not consider again equally hopefully last only overall still thus too unless where whereas and why although these have been included by others in the set of possible cue phrasesthe temporal pattern of cue phrase use in the corpus exhibits some interesting featureswhile tokens were distributed fairly evenly during the middle portion of the talk the first and last portions were less regularthe first decile of the transcript defined by length in words contained 140 cue phrases a higher proportion than any other decile of the corpus while the second decile contained only 73 and the last decile of the talk contained an even lower proportion of cue phrases only 64 so it appears that at least for this genre cue phrases occur more frequently in the introductory remarks and less frequently in the conclusionto classify each token as discourse or sentential the authors separately judged each one by ear from the taped address while marking a transcriptionwhere we could not make a decision we labeled the token ambiguous so any token could be judged quotdiscoursequot quotsententialquot or quotambiguousquot the address was transcribed independently of our study by a member of the text processing pool at att bell laboratoriesin examining the transcription we found that 39 cue phrases had been omitted by the transcriber one token each of actually essentially or and well three tokens each of so and ok nine tokens of and and twenty tokens of nowit seemed significant that all but five of these were subsequently termed discourse uses by both judgesthat is that discourse uses seemed somehow omissible to the transcriberone of the authors then assessed each token prosodic characteristics as described in section 4in examining our classification judgments we were interested in areas of disagreement as well as agreementthe set of tokens whose classification we both agreed upon and found unambiguous provided a testbed for our investigation of the intonational features marking discourse and sentential interpretationwe examined the set of tokens one or both of us found ambiguous to determine how intonation might in fact have contributed to that ambiguitytable 6 presents the distribution of our judgments where classifiable includes those tokens we both assigned either discourse or sentential ambiguous identifies those we both were unable to classify partial disagreement includes those only one of us was able to classify and complete disagreement represents those tokens one of us classified as discourse and the other as sententialof the 953 tokens in this corpus we agreed in our judgments of 878 cue phrases as discourse or sententialanother 59 tokens we both judged ambiguouswe disagreed on only 16 items for 11 of these the disagreement was between classifiable and ambiguouswhen we examined the areas of ambiguity and disagreement in our judgments we found that a high proportion of these involved judgments of coordinate conjunction tokens and or and but which as we previously noted represent nearly half of the tokens in this studytable 6 shows that comparing conjunction with nonconjunction we agreed on the classification of 495 
nonconjunction tokens but only 383 conjunctionswe both found 48 conjunctions ambiguous but only 11 nonconjunctions 48 of the 59 tokens we agreed were ambiguous in the corpus were in fact coordinate conjunctionsof the 16 tokens on which we simply disagreed 13 were conjunctionsthe fact that conjunctions account for a large number of the ambiguities we found in the corpus and the disagreements we had about classification is not surprising when we note that the discourse meanings of conjunction as described in the literature seem to be quite similar to the meanings of sentential conjunctionfor example the discourse use of and is defined as parallelism in cohen a marker of addition or equential continuity in schriffin and conjunction in warner these definitions fail to provide clear guidelines for distinguishing discourse uses from sentential as in cases such as example 11 here while the first and seems intuitively sentential the second is much more problematicbut instead actually we are bringing some thoughts on expert databases from a place that is even stranger and further away and that of course is the magical world of artificial intelligencehowever while similarities between discourse and sentential interpretations appear to make conjunction more difficult to classify than other cue phrases the same similarities may make the need to classify them less important from either a text generation or a text understanding point of viewonce we had classified the tokens in the corpus we analyzed them for their prosodic and syntactic features as well as their orthographic context in the same way we had examined tokens for the earlier two studiesin each case we noted whether the cue phrase was accented or not and if accented we noted the type of accent employedwe also looked at whether the token constituted an entire intermediate or intonational phrasepossibly with other cue phrasesor not and what each token position within its intermediate phrase and larger intonational phrase wasfirstinphrase last or otherwe also examined each item part of speech using church partofspeech taggerfinally we investigated orthographic features of the transcript that might be associated with a discoursesentential distinction such as immediately preceding and succeeding punctuation and paragraph boundariesin both the syntactic and orthographic analyses we were particularly interested in discovering how successful nonprosodic features that might be obtained automatically from a text would be in differentiating discourse from sentential uses51 results of the intonational analysis we looked first at the set of 878 tokens whose classification as discourse or sentential we both agreed uponour findings from this set confirmed the prosodic model we found in the studies described above to distinguish discourse from sentential uses successfullythe distribution of these judgments with respect to the prosodic model of discourse and sentential cue phrases depicted in figure 6 is shown in table 7recall that the prosodic model in figure 6 includes the following intonational profiles discourse type a in which a cue phrase constitutes an entire intermediate phrase or is in a phrase containing only other cue phrases and may have any type of pitch accent discourse type b in which a cue phrase occurs at the beginning of a larger intermediate phrase or is preceded only by other cue phrases and bears a l pitch accent or is deaccented sentential type a in which the cue phrase occurs at the beginning of a larger phrase and bears a h or complex 
pitch accent and sentential type b in which the cue phrase occurs in noninitial position in a larger phrasetable 7 shows that our prosodic model fits the new data reasonably well successfully predicting 662 of the classified tokensof the 341 cue phrases we both judged discourse 301 fit the prosodic discourse model 50 of these were of discourse type a and 251 were of discourse type bof the 537 tokens we both judged sentential 361 fit one of the prosodic sentential modelsthe overall ratio of cue phrases judged discourse to those judged sentential was about 23a x2 test shows significance at the 001 levelwhile these results are highly significant they clearly do not match the previous findings for now and well discussed in section 4 in which all but three tokens fit our modelso for this larger study the tokens which did not fit our prosodic model remain to be explainedin fact there is some regularity among these counterexamplesfor example 8 of the items judged discourse that did not fit our discourse prosodic model were tokens of the cue phrase sayall of these failed to fit our prosodic discourse model by virtue of the fact that they occurred in noninitial phrasal position such items are illustrated in example 8of the 176 items judged sentential that failed to fit our sentential prosodic model 138 were conjunctionsof these 11 fit the discourse type a prosodic model and 127 fit the discourse type b modelboth judges found such items relatively difficult to distinguish between discourse and sentential use as discussed abovetable 8 shows how judgments are distributed with respect to our prosodic model when coordinate conjunctions are removed from the sampleour model thus predicts 422 of nonconjunction cue phrase distinctions somewhat better than the 662 successful predictions for all classified cue phrases as shown in table 7our prosodic model itself can of course be decomposed to examine the contributions of individual features to discoursesentential judgmentstable 9 shows the distribution of judgments by all possible feature complexes for all tokensnote that four cells are empty since all items alone in their intermediate phrase must perforce come first in itthis distribution reveals that there is considerable agreement when cue phrases appear alone in their intermediate phrase such items are most frequently judged to be discourse usesthere is also considerable agreement on the classification of the tokens between the authors in such casesthere is even greater agreement when cue phrases appear in noninitial position in a larger intermediate phrase these tend to be judged sententialwhen the token is deaccented or receives a complex or high accent the fit with the model as well as the agreement figures on classification are especially strikinga small majority of tokens in the l accent class do not fit the sentential prosodic model note that the agreement feature complexes are coded as follows initial 0 or noconsists of a single intermediate phrase or notmedial f or nfappears firstinphrase or notfinal d h l or cdeaccented or bears a h l or complex pitch accent level producing this classification was goodhowever as with the ofd subtype of discourse type a which also has the worst results for its class we have the fewest tokens for this prosodic typetokens that fit discourse type b in figure 6first in a larger phrase and deaccented or first in a larger phrase and bearing a l accent appear more problematic of the former there was more disagreement than agreement between the judge classification and 
the prosodic prediction of the classificationand of the 153 sentential items that fit this discourse prosodic model 127 are conjunctionsthe level of disagreement for the judge classifications was also highest for discourse type bwhile there is more agreement that tokens corresponding to sentential model a and characterized as nofhfirst in a larger phrase with a h accentor nofcfirst in a larger phrase and bearing a complex pitch accent are sentential this agreement is certainly less striking than in the case of tokens corresponding to sentential model b and characterized here as nonfnoninitial in a larger phrase with any type of pitch accentsince discourse type b and sentential type a differ from each other only in type of pitch accent we might conclude that the pitch accent feature is not as powerful a discriminator as the fact that a potential cue phrase is alone in its intermediate phrase or firstinphrasefinally table 10 presents a breakdown by lexical item of some of the data in table 9in this table we show the prosodic characteristics of classified cue phrases indicating the number of items that fit our prosodic models and which models they fit and the number that did notfirst note that some cue phrases in our singlespeaker study were always identified as sentential actually also because except first generally look next no right second see since therefore and yesa few were only identified as discourse finally however and okin section 42 we examined the possibility that different speakers might favor one prosodic strategy for realizing discourse or sentential usage over another based on the data used in our study of nowoverall the speaker in rjb86 favored the prosodic model discourse b over discourse a for cue uses in 251 casesfor sentential uses this speaker favored the sentential a model slightly over sentential b employing the former in 204 of caseshowever it is also possible that a speaker might favor prosodic strategies that are specific to particular cue phrases to convey that they are discourse or sententialfor example from table 10 we see that most discourse uses of all coordinate conjunctions fit our prosodic model discourse b while all occurrences of finally and further fit discourse aof cue phrases classified as sentential actually first look right say see so well most frequently fit sentential a while and most frequently fits sentential bas in our previous studies we also examined potential nonprosodic distinctions between discourse and sentential usesof the orthographic and syntactic features we examined we found presence or absence of preceding punctuation and part of speech to be most successful in distinguishing discourse from sentential useswe also examined how and when cue phrases occurred adjacent to other cue phrasesalthough the data are sparseonly 118 of our tokens occurred adjacent to other cue phrases they suggest that cooccurrence data may provide information useful for cue phrase disambiguationin particular of the 26 discourse usages of cue phrases preceded by other classifiable cue phrases 20 were also discourse usagessimilarly out of 29 sentential usages preceded by a classified cue 21 were preceded by another sentential usewith respect to classified cue phrases that were followed by other classified cue phrases 20 out of 28 discourse usages were followed by a discourse usage while 21 out of 27 sentential usages were followed by other sentential usestable 11 presents the orthography found in the transcription of the cue phrases present in the recorded 
speechthe orthographic markers used by the transcriber include commas periods dashes and paragraph breaksfor the 843 tokens536 judged sentential and 307 judged discoursewhose classification both judges agreed upon and excluding those items that the transcriber omitted orthography or its absence is a useful predictor of discourse or sentential usein particular of the 213 tokens preceded by punctuation 176 are discourse usagesnote however that many discourse usages are not marked by preceding orthography the 176 marked tokens represent only 573 of all discourse uses in this sampleonly 37 of sentential usages were also preceded by orthographic indicatorstwelve tokens that are succeeded but not preceded by orthographic markings are discourse and 21 are sententialall of the tokens in rjb86 that are both preceded and succeeded by orthography are discourse usages although again these 25 tokens represent only 81 of the discourse tokens in the sampleso the presence of preceding orthographic indicatorsespecially in conjunction with succeeding indicatorsappears to be a reliable textual indicator that a potential cue phrase should be interpreted as a discourse use predicting correctly in 176 caseswhile we found that discourse uses are not always reliably marked by such indicators in the rjb86 transcription it is possible to predict the discoursesentential distinction from orthography alone for this corpus in 675 casesin our study of now described in section 43 we found that in 51 cases cue phrases that were first in intonational phrase were marked orthographicallyin the current singlespeaker study first position in intonational phrase was orthographically marked in only 199 of 429 or 464 of casesso in this study the association between position in intonational phrase and orthographic marking appears much weakerwe also found that part of speech could be useful in distinguishing discourse from sentential usagealthough less useful than orthographic cuesas shown in table 127 if we simply predict discourse or sentential use by the assignment most frequently associated with a given part of speech church partofspeech algorithm predicts discourse or sentential use in 561 cases for tokens where both judges agreed on discoursesentential assignmentfor example we assume that since the majority of conjunctions and verbs are judged sentential these parts of speech are predictors of sentential status and since most adverbials are associated with discourse uses these are predictors of discourse status and so onif we employ both orthographic indicators and part of speech as predictors of the discoursesentential distinction we achieve only slightly better prediction than with orthographic cues alonethat is if we consider both an item partofspeech tag and adjacent orthographic indicators we model the rjb86 data only marginally more accuratelytable 13 models correctly 677 transcribed classified tokens in rjb86 from orthographic and partofspeech informationfor example given a coordinating conjunction our model would predict that it would be a discourse use if preceded by orthography and a sentential use otherwisein fact the only difference from orthography alone is the way succeeding orthography can signal a discourse use for a singular or mass noun and a sentential use for adverbswhile the use of orthographic and partofspeech data represents only a fractional improvement over orthographic information alone it is possible that since the latter is not subject to transcriber idiosyncracy such an approach may prove more 
reliable than orthography alone in the general caseand for texttospeech applications it is not clear how closely orthographic conventions for unrestricted written text will approximate the regularities we have observed in our transcribed corporaour findings for our singlespeaker multicue phrase study support the intonational model of discoursesentential characteristics of cue phrases that we proposed based on our earlier multispeaker singlecue phrase studies of now and well in each study discourse uses of cue phrases fit one of two prosodic models in one the cue phrase was set apart as a separate intermediate phrase possibly with other cue phrases in the other the cue phrase was firstinphrase possibly preceded by other cue phrases and either was deaccented or bore a l pitch accentsentential uses also fit one of two prosodic models in both they were part of a larger intermediate phrasein one model they were firstinphrase and bore a h or complex pitch accentthus distinguishing them from discourse uses that were firstinphrasein the other they were not firstinphrase and bore any type pitch accentthe association between discoursesentential models and discoursesentential judgments for this study as for our previous studies of now and well is significant at the 001 levelhowever for the singlespeaker multicue phrase data in rjb86 our prosodic models successfully classified only 662 tokens a considerably smaller proportion than for the previous studieswe found one major reason for the poorer performance of our models on the multicue phrase dataa large percentage of the tokens that do not fit our prosodic models were coordinate conjunctionswhen these are removed from our sample our prosodic models correctly classify 442 tokens it is also worth noting that coordinate conjunctions were among the most difficult cue phrases to classify as discourse or sententialto improve our notion of the factors that distinguish discourse from sentential uses we made a more general examination of the set of items that we were unable to classifyin addition to the finding that conjunctions were difficult to classify we also found that certain prosodic configurations appeared to make tokens more or less difficult to classifyof the 75 unclassified tokens for rjb86 55 were tokens of discourse model b or sentential model arecall that discourse model b identifies items that are firstinphrase and are deaccented or bear a l pitch accent sentential model a identifies items that are also firstinphrase but bear a h or complex pitch accentdiscourse model a items that are alone in intermediate phrase and sentential model b items that are not firstinphrase appear easier to classifythus it appears that prosodic configurations that are distinguished solely by differences in pitch accent rather than upon differences in phrasing and position within a phrase may be less useful indicators of the discoursesentential distinctionfurthermore we found that orthographic cues successfully disambiguate between discourse and sentential usage in 675 cases part of speech was less successful in distinguishing discourse from sentential use disambiguating only 561 cases in the study using both orthography and part of speech for predicting the discoursesentential distinction in our corpus was nearly equivalent to using orthography alone predicting 677 cases correctlythe relationship between the orthography of transcription and the orthography of written text will be an important determinant of whether orthography alone can be used for prediction in 
texttospeech applications if the latter is less useful partofspeech may provide additional powerin this paper we have examined the problem of disambiguating cue phrases in both text and speechwe have presented results of several analyses of cue phrase usage in corpora of recorded transcribed speech in which we examined a number of textbased and prosodic features to find which best predicted a discoursesentential distinctionbased on these studies we have proposed an intonational model for cue phrase disambiguation in speech based on intonational phrasing and pitch accent and a model for cue phrase disambiguation in text based on orthographic indicators and partofspeech informationwork on the meanings associated with particular intonational features such as phrasing and pitch accent type provides an explanation for the different prosodic configurations associated with discourse and sentential uses of cue phrasesas we have demonstrated above discourse uses of cue phrases fit one of two modelsin one model discourse model a discourse uses are set apart as separate intermediate phrasesrecall from section 3 that intonational phrasing can serve to divide speech into units of information for purposes such as scope disambiguationso a broader discourse scope for a cue phrase may be signalled by setting it apart from other items that it might potentially modify if interpreted more narrowlythat is in an utterance such as now let us talk about cue phrases now may be more likely to be interpreted in its discourse sense if it is physically set apart from the verb it might otherwise modify in its sentential guisewe have also seen that a discourse cue phrase may be part of a larger intermediate phrase and deaccented or given a l pitch accentdiscourse model bwhile the absence of a pitch accent generally tends to convey that an item represents old information or is inferrable in the discourse deaccenting is also frequently associated with function wordsprepositions pronouns and articlescue phrases in the deaccented subset of discourse model b may like function words be seen as conveying structural information rather than contributing to the semantic content of an utterancethe alternative version of discourse model b in which a cue phrase that is part of a larger phrase receives a l pitch accent might be understood in terms of the interpretation proposed by pierrehumbert and hirschberg for the l accentin this account the l accent is analyzed as conveying that an item is salient in the discourse but for some reason should not be added to speaker and hearer mutual belief spacethis subset of discourse model b cue phrases may thus be analyzed as conveying salient information about the discourse but not adding to the semantic content of speaker and hearer beliefsthe textbased and prosodic models of cue phrases we have proposed from our studies of particular cue phrases spoken by multiple speakers and of multiple cue phrases spoken by a single speaker have both practical and theoretical importfrom a practical point of view the construction of both textbased and prosodic models permit improvement in the generation of synthetic speech from unrestricted textfrom our text based model we know when to convey a discourse or a sentential use of a given cue phrasefrom our prosodic model we know how to convey such a distinctionthese distinctions have in fact been implemented in a new version of the bell labs texttospeech system from a theoretical point of view our findings demonstrate the feasibility of cue phrase 
disambiguation in both text and speech and provide a model for how that disambiguation might be accomplishedthese results strengthen the claim that the discourse structures crucial to computational models of interaction in this case certain lexical indicators of discourse structure can indeed be identifiedwe thank ron brachman for providing one of our corpora and jan van santen for helpful comments on this workthis work was partially supported by darpa under contract n0003984c0165
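To make the prosodic model concrete, the following sketch implements the four profiles of figure 6 as a small decision procedure. It is a minimal illustration only: the function and feature names (alone_in_phrase, first_in_phrase, accent) are ours, not the authors', and the feature values would in practice come from a prosodic labelling of the speech rather than from text.

# A minimal sketch of the prosodic model in figure 6, under the assumption
# that phrasing and pitch-accent features have already been labelled.

def classify_cue_phrase(alone_in_phrase, first_in_phrase, accent):
    """Classify a single-word cue phrase as a discourse or sentential use.

    alone_in_phrase: token forms a complete intermediate/intonational phrase
                     (possibly together with other cue phrases)
    first_in_phrase: token is first in a larger intermediate phrase
                     (possibly preceded only by other cue phrases)
    accent: one of "deaccented", "L*", "H*", "complex"
    """
    if alone_in_phrase:
        return ("discourse", "A")      # any pitch accent type is allowed here
    if first_in_phrase:
        if accent in ("deaccented", "L*"):
            return ("discourse", "B")
        return ("sentential", "A")     # H* or complex pitch accent
    return ("sentential", "B")         # non-initial position, any accent


if __name__ == "__main__":
    # "now" set apart as its own intermediate phrase -> discourse type A
    print(classify_cue_phrase(True, True, "H*"))
    # a cue phrase first in a larger phrase with a L* accent -> discourse type B
    print(classify_cue_phrase(False, True, "L*"))
    # a cue phrase later in a larger phrase -> sentential type B
    print(classify_cue_phrase(False, False, "H*"))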
J93-3003
Empirical studies on the disambiguation of cue phrases. Cue phrases are linguistic expressions such as now and well that function as explicit indicators of the structure of a discourse. For example, now may signal the beginning of a subtopic or a return to a previous topic, while well may mark subsequent material as a response to prior material or as an explanatory comment. However, while cue phrases may convey discourse structure, each also has one or more alternate uses: while incidentally may be used sententially as an adverbial, for example, the discourse use initiates a digression. Although distinguishing discourse and sentential uses of cue phrases is critical to the interpretation and generation of discourse, the question of how speakers and hearers accomplish this disambiguation is rarely addressed. This paper reports results of empirical studies on discourse and sentential uses of cue phrases, in which both text-based and prosodic features were examined for disambiguating power. Based on these studies, it is proposed that discourse versus sentential usage may be distinguished by intonational features, specifically pitch accent and prosodic phrasing. A prosodic model that characterizes these distinctions is identified. This model is associated with features identifiable from text analysis, including orthography and part of speech, to permit the application of the results of the prosodic analysis to the generation of appropriate intonational features for discourse and sentential uses of cue phrases in synthetic speech. In the literature there is still no consistent definition for discourse markers. We find that intonational phrasing and pitch accent play a role in disambiguating cue phrases and hence in helping determine discourse structure.
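The text-based model mentioned above can likewise be sketched as a simple rule. The code below is illustrative rather than a reproduction of the paper's decision table: the treatment of preceding orthography follows the discussion in section 5, but the particular part-of-speech refinements shown are assumptions chosen only to indicate how such a predictor would be organized.

# A minimal sketch of a text-based (orthography + part-of-speech) predictor.
# The per-tag refinements below are illustrative assumptions, not the paper's
# full decision table.

def predict_use(pos_tag, preceded_by_orthography, followed_by_orthography):
    """Predict 'discourse' or 'sentential' use of a candidate cue phrase
    from transcript orthography and a coarse part-of-speech tag."""
    # Preceding punctuation (comma, dash, end punctuation, paragraph break)
    # is the strongest single indicator of a discourse use.
    if preceded_by_orthography:
        return "discourse"
    # Example of the kind of refinement part of speech can add: succeeding
    # orthography signalling a discourse use for a noun reading (illustrative).
    if followed_by_orthography and pos_tag == "NN":
        return "discourse"
    return "sentential"


if __name__ == "__main__":
    print(predict_use("CC", preceded_by_orthography=True,
                      followed_by_orthography=False))   # discourse
    print(predict_use("RB", preceded_by_orthography=False,
                      followed_by_orthography=True))    # sentential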
tagging english text with a probabilistic model in this paper we present some experiments on the use of a probabilistic model to tag english text ie to assign to each word the correct tag in the context of the sentence the main novelty of these experiments is the use of untagged text in the training of the model we have used a simple triclass markov model and are looking for the best way to estimate the parameters of this model depending on the kind and amount of training data provided two approaches in particular are compared and combined using text that has been tagged by hand and computing relative frequency counts using text without tags and training the model as a hidden markov process according to a maximum likelihood principle experiments show that the best training is obtained by using as much tagged text as possible they also show that maximum likelihood training the procedure that is routinely used to estimate hidden markov models parameters from training data will not necessarily improve the tagging accuracy in fact it will generally degrade this accuracy except when only a limited amount of handtagged text is available institut eurecom in this paper we present some experiments on the use of a probabilistic model to tag english text ie to assign to each word the correct tag in the context of the sentencethe main novelty of these experiments is the use of untagged text in the training of the modelwe have used a simple triclass markov model and are looking for the best way to estimate the parameters of this model depending on the kind and amount of training data providedtwo approaches in particular are compared and combined experiments show that the best training is obtained by using as much tagged text as possiblethey also show that maximum likelihood training the procedure that is routinely used to estimate hidden markov models parameters from training data will not necessarily improve the tagging accuracyin fact it will generally degrade this accuracy except when only a limited amount of handtagged text is availablea lot of effort has been devoted in the past to the problem of tagging text ie assigning to each word the correct tag in the context of the sentencetwo main approaches have generally been considered derouault and merialdo 1986 derose 1988 church 1989 beale 1988 marcken 1990 merialdo 1991 cutting et al 1992more recently some work has been proposed using neural networks through these different approaches some common points have emerged these kinds of considerations fit nicely inside a probabilistic formulation of the problem which offers the following advantages in this paper we present a particular probabilistic model the triclass model and results from experiments involving different ways to estimate its parameters with the intention of maximizing the ability of the model to tag text accuratelyin particular we are interested in a way to make the best use of untagged text in the training of the modelwe suppose that the user has defined a set of tags consider a sentence w w1w2 wn and a sequence of tags t tit2 tn of the same lengthwe call the pair an alignmentwe say that word w has been assigned the tag t in this alignmentwe assume that the tags have some linguistic meaning for the user so that among all possible alignments for a sentence there is a single one that is correct from a grammatical point of viewa tagging procedure is a procedure 0 that selects a sequence of tags for each sentence0wt0 there are two measures for the quality of a tagging procedure in practice 
performance at sentence level is generally lower than performance at word level since all the words have to be tagged correctly for the sentence to be tagged correctlythe standard measure used in the literature is performance at word level and this is the one considered herein the probabilistic formulation of the tagging problem we assume that the alignments are generated by a probabilistic model according to a probability distribution p in this case depending on the criterion that we choose for evaluation the optimal tagging procedure is as follows we call this procedure viterbi taggingit is achieved using a dynamic programming scheme where 0 is the tag assigned to word w by the tagging procedure in the context of the sentence w we call this procedure maximum it is interesting to note that the most commonly used method is viterbi tagging although it is not the optimal method for evaluation at word levelthe reasons for this preference are presumably that however in our experiments we will show that viterbi and ml tagging result in very similar performanceof course the real tags have not been generated by a probabilistic model and even if they had been we would not be able to determine this model exactly because of practical limitationstherefore the models that we construct will only be approximations of an ideal model that does not existit so happens that despite these assumptions and approximations these models are still able to perform reasonably wellwe have the mathematical expression the triclass model is based on the following approximations in order to define the model completely we have to specify the values of all h and k probabilitiesif nw is the size of the vocabulary and nt the number of different tags then there are the total number of free parameters is then note that this number grows only linearly with respect to the size of the vocabulary which makes this model attractive for vocabularies of a very large sizethe triclass model by itself allows any word to have any taghowever if we have a dictionary that specifies the list of possible tags for each word we can use this information to constrain the model if t is not a valid tag for the word w then we are sure that there are thus at most as many nonzero values for the k probabilities as there are possible pairs allowed in the dictionaryif we have some tagged text available we can compute the number of times n a given word w appears with the tag t and the number of times n the sequence appears in this textwe can then estimate the probabilities h and k by computing the relative frequencies of the corresponding events on this data these estimates assign a probability of zero to any sequence of tags that did not occur in the training databut such sequences may occur if we consider other textsa probability of zero for a sequence creates problems because any alignment that contains this sequence will get a probability of zerotherefore it may happen that for some sequences of words all alignments get a probability of zero and the model becomes useless for such sentencesto avoid this we interpolate these distributions with uniform distributions ieonsider the interpolated model defined by where number of words that have the tag t the interpolation coefficient a is computed using the deleted interpolation algorithm the value of this coefficient is expected to increase if we increase the size of the training text since the relative frequencies should be more reliablethis interpolation procedure is also called quotsmoothingquot smoothing 
is performed as follows it can be noted that more complicated interpolation schemes are possiblefor example different coefficients can be used depending on the count of with the intuition that relative frequencies can be trusted more when this count is highanother possibilitity is to interpolate also with models of different orders such as hrf or hrf smoothing can also be achieved with procedures other than interpolationone example is the quotbackingoffquot strategy proposed by katz using a triclass model m it is possible to compute the probability of any sequence of words w according to this model where the sum is taken over all possible alignmentsthe maximum likelihood training finds the model m that maximizes the probability of the training text where the product is taken over all the sentences w in the training textthis is the problem of training a hidden markov model a wellknown solution to this problem is the forwardbackward or baumwelch algorithm which iteratively constructs a sequence of models that improve the probability of the training datathe advantage of this approach is that it does not require any tagging of the text but makes the assumption that the correct model is the one in which tags are used to best predict the word sequencethe viterbi algorithm is easily implemented using a dynamic programming scheme the maximum likelihood algorithm appears more complex at first glance because it involves computing the sum of the probabilities of a large number of alignmentshowever in the case of a hidden markov model these computations can be arranged in a way similar to the one used during the fb algorithm so that the overall amount of computation needed becomes linear in the length of the sentence the main objective of this paper is to compare rf and ml trainingthis is done in section 72we also take advantage of the environment that we have set up to perform other experiments described in section 73 that have some theoretical interest but did not bring any improvement in practiceone concerns the difference between viterbi and ml tagging and the other concerns the use of constraints during trainingwe shall begin by describing the textual data that we are using before presenting the different tagging experiments using these various training and tagging methodswe use the quottreebankquot data described in beale it contains 42186 sentences from the associated pressthese sentences have been tagged manually at the unit for computer research on the english language in collaboration with ibm youk and the ibm speech recognition group in yorktown heights in fact these sentences are not only tagged but also parsedhowever we do not use the information contained in the parsein the treebank 159 different tags are usedthese tags were projected on a smaller system of 76 tags designed by evelyne tzoukermann and peter brown the results quoted in this paper all refer to this smaller systemwe built a dictionary that indicates the list of possible tags for each word by taking all the words that occur in this text and for each word all the tags that are assigned to it somewhere in the textin some sense this is an optimal dictionary for this data since a word will not have all its possible tags but only the tags that it actually had within the textwe separated this data into two parts in this experiment we extracted n tagged sentences from the training datawe then computed the relative frequencies on these sentences and built a quotsmoothedquot model using the procedure previously describedthis model was 
then used to tag the 2000 test sentenceswe experimented with different values of n for each of which we indicate the value of the interpolation coefficient and the number and percentage of correctly tagged wordsresults are indicated in table 1as expected as the size of the training increases the interpolation coefficient increases and the quality of the tagging improveswhen n 0 the model is made up of uniform distributionsin this case all alignments for a sentence are equally probable so that the choice of the correct tag is just a choice at randomhowever the percentage of correct tags is relatively high because note that this behavior is obviously very dependent on the system of tags that is usedit can be noted that reasonable results are obtained quite rapidlyusing 2000 tagged sentences the tagging error rate is already less than 5using 10 times as much data provides an improvement of only 15ml training viterbi tagging in ml training we take all the training data available but we only use the word sequences not the associated tags this is possible since the fb algorithm is able to train the model using the word sequence onlyin the first experiment we took the model made up of uniform distributions as the initial onethe only constraints in this model came from the values k that were set to zero when the tag t was not possible for the word w we then ran the fb algorithm and evaluated the quality of the taggingthe results are shown in figure 1this figure shows that ml training both improves the perplexity of the model and reduces the tagging error ratehowever this error rate remains at a relatively high levelhigher than that obtained with a rf training on 100 tagged sentenceshaving shown that ml training is able to improve the uniform model we then wanted to know if it was also able to improve more accurate modelswe therefore took as the initial model each of the models obtained previously by rf training and for each one performed ml training using all of the training word sequencesthe results are shown graphically in figure 2 and numerically in table 2these results show that when we use few tagged data the model obtained by relative frequency is not very good and maximum likelihood training is able to improve ithowever as the amount of tagged data increases the models obtained by relative frequency are more accurate and maximum likelihood training improves on the initial iterations only but after deterioratesif we use more than 5000 tagged sentences even the first iteration of ml training degrades the taggingthese results call for some commentsml training is a theoretically sound procedure and one that is routinely and successfully used in speech recognition to estimate the parameters of hidden markov models that describe the relations between sequences of phonemes and the speech signalalthough ml training is guaranteed to improve perplexity perplexity is not necessarily related to tagging accuracy and it is possible to improve one while degrading the otheralso in the case of tagging ml training from various initial points the relations between words and tags are much more precise than the relations between phonemes and speech signals some characteristics of ml training such as the effect of smoothing probabilities are probably more suited to speech than to taggingfor this experiment we considered the initial model built by rf training over the whole training data and all the successive models created by the iterations of ml trainingfor each of these models we performed viterbi tagging and 
ml tagging on the same test data then evaluated and compared the number of tagging errors produced by these two methodsthe results are shown in table 3the models obtained at different iterations are related so one should not draw strong conclusions about the definite superiority of one tagging procedurehowever the difference in error rate is very small and shows that the choice of the tagging procedure is not as critical as the kind of training materialfollowing a suggestion made by f jelinek we investigated the effect of constraining the ml training by imposing constraints on the probabilitiesthis idea comes from the observation that the amount of training data needed to properly estimate the model increases with the number of free parameters of the modelin the case of little training data adding reasonable constraints on the shape of the models that are looked for reduces the number of free parameters and should improve the quality of the estimateswe tried two different constraints the twconstrained ml training is similar to the standard ml training except that the probabilities p are not changed at the end of an iterationthe results in table 4 show the number of tagging errors when the model is trained with the standard or twconstrained ml trainingthey show that the twconstrained ml training still degrades the rf training but not as quickly as the standard mlwe have not tested what happens when smaller training data is used to build the initial model tconstraint this constraint is more difficult to implement than the previous one because the probabilities p are not the parameters of the model but a combination of these parameterswith the help of r polyak we have designed an iterative procedure that allows the likelihood to be improved while preserving the values of pwe do not have sufficient space to describe this procedure herebecause of its greater computational complexity we have only applied it to a biclass model ie a model where the initial model is estimated by relative frequency on the whole training data and viterbi tagging is usedas in the previous experiment the results in table 5 show the number of tagging errors when the model is trained with the standard or tconstrained ml trainingthey show that the tconstrained ml training still degrades the rf training but not as quickly as the standard mlagain we have not tested what happens when smaller training data is used to build the initial model8conclusion the results presented in this paper show that estimating the parameters of the model by counting relative frequencies over a very large amount of handtagged text lead to the best tagging accuracymaximum likelihood training is guaranteed to improve perplexity but will not necessarily improve tagging accuracyin our experiments ml training degrades the performance unless the initial model is already very badthe preceding results suggest that the optimal strategy to build the best possible model for tagging is the following whichever occurs firsti would like to thank peter brown fred jelinek john lafferty robert mercer salim roukos and other members of the continuous speech recognition group for the fruitful discussions i had with them throughout this worki also want to thank one of the referees for his judicious comments
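To summarize the triclass model and the training and tagging procedures discussed above, the following sketch shows relative-frequency estimation with interpolation smoothing, followed by Viterbi tagging. It is a simplified illustration under stated assumptions: the interpolation coefficients are fixed constants rather than being estimated by deleted interpolation, sentence boundaries are handled with a pseudo-tag, and all function and variable names are ours.

# A minimal sketch of relative-frequency (RF) training with interpolation
# smoothing for a triclass model, and Viterbi tagging with it.  lam_h and
# lam_k stand in for interpolation coefficients and are simply fixed here.

from collections import defaultdict

BOS = "<s>"  # sentence-boundary pseudo-tag

def rf_train(tagged_sentences, tagset, dictionary, lam_h=0.9, lam_k=0.9):
    """Estimate smoothed h(t3|t1,t2) and k(w|t) from hand-tagged sentences."""
    tri, bi, wt, t_count = (defaultdict(int) for _ in range(4))
    for sent in tagged_sentences:                    # sent = [(word, tag), ...]
        tags = [BOS, BOS] + [t for _, t in sent]
        for i in range(2, len(tags)):
            tri[(tags[i-2], tags[i-1], tags[i])] += 1
            bi[(tags[i-2], tags[i-1])] += 1
        for w, t in sent:
            wt[(w, t)] += 1
            t_count[t] += 1

    def h(t1, t2, t3):                               # smoothed tag transition
        rf = tri[(t1, t2, t3)] / bi[(t1, t2)] if bi[(t1, t2)] else 0.0
        return lam_h * rf + (1 - lam_h) / len(tagset)

    def k(w, t):                                     # smoothed word emission
        if t not in dictionary.get(w, ()):           # dictionary constraint
            return 0.0
        rf = wt[(w, t)] / t_count[t] if t_count[t] else 0.0
        n_words_with_t = sum(1 for tags in dictionary.values() if t in tags)
        return lam_k * rf + (1 - lam_k) / max(n_words_with_t, 1)

    return h, k

def viterbi_tag(words, h, k, dictionary):
    """Most probable tag sequence for one sentence under the triclass model."""
    # States are (previous tag, current tag) pairs, so the trigram history
    # fits a first-order dynamic program.
    best = {(BOS, BOS): (1.0, [])}
    for w in words:
        new_best = {}
        for (t1, t2), (p, seq) in best.items():
            for t3 in dictionary.get(w, ()):
                q = p * h(t1, t2, t3) * k(w, t3)
                if q > new_best.get((t2, t3), (0.0, None))[0]:
                    new_best[(t2, t3)] = (q, seq + [t3])
        best = new_best
    return max(best.values())[1] if best else []

if __name__ == "__main__":
    train = [[("the", "DET"), ("dog", "N"), ("barks", "V")],
             [("the", "DET"), ("cat", "N"), ("sleeps", "V")]]
    tagset = {"DET", "N", "V"}
    dictionary = {"the": {"DET"}, "dog": {"N"}, "cat": {"N"},
                  "barks": {"V", "N"}, "sleeps": {"V"}}
    h, k = rf_train(train, tagset, dictionary)
    print(viterbi_tag(["the", "dog", "sleeps"], h, k, dictionary))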
J94-2001
Tagging English text with a probabilistic model. In this paper we present some experiments on the use of a probabilistic model to tag English text, i.e. to assign to each word the correct tag in the context of the sentence. The main novelty of these experiments is the use of untagged text in the training of the model. We have used a simple triclass Markov model and are looking for the best way to estimate the parameters of this model, depending on the kind and amount of training data provided. Two approaches in particular are compared and combined: using text that has been tagged by hand and computing relative frequency counts, and using text without tags and training the model as a hidden Markov process according to a maximum likelihood principle. Experiments show that the best training is obtained by using as much tagged text as possible. They also show that maximum likelihood training, the procedure that is routinely used to estimate hidden Markov models' parameters from training data, will not necessarily improve the tagging accuracy; in fact, it will generally degrade this accuracy, except when only a limited amount of hand-tagged text is available. This work attempted to improve HMM part-of-speech tagging by expectation maximization with unlabeled data, introduced what is still the standard procedure of training a hidden Markov model via expectation maximization, and, in the context of part-of-speech tagging, introduced a method called maximum likelihood tagging.
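For contrast with Viterbi tagging, the sketch below shows maximum likelihood (per-word posterior) tagging computed with the forward-backward recursions. It is a simplified illustration: a bigram tag model is used instead of the paper's triclass model to keep the recursions short, the probability tables h and k are assumed to be supplied by the caller, and no scaling or log-space arithmetic is used, although a real implementation would need it for long sentences.

# A minimal sketch of maximum likelihood (per-word posterior) tagging for a
# bigram HMM.  h maps (previous tag, tag) to a probability and k maps
# (word, tag) to a probability; both are assumed to be given.

def ml_tag(words, tags, h, k, start="<s>"):
    """Return, for each word, the tag with the highest marginal probability
    p(t_i = t | w_1..w_n), computed with the forward-backward recursions."""
    n = len(words)
    # Forward pass: alpha[i][t] = p(w_1..w_i, t_i = t)
    alpha = [dict() for _ in range(n)]
    for t in tags:
        alpha[0][t] = h.get((start, t), 0.0) * k.get((words[0], t), 0.0)
    for i in range(1, n):
        for t in tags:
            alpha[i][t] = k.get((words[i], t), 0.0) * sum(
                alpha[i - 1][s] * h.get((s, t), 0.0) for s in tags)
    # Backward pass: beta[i][t] = p(w_{i+1}..w_n | t_i = t)
    beta = [dict() for _ in range(n)]
    for t in tags:
        beta[n - 1][t] = 1.0
    for i in range(n - 2, -1, -1):
        for t in tags:
            beta[i][t] = sum(h.get((t, s), 0.0) * k.get((words[i + 1], s), 0.0)
                             * beta[i + 1][s] for s in tags)
    # Pick, independently at each position, the tag maximizing alpha * beta.
    return [max(tags, key=lambda t: alpha[i][t] * beta[i][t])
            for i in range(n)]


if __name__ == "__main__":
    tags = ["N", "V"]
    h = {("<s>", "N"): 0.7, ("<s>", "V"): 0.3,
         ("N", "V"): 0.8, ("N", "N"): 0.2,
         ("V", "N"): 0.6, ("V", "V"): 0.4}
    k = {("flies", "N"): 0.1, ("flies", "V"): 0.3, ("time", "N"): 0.2}
    print(ml_tag(["time", "flies"], tags, h, k))      # ['N', 'V']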
regular models of phonological rule systems this paper presents a set of mathematical and computational tools for manipulating and reasoning about regular languages and regular relations and argues that they provide a solid basis for computational phonology it shows in detail how this framework applies to ordered sets of contextsensitive rewriting rules and also to grammars in koskenniemi twolevel formalism this analysis provides a common representation of phonological constraints that supports efficient generation and recognition by a single simple interpreter this paper presents a set of mathematical and computational tools for manipulating and reasoning about regular languages and regular relations and argues that they provide a solid basis for computational phonologyit shows in detail how this framework applies to ordered sets of contextsensitive rewriting rules and also to grammars in koskenniemi twolevel formalismthis analysis provides a common representation of phonological constraints that supports efficient generation and recognition by a single simple interpreterordered sets of contextsensitive rewriting rules have traditionally been used to describe the pronunciation changes that occur when sounds appear in different phonological and morphological contextsintuitively these phenomena ought to be cognitively and computationally simpler than the variations and correspondences that appear in natural language syntax and semantics yet the formal structure of such rules seems to require a complicated interpreter and an extraordinarily large number of processing stepsin this paper we show that any such rule defines a regular relation on strings if its noncontextual part is not allowed to apply to its own output and thus it can be modeled by a symmetric finitestate transducerfurthermore since regular relations are closed under serial composition a finite set of rules applying to each other output in an ordered sequence also defines a regular relationa single finitestate transducer whose behavior simulates the whole set can therefore be constructed by composing the transducers corresponding to the individual rulesthis transducer can be incorporated into efficient computational procedures that are far more economical in both recognition and production than any strategies using ordered rules directlysince orthographic rules have similar formal properties to phonological rules our results generalize to problems of word recognition in written textthe mathematical techniques we develop to analyze rewriting rule systems are not limited just to that particular collection of formal devicesthey can also be applied to other recently proposed phonological or morphological rule systemsfor example we can show that koskenniemi twolevel parallel rule systems also denote regular relationssection 2 below provides an intuitive grounding for the rest of our discussion by illustrating the correspondence between simple rewriting rules and transducerssection 3 summarizes the mathematical tools that we use to analyze both rewriting and twolevel systemssection 4 describes the properties of the rewriting rule formalisms we are concerned with and their mathematical characterization is presented in sections 5 and 6a similar characterization of twolevel rule systems is provided in section 7by way of introduction we consider some of the computational issues presented by simple morphophonemic rewriting rules such as these according to these rules an underspecified abstract nasal phoneme n appearing in the lexical 
forms inpractical and intractable will be realized as the m in impractical and as the n in intractableto ensure that these and only these results are obtained the rules must be treated as obligatory and taken in the order givenas obligatory rules they must be applied to every substring meeting their conditionsotherwise the abstract string inpractical would be realized as in practical and inpractical as well as impractical and the abstract n would not necessarily be removed from intractableordering the rules means that the output of the first is taken as the input to the secondthis prevents inpractical from being converted to in practical by rule 2 without first considering rule 1these obligatory rules always produce exactly one result from a given inputthis is not the case when they are made to operate in the reverse directionfor example if rule 2 is inverted on the string intractable there will be two results intractable and intractablethis is because intractable is derivable by that rule from both of these stringsof course only the segments in intractable will eventually match against the lexicon but in general both the n and n results of this inversion can figure in valid interpretationscompare the words undecipherable and indecipherablethe n in the prefix un unlike the one in in does not derive from the abstract n since it remains unchanged before labials thus the results of inverting this rule must include undecipherable for undecipherable but indecipherable for indecipherable so that each of them can match properly against the lexiconwhile inverting a rule may sometimes produce alternative outputs there are also situations in which no output is producedthis happens when an obligatory rule is inverted on a string that it could not have generatedfor example input cannot be generated by rule 1 because the n precedes a labial and therefore would obligatorily be converted to m there is therefore no output when rule 1 is inverted on inputhowever when rule 2 is inverted on input it does produce input as one of its resultsthe effect of then inverting rule 1 is to remove the ambiguity produced by inverting rule 2 leaving only the unchanged input to be matched against the lexiconmore generally if recognition is carried out by taking the rules of a grammar in reverse order and inverting each of them in turn later rules in the new sequence act as filters on ambiguities produced by earlier onesthe existence of a large class of ambiguities that are introduced at one point in the recognition process and eliminated at another has been a major source of difficulty in efficiently reversing the action of linguistically motivated phonological grammarsin a large grammar the effect of these spurious ambiguities is multiplicative since the information needed to cut off unproductive paths often does not become available until after they have been pursued for some considerable distanceindeed speech understanding systems that use phonological rules do not typically invert them on strings but rather apply them to the lexicon to generate a list of all possible word forms recognition is then accomplished by standard tablelookup procedures usually augmented with special devices to handle phonological changes that operate across word boundariesanother approach to solving this computational problem would be to use the reversed cascade of rules during recognition but to somehow make the filtering information of particular rules available earlier in the processhowever no general and effective techniques have been 
proposed for doing thisthe more radical approach that we explore in this paper is to eliminate the cascade altogether representing the information in the grammar as a whole in a single more unified device namely a finitestate transducerthis device is constructed in two phasesthe first is to create for each rule in the grammar a transducer that exactly models its behaviorthe second is to compose these individual rule transducers into a single machine that models the grammar as a wholejohnson was the first to notice that the noncyclic components of standard phonological formalisms and particularly the formalism of the sound pattern of english were equivalent in power to finitestate devices despite a superficial resemblance to general rewriting systemsphonologists in the spe tradition as well as the structuralists that preceded them had apparently honored an injunction against rules that rewrite their own output but still allowed the output of a rule to serve as context for a reapplication of that same rulejohnson realized that this was the key to limiting the power of systems of phonological ruleshe also realized that basic rewritingrules were subject to many alternative modes of application offering different expressive possibilities to the linguisthe showed that phonological grammars under most reasonable modes of application remain within the finitestate paradigmwe observed independently the basic connections between rewritingrule grammars and finitestate transducers in the late 1970s and reported them at the 1981 meeting of the linguistic society of america the mathematical analysis in terms of regular relations emerged somewhat lateraspects of that analysis and its extension to twolevel systems were presented at conferences by kaplan in courses at the 1987 and 1991 linguistics institutes and at colloquia at stanford university brown university the university of rochester and the university of helsinkiour approach differs from johnson in two important waysfirst we abstract away from the many details of both notation and machine description that are crucial to johnson method of argumentationinstead we rely strongly on closure properties in the underlying algebra of regular relations to establish the major result that phonological rewriting systems denote such sets of stringpairswe then use the correspondence between regular relations and finitestate transducers to develop a constructive relationship between rewriting rules and transducersthis is accomplished by means of a small set of simple operations each of which implements a simple mathematical fact about regular languages regular relations or bothsecond our more abstract perspective provides a general framework within which to treat other phonological formalisms existing or yet to be devisedfor example twolevel morphology which evolved from our early considerations of rewriting rules relies for its analysis and implementation on the same algebraic techniqueswe are also encouraged by initial successes in adapting these techniques to the autosegmental formalism described by kay supposing for the moment that rule 2 is optional figure 1 shows the transition diagram of a finitestate transducer that models ita finitestate transducer has two tapesa transition can be taken if the two symbols separated by the colon in its label are found at the current position on the corresponding tapes and the current position advances across those tape symbolsa pair of tapes is accepted if a sequence of transitions can be taken starting at the startstate 
and at the beginning of the tapes and leading to a final state at the end of both tapesin the machine in figure 1 there is a transition from state 0 to state 0 that translates every phoneme into itself reflecting the fact that any phoneme can remain unchanged by the optional rulethese are shown schematically in the diagramthis machine will accept a pair of tapes just in case they stand in a certain relation they must be identical except for possible replacements of n on the first tape with n on the secondin other words the second tape must be one that could have resulted from applying the optional rule to the string on the first tapebut the rule is in fact obligatory and this means that there must be no occurrences of n on the second tapethis condition is imposed by the transducer in figure 2in this diagram the transition label quototherquot abbreviates the set of labels aabb zz the identity pairs formed from all symbols that belong to the alphabet but are not mentioned explicitly in this particular rulethis diagram shows no transition over the pair nn and the transducer therefore blocks if it sees n on both tapesthis is another abbreviatory convention that is typically used in implementations to reduce transducer storage requirements and we use it here to simplify the state diagrams we drawin formal treatments such as the one we present below the transition function is total and provides for transitions from every state over every pair of symbolsany transition we do not show in these diagrams in fact terminates at a single nonfinal state the quotfailurequot state which we also do not showfigure 3 is the more complicated transducer that models the obligatory behavior of rule 1 this machine blocks in state 1 if it sees the pair nm not followed by one of the labials p b m it blocks in state 2 if it encounters the pair nn followed by a labial on both tapes thus providing for the situation in which the rule is not applied even though its conditions are satisfiedif it does not block and both tapes are eventually exhausted it accepts them just in case it is then in one of the final states 0 or 2 shown as double circlesit rejects the tapes if it ends up in the nonfinal state 1 indicating that the second tape is not a valid translation of the first onewe have described transducers as acceptors of pairs of tapes that stand in a certain relationbut they can also be interpreted asymmetrically as functions either from more abstract to less abstract strings or the other way aroundeither of the tapes can contain an input string in which case the output will be written on the otherin each transition the machine matches the symbol specified for the input tape and writes the one for the outputwhen the first tape contains the input the machine models the generative application of the rule when the second tape contains the input it models the inversion of the rulethus compared with the rewriting rules from which they are derived finitestate transducers have the obvious advantage of formal and computational simplicitywhereas the exact procedure for inverting rules themselves is not obvious it is clearly different from the procedure required for generatingthe corresponding transducers on the other hand have the same straightforward interpretation in both directionswhile finitestate transducers are attractive for their formal simplicity they have a much more important advantage for our purposesa pair of transducers connected through a common tape models the composition of the relations that those transducers 
representthe pair can be regarded as performing a transduction between the outer tapes and it turns out that a single finitestate transducer can be constructed that performs exactly this transduction without incorporating any analog of the intermediate tapein short the relations accepted by finitestate transducers are closed under serial compositionfigure 4 shows the composition of the mmachine in figure 3 and the nmachine in figure 2this transducer models the cascade in which the output of rule 1 is the input to rule 2this machine is constructed so that it encodes all the possible ways in which the mmachine and nmachine could interact through a common tapethe only interesting interactions involve n and these are summarized in the following table input mmachine output input nmachine output n labial follows m tri n nonlabial follows n n an n in the input to the mmachine is converted to m before a labial and this m remains unchanged by the nmachinethe only instances of n that reach the nmachine must therefore be followed by nonlabials and these must be converted to n accordingly after converting n to m the composed machine is in state 1 which it can leave only by a transition over labialsafter converting n to n it enters state 2 from which there is no labial transitionotherwise state 2 is equivalent to the initial statefigure 5 illustrates the behavior of this machine as a generator applied to the abstract string intractablestarting in state 0 the first transition over the quototherquot arc produces i on the output tape and returns to state 0two different transitions are then possible for the n on the input tapethese carry the machine into states 1 and 2 and output the symbols m and n respectivelythe next symbol on the input tape is t since this is not a labial no transition is possible from state 1 and that branch of the process therefore blockson the other branch the t matches the quototherquot transition back to state 0 and the machine stays in state 0 for the remainder of the stringsince state 0 is a final state this is a valid derivation of the string intractablefigure 6 is a similar representation for the generation of impracticalfigures 7 and 8 illustrate this machine operating as a recognizeras we pointed out earlier there are two results when the cascade of rules that this machine represents is inverted on the string intractableas figure 7 shows the n can be mapped into n by the nn transition at state 0 or into n by the transition to state 2the latter transition is acceptable because the following t is not a labial and thus matches against the quototherquot transition to state 0when the following symbol is a labial as in figure 8 the process blocksnotice that the string input that would have been written on the intermediate tape before the machines were composed is blocked after the second symbol by constraints coming from the mmachinerepeated composition reduces the machines corresponding to the rules of a complete phonological grammar to a single transducer that works with only two tapes one containing the abstract phonological string and the other containing its phonetic realizationgeneral methods for constructing transducers such as these rely on fundamental mathematical notions that we develop in the next sectionformal languages are sets of strings mathematical objects constructed from a finite alphabet e by the associative operation of concatenationformal language theory has classified string sets the subsets of e in various ways and has developed correspondences between 
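The cancellation of the intermediate tape described above can be sketched directly as a product construction over transition tables. The encoding below (dictionaries keyed by state, single-symbol labels, no empty-string labels) is an assumption made for brevity; it is not the general construction developed later, which must also account for epsilon transitions.

```python
# A sketch of composing two transducers by cancelling the shared middle
# symbol of their transitions.  Machines are (transitions, start, finals)
# with transitions[state] a set of (in_symbol, out_symbol, next_state);
# epsilon labels are not handled in this simplified encoding.

def compose(t1, t2):
    trans1, start1, finals1 = t1
    trans2, start2, finals2 = t2
    start = (start1, start2)
    transitions, agenda, seen = {}, [start], {start}
    while agenda:
        s1, s2 = state = agenda.pop()
        moves = set()
        for a, b, n1 in trans1.get(s1, ()):
            for b2, c, n2 in trans2.get(s2, ()):
                if b == b2:                      # cancel the intermediate symbol
                    nxt = (n1, n2)
                    moves.add((a, c, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        agenda.append(nxt)
        transitions[state] = moves
    finals = {s for s in seen if s[0] in finals1 and s[1] in finals2}
    return transitions, start, finals

# Two toy machines: the first rewrites a as b, the second rewrites b as c,
# and both copy x unchanged.  Their composition rewrites a as c directly.
T1 = ({0: {("a", "b", 0), ("x", "x", 0)}}, 0, {0})
T2 = ({0: {("b", "c", 0), ("x", "x", 0)}}, 0, {0})
trans, start, finals = compose(T1, T2)
print(trans[start])    # contains ('a', 'c', (0, 0)) and ('x', 'x', (0, 0))
```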
languages grammatical notations for describing their member strings and automata for recognizing thema similar conceptual framework can be established for string relationsthese are the collections of ordered tuples of strings the subsets of e x x ewe begin by defining an nway concatenation operation in terms of the familiar concatenation of simple stringsif x and y yn are ntuples of strings then the concatenation of x and y written x y or simply xy is defined by that is the nway concatenation of two stringtuples is the tuple of strings formed by string concatenation of corresponding elementsthe length of a stringtuple i xi can be defined in terms of the lengths of its component strings this has the expected property that ix yl i i yl even if the elements of x or of y are of different lengthsjust as the empty string c is the identity for simple string concatenation the ntuple all of whose elements are is the identity for nway concatenation and the length of such a tuple is zerowith these definitions in hand it is immediately possible to construct families of string relations that parallel the usual classes of formal languagesrecall for example the usual recursive definition of a regular language over an alphabet e other families of relations can also be defined by analogy to the formal language casefor example a system of contextfree rewriting rules can be used to define a contextfree nrelation simply by introducing ntuples as the terminal symbols of the grammarthe standard contextfree derivation procedure will produce tree structures with ntuple leaves and the relational yield of such a grammar is taken to be the set of nway concatenations of these leavesour analysis of phonological rule systems does not depend on expressive power beyond the capacity of the regular relations however and we therefore confine our attention to the mathematical and computational properties of these more limited systemsthe relations we refer to as quotregularquot to emphasize the connection to formal language theory are often known as quotrational relationsquot in the algebraic literature where they have been extensively studied the descriptive notations and accepting automata for regular languages can also be generalized to the ndimensional casean nway regular expression is simply a regular expression whose terms are ntuples of alphabetic symbols or for ease of writing we separate the elements of an ntuple by colonsthus the expression ab ec describes the tworelation containing the single pair and abc qrs describes the threerelation i n 0the regularexpression notation provides for concatenation union and kleeneclosure of these termsthe accepting automata for regular nrelations are the nway finitestate transducersas illustrated by the twodimensional examples given in section 2 these are an obvious extension of the standard onetape finitestate machinesthe defining properties of the regular languages regular expressions and finitestate machines are the basis for proving the wellknown kleene correspondence theorems showing the equivalence of these three stringset characterizationsthese essential properties carry over in the nway generalizations and therefore the correspondence theorems also generalizein particular simple analogs of the standard inductive proofs show that every nway regular expression describes a regular nrelation every regular nrelation is described by an nway regular expression every ntape finitestate transducer accepts a regular nrelation and every regular nrelation is accepted by an ntape 
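Written out explicitly, the definitions that the paragraph above appeals to are the following; the notation is a reconstruction from the surrounding prose rather than the original typography.

```latex
% n-way concatenation of string tuples and the induced length measure
% (reconstructed; the angle-bracket notation is an assumption).
\[
  \langle x_1,\ldots,x_n\rangle \cdot \langle y_1,\ldots,y_n\rangle
    \;=\; \langle x_1y_1,\;\ldots,\;x_ny_n\rangle
\]
\[
  |\langle x_1,\ldots,x_n\rangle| \;=\; \sum_{i=1}^{n}|x_i| ,
  \qquad\text{so that}\qquad |x\cdot y| \;=\; |x| + |y| .
\]
```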
finitestate transducerthe strength of our analysis method comes from the equivalence of these different characterizationswhile we reason about the regular relations in algebraic and settheoretic terms we conveniently describe the sets under discussion by means of regular expressions and we prove essential properties by constructive operations on the corresponding finitestate transducersin the end of course it is the transducers that satisfy our practical computational goalsa nondeterministic finitestate machine is a quintuple where e is a finite alphabet q is a finite set of states q e q is the initial state and f c q is the set of final statesthe transition function 6 is a total function that maps q x e to 2 the set of all subsets of q and every state s in q is vacuously a member of 6we extend the function 6 to sets of states so that for any p c q and a e e 6 up 6we also define the usual extension of 6 to a transition function 6 on e as follows for all r in q 6 6 and for all you c e and a e e6 6 6 athus the machine accepts a string x just in case 6 n f is nonempty that is if there is a sequence of transitions over x beginning at the initial state and ending at a set of states at least one of which is finalwe know of course that every regular language is also accepted by a deterministic free finitestate machine but assuming vacuous transitions at every state reduces the number of special cases that have to be considered in some of the arguments belowa nondeterministic nway finitestate transducer is defined by a quintuple similar to that of an fsm except for the transition function 6 a total function that maps q x e x x e to 2partly to simplify the mathematical presentation and partly because only the binary relations are needed in the analysis of rewriting rules and koskenniemi twolevel systems from here on we frame the discussion in terms of binary relations and twotape transducershowever the obvious extensions of these properties do hold for the general case and they may be useful in developing a formal understanding of autosegmental phonological and morphological theories the transition function 6 of a transducer also extends to a function 6 that carries a state and a pair of strings onto a set of statestransitions in fsts are labeled with pairs of symbols and we continue to write them with a colon separatorthus youv labels a transition over a you on the first tape and a v on the seconda finitestate transducer t defines the regular relation r the set of pairs such that 6 contains a final statethe pair e plays the same role as a label of transducer transitions that the singleton c plays in onetape machines and the eremoval algorithm for onetape machines can be generalized to show that every regular relation is accepted by an e free transducerhowever it will also be convenient for some arguments below to assume the existence of vacuous ee transitionswe write xry if the pair belongs to the relation r the image of a string x under a relation r which we write xr is the set of strings y such that is in r similarly ry is the set of strings that r carries onto ywe extend this notation to sets of strings in the obvious way xare uxex xrthis relational notation gives us a succinct way of describing the use of a corresponding transducer as either a generator or a recognizerfor example if r is the regular relation recognized by the transducer in figure 4 then rintractable is the set of strings that r maps to intractable namely intractable intractable as illustrated in figure 7similarly intractableir 
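Read asymmetrically, the same machine computes images in either direction. The sketch below hand-codes an approximation of the composed nasal machine discussed above (N again standing for the abstract nasal; the state numbering and the choice of final states are reconstructions from the prose, not the original figure) and enumerates the image of a string read off either tape.

```python
# A sketch of using a two-tape transducer as a generator or recognizer by
# computing the image of one tape under the relation it accepts.

LABIALS = {"p", "b", "m"}
ALPHABET = set("abcdefghijklmnopqrstuvwxyz") | {"N"}
OTHER = ALPHABET - LABIALS - {"N"}

TRANSITIONS = {
    0: {(c, c, 0) for c in ALPHABET - {"N"}} | {("N", "m", 1), ("N", "n", 2)},
    1: {(c, c, 0) for c in LABIALS},                  # a labial must follow N:m
    2: {(c, c, 0) for c in OTHER} | {("N", "m", 1), ("N", "n", 2)},
}
START, FINALS = 0, {0, 2}

def image(word, invert=False):
    """All strings on the other tape that the machine pairs with `word`.
    With invert=False the word is read off the first (lexical) tape,
    with invert=True off the second (surface) tape."""
    results = set()
    stack = [(START, 0, "")]
    while stack:
        state, i, out = stack.pop()
        if i == len(word):
            if state in FINALS:
                results.add(out)
            continue
        for a, b, nxt in TRANSITIONS[state]:
            if invert:
                a, b = b, a
            if word[i] == a:
                stack.append((nxt, i + 1, out + b))
    return results

print(image("iNtractable"))               # {'intractable'}
print(image("iNpractical"))               # {'impractical'}
print(image("intractable", invert=True))  # {'intractable', 'iNtractable'}
```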
is the set of strings intractable that r maps from intractable we rely on the equivalence between regular languages and relations and their corresponding finitestate automata and we frequently do not distinguish between themwhen the correspondence between a language l and its equivalent machine must be made explicit we let m denote a finitestate machine that accepts l similarly we let t denote a transducer that accepts the relation r as provided by the correspondence theoremwe also rely on several of the closure properties of regular languages for regular languages li and l2 l1l2 is the regular language containing all strings xi x2 such that xi e li and x2 e l2we use superscripts for repeated concatenation ln contains the concatenation of n members of l and l contains strings with arbitrary repetitions of strings in l including zerothe operator opt is used for optionality so that opt is l you we write l for the complement of l the regular language containing all strings not in l namely e l finally rev denotes the regular language consisting of the reversal of all the strings in l there are a number of basic connections between regular relations and regular languagesthe strings that can occur in the domain and range of a regular relation are re and range er are the regular languages accepted by the finitestate machines derived from t by changing all transition labels ab to a and b respectively for all a and b in e given a regular language l the identity relation d that carries every member of l into itself is regular it is characterized by the fst obtained from an fsm m by changing all transition labels a to aaclearly for all languages l l dom rangethe inverse r1 of a regular relation r is regular since it is accepted by a transducer formed from t by changing all labels ab to bathe reversal rev consisting of pairs containing the reversal of strings in r pairs is also regular its accepting transducer is derived from t by generalizing the standard onetape fsm construction for regular language reversalgiven a pair of regular languages l1 and l2 whose alphabets can without loss of generality be assumed equal the relation l1 x l2 containing their cartesian product is regularto prove this proposition we let m1 and m2 be fsms accepting l1 and l2 respectively and define the fst where for any s1 eqi s2 e q2 and ab e e this result holds trivially when x and y are both c by the general definition of 6if a and b are in e and you and v are in e then using the definition of 6 and the definition just given for 6 of the cartesian product machine we have thus 6 xy contains a final state if and only if both 51 and 6 contain final states so t accepts exactly the strings in l1 x l20 note that l x l is not the same as id because only the former can map one member of l onto a different oneif l contains the singlecharacter strings a and b then id only contains the pairs and while l x l also contains and a similar construction is used to prove that regular relations are closed under the composition operator discussed in section 2a pair of strings belongs to the relation r1 0 r2 if and only if for some intermediate string z e r1 and e r2if t and t the composition r1 or2 is accepted by the composite fst where a b f for some c e t1 e s and t2 e 6 in essence the 6 for the composite machine is formed by canceling out the intermediate tape symbols from corresponding transitions in the component machinesby an induction on the number of transitions patterned after the one above it follows that for any strings x and y the 
composite transducer enters a final state just in case both component machines do for some intermediate zthis establishes that the composite transducer does represent the composition of the relations r1 and r2 and that the composition of two regular relations is therefore regularcomposition of regular relations like composition of relations in general is associative 0 r3 r1 0 r1 0 r2 0 r3for relations in general we also know that range range r2we can use this fact about the range of a composition to prove that the image of a regular language under a regular relation is a regular language but these other results do not concern us herethat is if l is a regular language and r is an arbitrary regular relation then the languages lr and rl are both regularif l is a regular language we know there exists a regular relation id that takes all and only members of l into themselvessince l range it follows that idor is regular and we have already observed that the range of any regular relation is a regular languageby symmetry of argument we know that rl is also regularjust like the class of regular languages the class of regular relations is by definition closed under the operations of union concatenation and repeated concatenationalso the pumping lemma for regular languages immediately generalizes to regular relations given the definitions of stringtuple length and nway concatenation and the correspondence to finitestate transducersthe regular relations differ from the regular languages however in that they are not closed under intersection and complementationsuppose that r1 is the relation 1 n 0 and r2 is the relation i n 0these relations are regular since they are defined by the regular expressions ab c and e b ac respectivelythe intersection r1 n r2 is i n 0the range of this relation is the contextfree language picquot which we have seen is not possible if the intersection is regularthe class of regular relations is therefore not closed under intersection and it immediately follows that it is also not closed under complementation by de morgan law closure under complementation and union would imply closure under intersectionnonclosure under complementation further implies that some regular relations are accepted by only nondeterministic transducersif for every regular relation there is a deterministic acceptor then the standard technique of interchanging its final and nonfinal states could be used to produce an fst accepting the complement relation which would therefore be regularclosure under intersection and relative difference however are crucial for our treatment of twolevel rule systems in section 7but these properties are required only for the samelength regular relations and it turns out that this subclass is closed in the necessary waysthe samelength relations contain only stringpairs such that the length of x is the same as the length of yit may seem obvious that the relevant closure properties do hold for this subclass but for the sake of completeness we sketch the technical details of the constructions by which they can be establishedwe make use of some auxiliary definitions regarding the pathlanguage of a transducera pathstring for any finitestate transducer t is a sequence of symbolpairs u1 v1 u2 v2 un vn that label the transitions of an accepting path in t the pathlanguage of t notated as paths is simply the set of all pathstrings for t paths is obviously regular since it is accepted by the finitestate machine constructed simply by interpreting the transition labels of t as elements of 
an alphabet of unanalyzable pairsymbolsalso if p is a finitestate machine that accepts a pairsymbol language we define the pathrelation rel to be the relation accepted by the fst constructed from p by reinterpreting every one of its pairsymbol labels as the corresponding symbol pair of a transducer labelit is clear for all fsts t that rel r the relation accepted by t now suppose that r1 and r2 are regular relations accepted by the transducers ti and t2 respectively and note that paths n paths is in fact a regular language of pairsymbols accepted by some fsm p thus rel exists as a regular relationmoreover it is easy to see that rel c ri n r2this is because every stringpair belonging to the pathrelation is accepted by a transducer with a pathstring that belongs to the pathlanguages of both t1 and t2thus that pair also belongs to both r1 and r2the opposite containment does not hold of arbitrary regular relationssuppose a pair belongs to both r1 and r2 but that none of its accepting paths in t1 has the same sequence of transition labels as an accepting path in t2then there is no path in paths n paths corresponding to this pair and it is therefore not contained in relthis situation can arise when the individual transducers have transitions with containing labelsone transducer may then accept a particular string pair through a sequence of transitions that does not literally match the transition sequence taken by the other on that same pair of stringsfor example the first fst might accept the pair by the transition sequence a c because while the other accepts that same pair with the sequence a c b c this stringpair belongs to the intersection of the relations but unless there is some other accepting path common to both machines it will not belong to relindeed when we apply this construction to fsts accepting the relations we used to derive the contextfree language above we find that rel is the empty relation instead of the settheoretic intersection r1 n r2however if r1 and r2 are accepted by transducers none of whose accepting paths have containing labels then a stringpair belonging to both relations will be accepted by identically labeled paths in both transducersthe language paths n paths will contain a pathstring corresponding to that pair that pair will belong to rel and rel will be exactly r1 n r2thus we complete the proof that the samelength relations are closed under intersection by establishing the following proposition r is a samelength regular relation if and only if it is accepted by an free finitestate transducerthe transitions of an free transducer t set the symbols of the stringpairs it accepts in onetoone correspondence so trivially r is samelengththe proof in the other direction is more tedioussuppose r is a samelength regular relation accepted by some transducer t which has transitions of the form youc or v we systematically remove all econtaining transitions in a finite sequence of steps each of which preserves the accepted relationa path from the startstate to a given nonfinal state will contain some number of you e transitions and some number of e v transitions and those two numbers will not necessarily be identicalhowever for all paths to that state the difference between those numbers will be the same since the discrepancy must be reversed by each path that leads from that state to a final statelet us define the imbalance characterizing a state to be the difference in the number of you e and e v transitions on paths leading to that statesince an acyclic path cannot produce 
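To make the path-language argument concrete, the following sketch intersects two epsilon-free transducers by treating each transition label a:b as a single unanalyzable pair symbol and running the ordinary product construction for finite-state machines. The (transitions, start, finals) encoding matches the earlier sketches and is an assumption of this illustration.

```python
# A sketch of intersecting two epsilon-free transducers via their path
# languages: a transition survives only if both machines have an identically
# labelled transition at the corresponding states.

def intersect(t1, t2):
    trans1, start1, finals1 = t1
    trans2, start2, finals2 = t2
    start = (start1, start2)
    transitions, agenda, seen = {}, [start], {start}
    while agenda:
        s1, s2 = state = agenda.pop()
        moves = set()
        for a, b, n1 in trans1.get(s1, ()):
            for a2, b2, n2 in trans2.get(s2, ()):
                if (a, b) == (a2, b2):       # identical pair symbol on both paths
                    nxt = (n1, n2)
                    moves.add((a, b, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        agenda.append(nxt)
        transitions[state] = moves
    finals = {s for s in seen if s[0] in finals1 and s[1] in finals2}
    return transitions, start, finals

# T1 pairs a with x and b with y; T2 pairs a with x and b with z.
# Their intersection keeps only the a:x correspondences.
T1 = ({0: {("a", "x", 0), ("b", "y", 0)}}, 0, {0})
T2 = ({0: {("a", "x", 0), ("b", "z", 0)}}, 0, {0})
print(intersect(T1, T2)[0])   # {(0, 0): {('a', 'x', (0, 0))}}
```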
an imbalance that differs from zero by more than the number of states in the machine the absolute value of the imbalance is bounded by the machine sizeon each iteration our procedure has the effect of removing all states with the maximum imbalancefirst we note that transitions of the form youv always connect a pair of states with the same imbalancesuch transitions can be eliminated in favor of an equivalent sequence of transitions e v and you e through a new state whose imbalance is one less than the imbalance of the original two statesnow suppose that k 0 is the maximum imbalance for the machine and that all you v transitions between states of imbalance k have been eliminatedif q is a kimbalance state it will be entered only by you e transitions from k 1 states and left only by ev transitions also to k 1 statesfor all transitions you e from a state p to q and all transitions v from q to r we construct a new transition youv from p to r then we remove state q from the machine along with all transitions entering or leaving itthese manipulations do not change the accepted relation but do reduce by one the number of kimbalance stateswe repeat this procedure for all k states and then move on to the k 1 states continuing until no states remain with a positive imbalancea symmetric procedure is then used to eliminate all the states whose imbalance is negativein the end t will have been transformed to an free transducer that still accepts r 0 the samelength regular relations are obviously closed under union concatenation composition inverse and reverse in addition to intersection since all of these operations preserve both regularity and string lengthan additional pathlanguage argument shows that they are also closed under relative differencelet t1 and t2 be efree acceptors for r1 and r2 and construct an fsm p that accepts the regular pairsymbol language paths pathsa stringpair belongs to the regular relation rel if and only if it has an accepting path in ti but not in t2thus rel is r1 r2being a subset of r1 it is also samelengthlet us summarize the results to this pointif l1 l2 and l are regular languages and r1 r2 and r are regular relations then we know that the following relations are regular r1 you r2 r1 r2 r r1 r1 0 r2 id l1 x l2 rev we know also that the following languages are regular furthermore if r1 r2 and r are in the samelength subclass then the following also belong to that restricted subclass ri you r2 ri r2 r r1 ri 0 r2 rev ri n r2 r1 r2 id is also samelength for all l intersections and relative differences of arbitrary regular relations are not necessarily regular howeverwe emphasize that all these settheoretic algebraic operations are also constructive and computational in nature fsms or fsts that accept the languages and relations that these operations specify can be constructed directly from machines that accept their operandsour rule translation procedures makes use of regular relations and languages created with five special operatorsthe first operator produces a relation that freely introduces symbols from a designated set s this relation intro is defined by the expression id you re x stif the characters a and b are in e and s is for example then intro contains an infinite set of string pairs including and so onnote that intro1 removes all elements of s from a string if s is disjoint from e the second is the ignore operatorgiven a regular language l and a set of symbols s it produces a regular language notated as ls and read as quotl ignoring squot the strings of ls differ 
from those of l in that occurrences of symbols in s may be freely interspersedthis language is defined by the expression ls range o introit includes only strings that would be in l if some occurrences of symbols in s were ignoredthe third and fourth operators enable us to express ifthen and ifandonlyif conditions on regular languagesthese are the operators ifpthens and ifsthenp suppose li and l2 are regular languages and consider the set of strings ifpthens x i for every partition xi x2 of x if xi e l1 then x2 e 12 a string is in this set if each of its prefixes in li is followed by a suffix in l2this set is also a regular language it excludes exactly those strings that have a prefix in li followed by a suffix not in l2 and can therefore be defined by this operator the regularlanguage analog of the logical equivalence between p q and involves only concatenation and complementation operations under which regular languages are closedwe can also express the symmetric requirement that a prefix be in li if its suffix is in l2 by the expression finally we can combine these two expressions to impose the requirement that a prefix be in li if and only if its suffix is in l2 these five special operators being constructive combinations of more primitive ones can also serve as components of practical computationthe double complementation in the definitions of these conditional operators and also in several other expressions to be introduced later constitutes an idiom for expressing universal quantificationwhile a regular expression a37 expresses the proposition that an instance of 0 occurs between some instance of a and some instance of y the expression ce3y claims that an instance of 3 intervenes between every instance of a and a following instance of yphonological rewriting rules have four partstheir general form is this says that the string 0 is to be replaced by the string tp whenever it is preceded by a and followed by p if either a or p is empty it is omitted and if both are empty the rule is reduced to the contexts or environments a and p are usually allowed to be regular expressions over a basic alphabet of segmentsthis makes it easy to write say a vowelharmony rule that replaces a vowel that is not specified for backness as a back or front vowel according as the vowel in the immediately preceding syllable is back or frontthis is because the kleene closure operator can be used to state that any number of consonants can separate the two vowelsthe rule might be formulated as follows where b is the back counterpart of the vowel v and b is another back vowelthere is less agreement on the restrictions that should apply to and 0 the portions that we refer to as the center of the rulethey are usually simple strings and some theorists would restrict them to single segmentshowever these restrictions are without interesting mathematical consequences and we shall be open to all versions of the theory if we continue to take it that these can also denote arbitrary regular languagesit will be important to provide for multiple applications of a given rule and indeed this will turn out to be the major source of difficulty in reexpressing rewriting rules in terms of regular relations and finitestate transducerswe have already remarked that our methods work only if the part of the string that is actually rewritten by a rule is excluded from further rewriting by that same rulethe following optional rule shows that this restriction is necessary to guarantee regularity aba b if this rule is allowed to rewrite 
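The conditional operators introduced above can be paraphrased directly as quantified conditions over string partitions. The following brute-force sketch implements those conditions on explicit strings, with languages represented as Python membership predicates; it is meant only to make the intended semantics concrete and does not mirror the regular-expression construction itself. The sample languages and function names are assumptions of this illustration.

```python
# Brute-force checks of the conditional operators over all partitions of a string.

def if_p_then_s(x, in_l1, in_l2):
    """Every prefix of x that is in L1 must be followed by a suffix in L2."""
    return all(in_l2(x[i:]) for i in range(len(x) + 1) if in_l1(x[:i]))

def if_s_then_p(x, in_l1, in_l2):
    """Every suffix of x that is in L2 must be preceded by a prefix in L1."""
    return all(in_l1(x[:i]) for i in range(len(x) + 1) if in_l2(x[i:]))

def p_iff_s(x, in_l1, in_l2):
    """A prefix is in L1 exactly when the corresponding suffix is in L2."""
    return if_p_then_s(x, in_l1, in_l2) and if_s_then_p(x, in_l1, in_l2)

# Sample languages: L1 contains only the string "a"; L2 contains the strings
# that begin with "b".
in_l1 = lambda s: s == "a"
in_l2 = lambda s: s.startswith("b")

print(if_p_then_s("abc", in_l1, in_l2))   # True: the prefix "a" is followed by "bc"
print(if_p_then_s("acb", in_l1, in_l2))   # False: "a" is followed by "cb"
print(p_iff_s("abc", in_l1, in_l2))       # True: "a" and "bc" match up
print(p_iff_s("xbc", in_l1, in_l2))       # False: suffix "bc" is in L2 but "x" is not in L1
```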
material that it introduced on a previous application it would map the regular language ab into the contextfree language atb i i where are not in e this means that the replacement operator can be defined solely in terms of these distinct contextmarking brackets without regard to what a and p actually specify and what they might have in common with each other or with 0 and 0in essence we assume that the replacement relation for the above rule applies to the upper strings shown below and that all three string pairs are acceptable because each of the corresponding bb pairs is bracketed by vvvvvvvvvvvvvvvvvvpbb immediately preceding p and bthe rule properly applies to rewrite the n because it is bracketed by on the other hand the is missing and the rule does not apply to the n in the preprocessed version of intractable namely b the set of both markersthen our next approximation to the replacement relation is defined as follows this allows arbitrary strings of matching symbols drawn from eu between rule applications and requires to key off a 00 replacementthe subscript m also indicate that can be ignored in the middle of the replacement since the appearance of left or rightcontext strings is irrelevant in the middle of a given rule applicationfigure 9 shows the general form of the statetransition diagram for a transducer that accepts a replacement relationas before the startstate is labeled 0 and only transitions are shown from which the finalstate is reachablewe must now define relations that guarantee that contextmarkers do in fact appear on the strings that replace applies to and only when sanctioned by instances of a and p we do this in two stagesfirst we use simple relations to construct a prologue operator that freely introduces the context markers in m an output string of prologue is just like the corresponding input except that brackets appear in arbitrary positionsthe relation prologuei removes all brackets that appear on its inputsecond we define more complex identity relations that pair a string with itself if and only if those markers appear in the appropriate contextsthe pifs operator is the key component of these contextidentifying predicatesthe condition we must impose for the left context is that the leftcontext bracket allow for these possibilitiesthis disregards slightly too many brackets however since an instance of a where the leftcontext operator is defined as follows we parameterize this operator for the leftcontext pattern and the actual brackets so that it can be used in other definitions belowthe other complication arises in rules intended to insert or delete material in the string so that either 0 or 1 includes the empty string e consider the lefttoright rule iterated applications of this rule can delete an arbitrary sequence of a converting strings of the form baaaa a into simply bthe single b at the beginning serves as leftcontext for applications of the rule to each of the subsequent athis presents a problem for the constructions we have developed so far the replace relation requires a distinct it accepts strings that have at least one labeled transitions represent the fact that is being ignoredthe machine on the right accepts the language leftcontext it requires includes strings if and only if every substring belonging to p is immediately preceded by a rightcontext bracket alternatively taking advantage of the fact that the reversal of a regular language is also a regular language we can define rightcontext in terms of leftcontext these context identifiers denote 
appropriate stringsets even for rules with unspecified contexts if the vacuous contexts are interpreted as if the empty string had been specifiedthe empty string indicates that adjacent symbols have no influence on the rule applicationif an omitted a is interpreted as 6 for example every leftcontext string will have one and only one leftcontext bracket at its beginning its end and between any two e symbols thus permitting a rule application at every positionwe now have components for freely introducing and removing context brackets for rejecting strings with mislocated brackets and for representing the rewrite action of a rule between appropriate context markersthe regular relation that models the optional application of a rule is formed by composition of these piecesthe order of composition depends on whether the rule is specified as applying iteratively from left to right or from right to leftas noted in section 4 the difference is that for lefttoright rules the leftcontext expression a can match against the output of a previous application of the same rule but the rightcontext expression p must match against the as yet unchanged input stringthese observations are directly modeled by the order in which the various rule components are combinedfor a lefttoright rule the right context is checked on the input side of the replacement while the left context is checked on the output sidethe regular relation and corresponding transducer for a leftboth left and rightcontext brackets are freely introduced on input strings strings in which the rightcontext bracket is mislocated are rejected and the replacement takes place only between the nowconstrained rightcontext brackets and the still free leftcontext markersthis imposes the restriction on leftcontext markers that they at least appear before replacements although they may or may not freely appear elsewherethe leftcontext checker ensures that leftcontext markers do in fact appear only in the proper locations on the outputfinally all brackets are eliminated yielding strings in the output languagethe contextchecking situation is exactly reversed for righttoleft rules the leftcontext matches against the unchanged input string while the rightcontext matches against the outputrighttoleft optional application can therefore be modeled simply by interchanging the contextchecking relations in the cascade above to yield the transducer corresponding to this regular relation somewhat paradoxically models a righttoleft rule application while moving from left to right across its tapessimultaneous optional rule application in which the sites of all potential string modifications are located before any rewriting takes place is modeled by a cascade that identifies both left and right contexts on the input side of the replacement these compositions model the optional application of a rulealthough all potential application sites are located and marked by the context checkers these compositions do not force a cb0 replacement to take place for every instance of cb appearing in the proper contextsto model obligatory rules we require an additional constraint that rejects string pairs containing sites where the conditions of application are met but the replacement is not carried outthat is we must restrict the relation so that disregarding for the moment the effect of overlapping applications every substring of the form a0p in the first element of a pair corresponds to a alpp in the second element of that pairwe can refine this restriction by framing it in terms of our 
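Before turning to the obligatory case, it may help to have a direct string-level reference for what left-to-right application is meant to compute. The sketch below simulates iterative left-to-right application of a rule phi -> psi / lambda _ rho for the simplified case of single-symbol centers and contexts, checking the left context on the output built so far and the right context on the untouched input, and never rescanning rewritten material. The function name and representation are assumptions of this illustration, not the transducer construction itself.

```python
# Iterative left-to-right application of phi -> psi / lambda _ rho, restricted
# to single-symbol phi and single-symbol contexts (None meaning an empty
# context).  Rule 1 from the discussion (N becomes m before a labial) is the
# example.

def apply_ltr(word, phi, psi, left=None, right=None, obligatory=True):
    results = set()

    def scan(i, out):
        if i == len(word):
            results.add(out)
            return
        applicable = (
            word[i] == phi
            and (left is None or (out and out[-1] in left))
            and (right is None or (i + 1 < len(word) and word[i + 1] in right))
        )
        if applicable:
            scan(i + 1, out + psi)             # apply the rule here
            if not obligatory:
                scan(i + 1, out + word[i])     # optionally skip the application
        else:
            scan(i + 1, out + word[i])

    scan(0, "")
    return results

LABIALS = {"p", "b", "m"}
print(apply_ltr("iNpractical", "N", "m", right=LABIALS))                    # {'impractical'}
print(apply_ltr("iNtractable", "N", "m", right=LABIALS))                    # {'iNtractable'}
print(apply_ltr("iNpractical", "N", "m", right=LABIALS, obligatory=False))  # both outputs
```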
contextmarking brackets the replace relation must not contain a pair with the substring in one element corresponding to something distinct from in the otherwe might try to formulate this requirement by taking the complement of a relation that includes the undesired correspondences as suggested by the expression this expression might be taken as the starting point for various augmentations that would correctly account for overlapping applicationshowever pursuing this line of attack will not permit us to establish the fact that obligatory rules also define regular mappingsfirst it involves the complement of a regular relation and we observed above that the complement of a regular relation is not necessarily regularsecond even if the resulting relation itself turned out to be regular the obvious way of entering it into our rule composition is to intersect it with the replacement relation and we also know that intersection of relations leads to possibly nonregular resultsproving that obligatory rules do indeed define regular mappings requires an even more careful analysis of the roles that contextbrackets can play on the various intermediate strings involved in the rule compositiona given leftcontext bracket can serve in the replace relation in one of three waysfirst it can be the start of a rule application provided it appears in front of an appropriate configuration of 0 and rightcontext bracketssecond it can be ignored during the identity portions of the strings the regions between the changes sanctioned by the replacement relationthird it can be ignored because it comes in the middle or center of another rule application that started to the left of the bracket in question and extends further to the rightsuppose we encode these three different roles in three distinct leftbracket symbols were previously used as auxiliary characters appearing in intermediate stringswith a slight abuse of notation we now let them act as cover symbols standing for the sets of left and right brackets 1 respectively and we let m be the combined set a substring on the input side of the replacement is then a missed lefttoright application if it matches the simple pattern thus we can force obligatory application of a lefttoright rule by requiring that the strings on the input side of its replacement contain no such substrings or to put it in formal terms that the input strings belong to the regular language obligatory where obligatory is defined by the following operator by symmetry a missed application of a righttoleft rule matches the pattern and obligatory is the appropriate input filter to disallow all such substringsnote that the obligatory operator involves only regular languages and not relations so that the result is still regular despite the complementation operationwe must now arrange for the different types of brackets to appear on the input to replace only in the appropriate circumstancesas before the context identifiers must ensure that none of the brackets can appear unless preceded by the appropriate context and that every occurrence of a context is marked by a bracket freely chosen from the appropriate set of threethe leftcontext and rightcontext operators given above will have exactly this effect when they are applied with the new meanings given to and m the replace operator must again be modified however because it alone distinguishes the different roles of the context bracketsthe following final definition chooses the correct brackets for all parameters of rule application the behavior of 
obligatory rules is modeled by inserting the appropriate filter in the sequence of compositionslefttoright obligatory rules are modeled by the cascade we remark that even obligatory rules do not necessarily provide a singleton output stringif the language v contains more than one string then outputs will be produced for each of these at each application sitemoreover if 0 contains strings that are suffixes or prefixes of other strings in 0 then alternatives will be produced for each length of matcha particular formalism may specify how such ambiguities are to be resolved and these stipulations would be modeled by additional restrictions in our formulationfor example the requirement that only shortest 0 matches are rewritten could be imposed by ignoring only one of in the mapping part of replace depending on the direction of applicationthere are different formulations for the obligatory application of simultaneous rules also depending on how competition between overlapping application sites is to be resolvedintersecting the two obligatory filters as in the following cascade models the case where the longest substring matching 0 is preferred over shorter overlapping matches the operators can be redefined and combined in different ways to model other regimes for overlap resolutiona rule contains the special boundary marker when the rewriting it describes is conditioned by the beginning or end of the stringthe boundary marker only makes sense when it appears in the context parts of the rule specifically when it occurs at the left end of a leftcontext string or the right end of a rightcontext stringno special treatment for the boundary marker would be required if appeared as the first and last character of every input and output string and nowhere elseif this were the case the compositional cascades above would model exactly the intended interpretation wherein the application of the rule is edgesensitiveordinary input and output strings do not have this characteristic but a simple modification of the prologue relation can simulate this situationwe defined prologue above as introwe now augment that definition we have composed an additional relation that introduces the boundary marker at the beginning and end of the already freely bracketed string and also rejects strings containing the boundary marker somewhere in the middlethe net effect is that strings in the cascade below the prologue are boundarymarked bracketed images of the original input strings and the context identifiers can thus properly detect the edges of those stringsthe inverse prologue at the bottom of the cascade removes the boundary marker along with the other auxiliary symbolsit remains to model the application of a set of rules collected together in a single batchrecall that for each position in the input string each rule in a batch set is considered for application independentlyas we have seen several times before there is a straightforward approach that approximates this behaviorlet r1 right now be the set of regular relations for rules that are to be applied as a batch and construct the relation ukrkbecause of closure under union this relation is regular and includes all pairs of strings that are identical except for substrings that differ according to the rewriting specified by at least one of the rulesbut also as we have seen several times before this relation does not completely simulate the batch application of the rulesin particular it does not allow for overlap between the material that satisfies the application 
requirements of one rule in the set with the elements that sanction a previous application of another ruleas usual we account for this new array of overlapping dependencies by introducing a larger set of special marking symbols and carefully managing their occurrences and interactionsa batch rule is a set of subrules 01 1p1 al a pn together with a specification of the standard parameters of application we use superscripts to distinguish the components of the different subrules to avoid confusion with our other notational conventionsa crucial part of our treatment of an ordinary rule is to introduce special bracket symbols to mark the appearance of its left and right contexts so that its replacements are carried out only in the proper environmentswe do the same thing for each of the subrules of a batch but we use a different set of brackets for each of themthese brackets permit us to code in a single string the context occurrences for all the different subrules with each subrule contexts distinctively markedlet k be the corresponding set of rightcontext brackets and let mk be the set kwe also redefine the generic cover symbols and m to stand for the respective collections of all brackets ukk m note that with this redefinition of m the prologue relation as defined above will now freely introduce all the brackets for all of the subrulesit will also be helpful to notate the set of brackets not containing those for the kth subrule mk m mk o now consider the regular language leftcontextm k this contains strings in which all instances of the kth subrule leftcontext expression are followed by one of the kth leftcontext brackets and those brackets appear only after instances of akthe kth rightcontext brackets are freely distributed as are all brackets for all the other subrulesoccurrences of all other leftcontext brackets are restricted in similarly defined regular languagesputting all these bracketrestrictions together the language nleftcontextmk has each subrule leftcontext duly marked by one of that subrule leftcontext bracketsthis leaves all rightcontext brackets unconstrained they are restricted to their proper positions by the corresponding rightcontext language nrightcontextk these intersection languages which are both regular will take the place of the simple context identifiers when we form the composition cascades to model batchrule applicationthese generalized context identifiers are also appropriate for ordinary rules if we regard each of them as a batch containing only one subrulea replacement operator for batch rules must also be constructedthis must map between input and output strings with contextbrackets properly located ensuring that any of the subrule rewrites are possible at each properly marked position but that the rewrite of the kth subrule occurs only between quotthe complete set where the generic symbol is assigned a corresponding meaningwe incorporate this relation as the rewrite part of a new definition of the replace operator with the generic now representing the sets of all left and right identity brackets this relation allows for any of the appropriate replacements separated by identity substringsit is regular because of the unionclosure property this would not be the case of course if intersection or complementation had been required for its constructiona model of the lefttoright application optional application of a batch rule is obtained by substituting the new more complex definitions in the composition cascade for ordinary rules with these application parameters 
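As a string-level reference for what these batch cascades are intended to compute, the following sketch applies a set of subrules left to right, considering each subrule independently at every position; in the obligatory mode at least one applicable subrule must fire. The two subrules shown correspond to the nasal example used earlier; the list encoding and the single-symbol restriction are assumptions of this illustration, not the bracket-based construction.

```python
# A sketch of left-to-right batch application.  Subrules are limited to
# single-symbol centers and single-symbol right contexts.

def apply_batch_ltr(word, subrules, obligatory=True):
    """subrules: list of (phi, psi, right_context_set_or_None)."""
    results = set()

    def scan(i, out):
        if i == len(word):
            results.add(out)
            return
        applied = False
        for phi, psi, right in subrules:
            if word[i] == phi and (right is None or
                                   (i + 1 < len(word) and word[i + 1] in right)):
                applied = True
                scan(i + 1, out + psi)
        if not applied or not obligatory:
            scan(i + 1, out + word[i])

    scan(0, "")
    return results

LABIALS = {"p", "b", "m"}
NONLABIALS = set("acdefghijklnoqrstuvwxyz")
BATCH = [("N", "m", LABIALS),       # N is realized as m before a labial
         ("N", "n", NONLABIALS)]    # N is realized as n before a nonlabial

print(apply_batch_ltr("iNtractable", BATCH))   # {'intractable'}
print(apply_batch_ltr("iNpractical", BATCH))   # {'impractical'}
```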
optional righttoleft and simultaneous batch rules are modeled by similar substitutions in the corresponding ordinaryrule cascadesobligatory applications are handled by combining instances of the obligatory operator constructed independently for each subruleobligatory excludes all strings in which the kth subrule failed to apply moving from left to right when its conditions of application were satisfiedthe intersection of the obligatory filters for all subrules in the batch ensures that at least one subrule is applied at each position where application is allowedthus the behavior of a lefttoright obligatory batch rule is represented by the composition again similar substitutions in the cascades for ordinary obligatory rules will model the behavior of righttoleft and simultaneous applicationusing only operations that preserve the regularity of string sets and relations we have modeled the properties of rewriting rules whose components are regular languages over an alphabet of unanalyzable symbolswe have thus established that every such rule denotes a regular relationwe now extend our analysis to rules involving regular expressions with feature matrices and finite feature variables as in the turkish vowel harmony rule discussed in section 4 aback consonantal syllabic aback syllabic consonantal consonantal we first translate this compact feature notation well suited for expressing linguistic generalizations into an equivalent but verbose notation that is mathematically more tractablethe first step is to represent explicitly the convention that features not mentioned in the input or output matrices are left unchanged in the segment that the rule applies towe expand the input and output matrices with as many variables and features as necessary so that the value of every output feature is completely specified in the rulethe centerexpanded version of this example is the input and output feature matrices are now fully specified and in the contexts the value of any unmentioned feature can be freely chosena feature matrix in a regular expression is quite simple to interpret when it does not contain any feature variablessuch a matrix merely abbreviates the union of all segment symbols that share the specified features and the matrix can be replaced by that set of unanalyzable symbols without changing the meaning of the rulethus the matrix consonantal can be translated to the regular language p t k b d and treated with standard techniquesof course if the features are incompatible the feature matrix will be replaced by the empty set of segmentsa simple translation is also available for feature variables all of whose occurrences are located in just one part of the rule as in the following fictitious left context ahigh consonantal around if a takes on the value then the first matrix is instantiated to highl and denotes the set of unanalyzable symbols say e 1 that satisfy that descriptionthe last matrix reduces to round and denotes another set of unanalyzable symbols the whole expression is then equivalent to e p t k b d a e on the other hand if a takes on the value then the first matrix is instantiated to high and denotes a different set of symbols say a o and the last one reduces to roundthe whole expression on this instantiation of a is equivalent to a o fp t k b d 1 o you on the conventional interpretation the original expression matches strings that belong to either of these instantiated regular languagesin effect the variable is used to encode a correlation between choices from different sets of 
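The instantiation step can be sketched concretely for a toy segment inventory. Each assignment of a value to the variable turns the feature matrices into plain symbol sets, and the original expression is then interpreted as the union of the resulting variants. The inventory, feature names, and list encoding below are assumptions of this illustration; the centre-expansion and full rule translation are not reproduced.

```python
# Eliminating a finite-valued feature variable by instantiation.

from itertools import product

SEGMENTS = {               # a toy inventory of vowels with binary features
    "i": {"high": "+", "round": "-"},
    "u": {"high": "+", "round": "+"},
    "e": {"high": "-", "round": "-"},
    "o": {"high": "-", "round": "+"},
}

def satisfies(segment, matrix, binding):
    feats = SEGMENTS[segment]
    return all(feats[f] == binding.get(v, v) for f, v in matrix.items())

def instantiate(expression, variables):
    """expression: list of feature matrices; variables: names of the variables.
    Returns one symbol-set expression per assignment of +/- to the variables."""
    variants = []
    for values in product("+-", repeat=len(variables)):
        binding = dict(zip(variables, values))
        variants.append([
            {s for s in SEGMENTS if satisfies(s, matrix, binding)}
            for matrix in expression
        ])
    return variants

# A correlated pair of matrices: [alpha high] ... [alpha round].
expression = [{"high": "alpha"}, {"round": "alpha"}]
for variant in instantiate(expression, ["alpha"]):
    print(variant)
# alpha = + gives the sets {i, u} and {u, o};
# alpha = - gives the sets {e, o} and {i, e}.
```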
unanalyzable symbolswe can formalize this interpretation in the following waysuppose 0 is a regular expression over feature matrices containing a single variable a for a feature whose values are drawn from a finite set v commonly the set let 0a 4 v be the result of substituting v e v for a wherever it occurs in 0 and then replacing each variablefree feature matrix in that result by the set of unanalyzable symbols that satisfy its feature descriptionthen the interpretation of 0 is given by the formula this translation produces a regular expression that properly models the choicecorrelation defined by a in the original expressionrule expressions containing several locally occurring variables can be handled by an obvious generalization of this substitution schemeif al an are the local variables in 6 whose values come from the finite sets v the set of ntuples represents the collection of all possible value instantiations of those variablesif we let 0i be the result of carrying out the substitutions indicated for all variables by some i in i the interpretation of the entire expression is given by the formula indeed the input and output expressions will almost always have variables in common because of the feature variables introduced in the initial centerexpansion stepvariables that appear in more than one rule part clearly cannot be eliminated from each part independently because the correlation between feature instantiations would be losta featurematrix rule is to be interpreted as scanning in the appropriate direction along the input string until a configuration of symbols is encountered that satisfies the application conditions of the rule instantiated to one selection of values for all of its variablesthe segments matching the input are then replaced by the output segments determined by that same selection and scanning resumes until another configuration is located that matches under possibly a different selection of variables valuesthis behavior is modeled as the batchmode application of a set of rules each of which corresponds to one variable instantiation of the original ruleconsider a centerexpanded rule of the general form 0 0a p and let i be the set of possible value instantiations for the featurevariables it containsthen the collection of instantiated rules is simply the components of the rules in this set are regular languages over unanalyzable segment symbols all feature matrices and variables having been resolvedsince each instantiated rule is formed by applying the same substitution to each of the original rule components the crosscomponent correlation of symbol choices is properly representedthe behavior of the original rule is thus modeled by the relation that corresponds to the batch application of rules in this set and we have already shown that such a relation is regular57 summary this completes our examination of individual contextsensitive rewriting ruleswe have modeled the inputoutput behavior of these rules according to a variety of different application parameterswe have expressed the conditions and actions specified by a rule in terms of carefully constructed formal languages and string relationsour constructions make judicious use of distinguished auxiliary symbols so that crucial informational dependencies can be stringencoded in unambiguous wayswe have also shown how these languages and relations can be combined by settheoretic operations to produce a single string relation that simulates the rule overall effectsince our constructions and operations are all 
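For reference, the interpretation formulas appealed to above can be written out as follows; this is a reconstruction from the surrounding prose, and the double-bracket notation for the interpretation is an assumption rather than the original typography.

```latex
% Interpretation of a feature-variable expression as the union of its
% instantiations (reconstructed from the prose above).
\[
  [\![\,\theta\,]\!] \;=\; \bigcup_{v \in V} \theta_{[\alpha \leftarrow v]}
  \qquad\text{(a single variable }\alpha\text{ with values in }V\text{)}
\]
\[
  [\![\,\theta\,]\!] \;=\; \bigcup_{i \in I} \theta_i ,
  \qquad I = V_1 \times \cdots \times V_n
  \qquad\text{(variables }\alpha_1,\ldots,\alpha_n\text{)}
\]
```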
regularitypreserving we have established the following theorem for all the application parameters we have considered every rewriting rule whose components describe regular languages denotes a regular string relationthis theorem has an immediate corollary the inputoutput string pairs of every such rewriting rule are accepted by some finitestate transducerthis theoretical result has important practical consequencesthe mathematical analysis that establishes the theorem and its corollary is constructive in naturenot only do we know that an appropriate relation and its corresponding transducer exist we also know all the operations to perform to construct such a transducer from a particular rulethus given a careful implementation of the calculus of regular languages and regular relations our analysis provides a general method for compiling complicated rule conditions and actions into very simple computational devicesthe individual rules of a grammar are meant to capture independent phonological generalizationsthe grammar formalism also specifies how the effects of the different rules are to be combined together to account for any interactions between the generalizationsthe simplest method of combination for rewriting rule grammars is for the rules to be arranged in an ordered sequence with the interpretation that the first rule applies to the input lexical string the second rule applies to the output of the first rule and so onas we observed earlier the typical practice is to place specialized rules with more elaborate context requirements earlier in the sequence so that they will override more general rules appearing laterthe combined effect of having one rule operate on the output of another can be modeled by composing the string relations corresponding to each ruleif the string relations for two rules are regular we know that their composition is also regularthe following result is then established by induction on the number of rules in the grammar if g is a grammar defined as a finite ordered sequence of rewriting rules each of which denotes a regular relation then the set of inputoutput stringpairs for the grammar as a whole is the regular relation given by r1 o o r this theorem also has an immediate corollary the inputoutput string pairs of every such rewriting grammar are accepted by a single finitestate transduceragain given an implementation of the regular calculus a grammar transducer can be constructed algorithmically from its ruleswe can also show that certain more complex methods of combination also denote regular relationssuppose a grammar is specified as a finite sequence of rules but with a further specification that rules in some subsequences are to be treated as a block of mutually exclusive alternativesthat is only one rule in each such subsequence can be applied in any derivation but the choice of which one varies freely between derivationsthe alternative choices among the rules in a block can be modeled as the union of the regular relations they denote individually and regular relations are closed under this operationthus this kind of grammar also reduces to a finite composition of regular relationsin a more intricate arrangement the grammar might specify a block of alternatives made up of rules that are not adjacent in the ordering sequencefor example suppose the grammar consists of the sequence where r2 and r4 constitute a block of exclusive alternativesthis cannot be handled by simple union of the block rules because that would not incorporate the effect of the 
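The combination regimes just described can be sketched directly if each rule is represented as a function from an input string to the set of its outputs: ordered rules are combined by composition and a block of mutually exclusive alternatives by union. The two concrete rules below restate the nasal example with re.sub and are assumptions of this illustration, not the transducer constructions themselves.

```python
# Composition for ordered rules and union for a block of alternatives, with
# each rule modelled as a function from a string to a set of output strings.

import re

def compose(*rules):
    """Feed every output of one rule to the next rule in the sequence."""
    def composed(word):
        outputs = {word}
        for rule in rules:
            outputs = {y for x in outputs for y in rule(x)}
        return outputs
    return composed

def union(*rules):
    """A block of alternatives: any one of the rules may be the one applied."""
    def united(word):
        return {y for rule in rules for y in rule(word)}
    return united

rule1 = lambda s: {re.sub(r"N(?=[pbm])", "m", s)}   # N -> m before a labial
rule2 = lambda s: {s.replace("N", "n")}             # N -> n elsewhere

grammar = compose(rule1, rule2)
print(grammar("iNpractical"))   # {'impractical'}
print(grammar("iNtractable"))   # {'intractable'}
```

The block-of-alternatives arrangement discussed above corresponds to wrapping the alternative subsequences in union before composing, for example compose(r1, union(compose(r2, r3), compose(r4, r3)), r5) for the five-rule example.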
intervening rule r3however this grammar can be interpreted as abbreviating a choice between two different sequences and and thus denotes the regular relation ri 0 kr2 0 r3 you 0 r5 the union and composition operators can be interleaved in different ways to show that a wide variety of rule combination regimes are encompassed by the regular relationsthere may be grammars specifying even more complex rule interactions and depending on the formal details it may be possible to establish their regularity by other techniques for example by carefully managing a set of distinguished auxiliary symbols that code interrule constraintswe know of course that certain methods for combining regular rules give rise to nonregular mappingsthis is true for example of unrestricted cyclic application of the rules in a finite ordered sequenceaccording to a cyclic grammar specification a given input string is mapped through all the rules in the sequence to produce an output string and that output string then becomes a new input for a reapplication of all the rules and the process can be repeated without boundwe can demonstrate that such a grammar is nonregular by considering again the simple optional rule e aba b we showed before that this rule does not denote a regular relation if it is allowed to rewrite material that was introduced on a previous applicationunder those circumstances it would map the regular language ab into the contextfree language ab n 1 1 as the main connective instead of or a bidirectional rule is merely an abbreviation that can be included in a grammar in place of the two subrules formed by replacing the k for each context pair akpkthese are distinct from all other symbols and since their identity pairs are now feasible pairs they are added to 7these pairs take the place of the actual context relations in the iterative union this eliminates the overlap problemwe then must ensure that these bracket pairs appear only if appropriately followed or preceded by the proper context relationwith m being the set of all bracket pairs and subscripting now indicating that identity pairs of the specified symbols are ignored we define a twolevel leftcontext operator so that leftcontext enforces the requirement that every k pair be preceded by an instance of akthis is simpler than the rewriting leftcontext operator because not every instance of a must be markedonly the ones that precede t and those are picked out independently by the iterative unionthat is why this uses a oneway implication instead of a biconditionalas in the rewriting case the ignoring provides for overlapping instances of athe rightcontext operator can be defined symmetrically using ifpthens or by reversing the leftcontext operator auxiliary marks are freely introduced on the lexical stringthose marks are appropriately constrained so that matching brackets enclose every occurrence of t and each bracket marks an occurrence of the associated context relationthe marks are removed at the endnote that there are only samelength relations in the intermediate expression and that all brackets introduced at the top are removed at the bottomthus the composite relation is regular and also belongs to the samelength subclass so that the result of intersecting it with the samelength regular relations for other rules will be regulara surface coercion rule of the form imposes a requirement on the paired substrings that come between all members of the a and p relationsif the lexical side of such a paired substring belongs to the domain of t then the 
surface side must be such that the intervening pair belongs to t to formalize this interpretation we first describe the set of string pairs that fail to meet the conditionsthe complement of this set is then the appropriate relationthe relationt 71 t is the set of string pairs in 7r that are not in t because either their lexical string is not in the domain of t or 7 associates that lexical string with different surface stringsid o f is the subset of these whose lexical strings are in the domain of r and whose surface strings must therefore be different than t provides forthe unacceptable string pairs thus belong to the samelength relation 7taid a 7rlp7r and its regular complement in the coerce operator coerce r aid o tp7r contains all the string pairs that satisfy the rulefor most surface coercions it is also the case that this contains only the pairs that satisfy the rulebut for one special class of coercions the epenthesis rules this relation includes more string pairs than we desirethese are rules in which the domain of t includes strings consisting entirely of o and the difficulty arises because of the dual nature of twolevel othey behave formally as actual string symbols in samelength relations but they are also intended to act as the empty stringin this way they are similar to the c in the centers of rewriting rules and they must also be modeled by special techniquesthe epenthesis rule 0b cc dd can be used to illustrate the important issuesif this is the only rule in a grammar then clearly that grammar should allow the string pair but disallow the pair in which e appears instead of b between the surface c and d it should also disallow the pair in which c and d are adjacent on both sides and no epenthesis has occurredthis is consistent with the intuition that the 0 in the rule stands for the absence of explicit lexical string material and that therefore the rule must force a surface b when lexical c and d are adjacentin our analysis this interpretation of 0 is expressed by having the intro relation freely introduce o between any other symbols mimicking the fact that c can be regarded as freely appearing everywherethe pair is allowed as the composition of pairs and the first pair belongs to the intro relation and the second is sanctioned by the rulebut because o are introduced freely the intro relation includes the identity pair as wellthe coerce relation as defined above also contains the pair since e is not in 00 o 0 m the grammar as a whole thus allows as an undesired compositionwe can eliminate pairs of this type by formulating a slightly different relation for epenthesis rules such as thesewe must still disallow pairs when o in the domain of 7 are paired with strings not in the rangebut we also want to disallow pairs whose lexical strings do not have the appropriate o to trigger the grammar epenthesis coercionsthis can be accomplished by a modified version of the coerce relation that also excludes realizations of the empty string by something not in t we replace the dom expression in the definition above with the relation dom the twolevel literature is silent about whether or not an epenthesis rule should also reject strings with certain other insertion patternson one view the rule only restricts the insertion of singleton strings and thus pairs such as and would be included in the relationthis view is modeled by using the dom expressionon another view the rule requires that lexically adjacent c and d must be separated by exactly one b on the surface so that and would be excluded 
in addition to and we can model this second interpretation by using 0 instead of domthe relation then restricts the surface realization of any number of introduced oit is not clear which of these interpretations leads to a more convenient formalism but each of them can be modeled with regular deviceskarttunen and beesley discuss a somewhat different peculiarity that shows up in the analysis of epenthesis rules where one context is omitted the rule requires that a b corresponding to nothing in the lexical string must appear in the surface string after every cc pairif we use either the dom introduced or because those o correspond to unacceptable surface materialthese two prescriptions can be brought together into the single formula 7ratp7r for all onecontext rules since whichever context is missing is treated as the identity pair we can bring out the similarity between this formula and the original coerce relation by observing that this one is equivalent to 71aid o t1p7r because id o 7t and t are the same relationwe now give a general statement of the coerce relation that models surface coercions whether they are epenthetic or nonepenthetic and neither a nor p contains e x 7r if t has only epenthetic pairs and one of a or p does contain this definition assumes that t is homogeneous in that either all its stringpairs are epenthetic or none of them are but we must do further analysis to guarantee that this is the casein the formalism we are considering t is permitted to be an arbitrary samelength relation not just the single unitlength pair that twolevel systems typically provide forif t contains more than one stringpair the single rule is interpreted as imposing the constraints that would be imposed by a conjunction of rules formed by substituting for t each of its member stringpairs in turnwithout further specification and even if 7 contains infinitely many pairs this is the interpretation modeled by the coerce relation provided that t is homogeneousto deal with heterogeneous t relations we separate the epenthetic and nonepenthetic pairs into two distinct and homogeneous subrelationswe partition an arbitrary t into the subrelations r and t defined as we then recast a rule of the form t 4 a p as the conjunction of the two rules these rules taken together represent the desired interpretation of the original and each of them is properly modeled by exactly one variant of the coerce relationwe have now dealt with the major complexities that surface coercion rules presentthe compound forms of these rules are quite easy to modela rule of the form is interpreted as coercing to the surface side of 7 if any of the context conditions are metauxiliary symbols are not needed to model this interpretation since there is no iteration to introduce overlap difficultiesthe relation for this rule is given simply by the intersection of the individual relations ncoercek we conclude our discussion of twolevel rules with a brief mention of surface prohibitionsrecall that a prohibition ruleindicates that a paired substring must not belong to r if it comes between instances of a and p and its lexical side is in the domain of t we can construct a standard surface coercion rule that has exactly this interpretation by using the complement of 7 restricted to t domain id 0 a p as desired the left side is the relation that maps each string in the domain of t to all strings other than those to which t maps itsurface prohibitions are thus reduced to ordinary surface coercionsthe relation for a grammar of rules is formed just 
as for a grammar of parallel automatathe intersection of the relations for all the individual rules is constructed as a samelength inner relationthis is then composed with the 0 introduction and removal relations to form the outer lexicaltosurface maprulebased twolevel grammars thus denote regular relations just as the original transducerbased grammars dosome grammars may make use of boundarycontext rules in which case a special symbol can appear in contexts to mark the beginning and end of the stringsthese can be modeled with exactly the same technique we outlined for rewriting rules we compose the additional relation ed at the beginning of the fourlevel cascade and compose its inverse at the endas we mentioned before the twolevel grammars with boundarycontext rules are the ones that ritchie showed were complete for the regular relationsin reasoning about these systems it is important to keep clearly in mind the distinction between the outer and inner relationsritchie for example also proved that the quotlanguagesquot generated by twolevel grammars with regular contexts are closed under intersection but this result does not hold if a grammar language is taken to be its outer relationsuppose that g1 has the set a b 0 c as its feasible pairs and the vacuous ab as its only rule and that g2 has the pairs a c 0b and rule ac the domain of both outer relations is aa string aquot is mapped by g1 into strings containing n b with c freely intermixed and by g2 into strings containing n c with b freely intermixedthe range of the intersection of the outer relations for g1 and g2 thus contains strings with the same number of b and c but occurring in any orderthis set is not regular since intersecting it with the regular language because produces the contextfree language bncquotthe intersection of the two outer relations is therefore also not regular and so cannot be the outer relation of any regular twolevel grammarwe have shown how our regular analysis techniques can be applied to twolevel systems as well as rewriting grammars and that grammars in both frameworks denote only regular relationsthese results open up many new ways of partitioning the account of linguistic phenomena in order to achieve descriptions that are intuitively more satisfying but without introducing new formal power or computational machinerykarttunen kaplan and zaenen for example argued that certain french morphological patterns can be better described as the composition of two separate twolevel grammars rather than as a single oneas another option an entire twolevel grammar can be embedded in place of a single rule in an ordered rewriting systemas long as care is taken to avoid inappropriate complementations and intersections all such arrangements will denote regular relations and can be implemented by a uniform finitestate transducer mechanismour aim in this paper has been to provide the core of a mathematical framework for phonologywe used systems of rewriting rules particularly as formulated in spe to give concreteness to our work and to the paperhowever we continually sought solutions in terms of algebraic abstractions of sufficiently high level to free them from any necessary attachment to that or any other specific theoryif our approach proves useful it will only be because it is broad enough to encompass new theories and new variations on old onesif we have chosen our abstractions well our techniques will extend smoothly and incrementally to new formal systemsour discussion of twolevel rule systems illustrates how we 
expect such extensions to unfoldthese techniques may even extend to phonological systems that make use of matched pairs of bracketsclearly contextfree mechanisms are sufficient to enforce dependencies between corresponding brackets but further research may show that accurate phonological description does not exploit the power needed to maintain the balance between particular pairs and thus that only regular devices are required for the analysis and interpretation of such systemsan important goal for us was to establish a solid basis for computation in the domain of phonological and orthographic systemswith that in mind we developed a wellengineered computer implementation of the calculus of regular languages and relations and this has made possible the construction of practical language processing systemsthe common data structures that our programs manipulate are clearly states transitions labels and label pairsthe building blocks of finite automata and transducersbut many of our initial mistakes and failures arose from attempting also to think in terms of these objectsthe automata required to implement even the simplest examples are large and involve considerable subtlety for their constructionto view them from the perspective of states and transitions is much like predicting weather patterns by studying the movements of atoms and molecules or inverting a matrix with a turing machinethe only hope of success in this domain lies in developing an appropriate set of highlevel algebraic operators for reasoning about languages and relations and for justifying a corresponding set of operators and automata for computationfrom a practical point of view the result of the work reported here has been a set of powerful and sometimes quite complex tools for compiling phonological grammars in a variety of formalisms into a single representation namely a finitestate transducerthis representation has a number of remarkable advantages the program required to interpret this representation is simple almost to the point of triviality no matter how intricate the original grammars might have been that same program can be used to generate surface or textual forms from underlying lexical representations or to analyze text into a lexical string the only difference is in which of the two symbols on a transition is regarded as the input and which the output the interpreter is constant even under radical changes in the theory and the formalism that informed the compiler the compiler consists almost entirely of an implementation of the basic calculusgiven the operators and data types that this makes available only a very few lines of code make up the compiler for a particular theoryreflecting on the way the relation for a rewriting rule is constructed from simpler relations and on how these are composed to create a single relation for a complete grammar we come naturally to a consideration of how that relation should comport with the other parts of a larger languageprocessing systemwe can show for example that the result of combining together a list of items that have exceptional phonological behavior with a grammarderived relation for general patterns is still a regular relation with an associated transducerif e is a relation for a finite list of exceptional inputoutput pairs and p is the general phonological relation then the combination is given by e you id p this relation is regular because e is regular it suppresses the general mapping provided by p for the exceptional items allowing outputs for them to come from 
e onlyas another example the finite list of formatives in a lexicon l can be combined with a regular phonology by means of the composition id o p this relation enshrines not only the phonological regularities of the language but its lexical inventory as well and its corresponding transducer would perform phonological recognition and lexical lookup in a single sequence of transitionsthis is the sort of arrangement that karttunen et al discussfinally we know that many language classes are closed under finitestate transductions or composition with regular relationsthe images of contextfree languages for example are contextfreeit might therefore prove advantageous to seek ways of composing phonology and syntax to produce a new system with the same formal properties as syntax alonewe are particularly indebted to danny bobrow for helpful discussions in the early stages of the research on rewriting systemsour understanding and analysis of twolevel systems is based on very productive discussions with lauri karttunen and kimmo koskenniemiwe would like to thank john maxwell mary dalrymple andy daniels chris manning and especially kenneth beesley for detailed comments on earlier versions of this paperfinally we are also indebted to the anonymous referees for identifying a number of technical and rhetorical weaknesseswe of course are responsible for any remaining errors
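The exception-handling construction mentioned just above, in which an exception relation E suppresses the general phonological relation P for the lexical items it covers and the lexicon is folded in by composing Id(L) with P, can be made concrete with a small sketch. In the Python fragment below, finite sets of (lexical, surface) string pairs stand in for regular relations, and the toy pluralization map, the "ox+z" exception, and the helper names are all invented for illustration; a real system would carry out the same operations with the finite-state calculus the paper describes.

```python
# A minimal sketch, not the implementation described above: finite sets of
# (lexical, surface) string pairs stand in for regular relations so that the
# operators used in the closing discussion -- union, composition, and the
# "exceptions override the general map" combination -- can be run directly.

def compose(r, s):
    """Relation composition: all (a, c) with (a, b) in r and (b, c) in s."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def identity(language):
    """Id(L): the identity relation over a set of strings."""
    return {(w, w) for w in language}

def domain(r):
    return {lex for (lex, _) in r}

def with_exceptions(exceptions, general):
    """Mirrors E union (Id(Dom(E) complement) o P) on these finite sets:
    the general relation is suppressed for lexical forms that the exception
    list already covers."""
    blocked = domain(exceptions)
    return exceptions | {(a, b) for (a, b) in general if a not in blocked}

# Invented toy data: a "general" pluralization map and one exceptional item.
general_phonology = {("cat+z", "cats"), ("dog+z", "dogz"), ("ox+z", "oxz")}
exceptions = {("ox+z", "oxen")}

combined = with_exceptions(exceptions, general_phonology)
lexicon = identity({"cat+z", "ox+z"})          # Id(L) o P: lookup plus phonology
print(sorted(compose(lexicon, combined)))      # [('cat+z', 'cats'), ('ox+z', 'oxen')]
```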
J94-3001
Regular Models of Phonological Rule Systems. This paper presents a set of mathematical and computational tools for manipulating and reasoning about regular languages and regular relations, and argues that they provide a solid basis for computational phonology. It shows in detail how this framework applies to ordered sets of context-sensitive rewriting rules and also to grammars in Koskenniemi's two-level formalism. This analysis provides a common representation of phonological constraints that supports efficient generation and recognition by a single, simple interpreter. We provide an algorithm for compilation into transducers. We describe a general method for representing a replacement procedure as a finite-state transduction.
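The claim summarized above, that compilation yields a single transducer whose interpreter is almost trivial and serves equally for generation and recognition, can be illustrated with a hedged sketch. The fragment below is not the paper's compiler or interpreter: it hand-builds a tiny machine for an invented "N surfaces as m before p, otherwise as n" rule and shows that one traversal produces surface forms from lexical strings, or lexical strings from surface forms, depending only on which side of each transition label is read as input.

```python
# A minimal sketch, not the system described above: a transducer is reduced to
# states, final states, and transitions labeled with (lexical, surface) symbol
# pairs.  One traversal serves for both generation and analysis depending on
# which side of each pair is treated as input.  The toy machine is hand-built
# for illustration and has no epsilon transitions.

def apply_fst(transitions, finals, start, side, string):
    """Return every string on the other tape when `string` is read on `side`
    ('lex' or 'surf').  transitions: {state: [(lex, surf, next_state), ...]}."""
    results = []

    def step(state, i, output):
        if i == len(string):
            if state in finals:
                results.append("".join(output))
            return
        for lex, surf, nxt in transitions.get(state, []):
            inp, out = (lex, surf) if side == "lex" else (surf, lex)
            if inp == string[i]:
                step(nxt, i + 1, output + [out])

    step(start, 0, [])
    return results

T = {
    0: [("a", "a", 0), ("p", "p", 0), ("N", "m", 1), ("N", "n", 2)],
    1: [("p", "p", 0)],   # committed to m: the next symbol must be p
    2: [("a", "a", 0)],   # committed to n: the next symbol must not be p
}
print(apply_fst(T, {0}, 0, "lex", "aNpa"))   # generation: ['ampa']
print(apply_fst(T, {0}, 0, "surf", "ampa"))  # analysis:   ['aNpa']
```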
a syntactic analysis method of long japanese sentences based on the detection of conjunctive structures this paper presents a syntactic analysis method that first detects conjunctive structures in a sentence by checking parallelism of two series of words and then analyzes the dependency structure of the sentence with the help of the information about the conjunctive structures analysis of long sentences is one of the most difficult problems in natural language processing the main reason for this difficulty is the structural ambiguity that is common for conjunctive structures that appear in long sentences human beings can recognize conjunctive structures because of a certain but sometimes subtle similarity that exists between conjuncts therefore we have developed an algorithm for calculating a similarity measure between two arbitrary series of words from the left and the right of a conjunction and selecting the two most similar series of words that can reasonably be considered as composing a conjunctive structure this is realized using a dynamic programming technique a long sentence can be reduced into a shorter form by recognizing conjunctive structures consequently the total dependency structure of a sentence can be obtained by relatively simple headdependent rules a serious problem concerning conjunctive structures besides the ambiguity of their scopes is the ellipsis of some of their components through our dependency analysis process we can find the ellipses and recover the omitted components we report the results of analyzing 150 japanese sentences to illustrate the effectiveness of this method this paper presents a syntactic analysis method that first detects conjunctive structures in a sentence by checking parallelism of two series of words and then analyzes the dependency structure of the sentence with the help of the information about the conjunctive structuresanalysis of long sentences is one of the most difficult problems in natural language processingthe main reason for this difficulty is the structural ambiguity that is common for conjunctive structures that appear in long sentenceshuman beings can recognize conjunctive structures because of a certain but sometimes subtle similarity that exists between conjunctstherefore we have developed an algorithm for calculating a similarity measure between two arbitrary series of words from the left and the right of a conjunction and selecting the two most similar series of words that can reasonably be considered as composing a conjunctive structurethis is realized using a dynamic programming techniquea long sentence can be reduced into a shorter form by recognizing conjunctive structuresconsequently the total dependency structure of a sentence can be obtained by relatively simple headdependent rulesa serious problem concerning conjunctive structures besides the ambiguity of their scopes is the ellipsis of some of their componentsthrough our dependency analysis process we can find the ellipses and recover the omitted componentswe report the results of analyzing 150 japanese sentences to illustrate the effectiveness of this methodmachine translation systems are gradually being accepted by a wider range of people and accordingly the improvement of machine translation systems is becoming an urgent requirement by manufacturersthere are many difficult problems that cannot be solved by the current efforts of many researchersanalysis of long japanese sentences is one of themit is difficult to get a proper analysis of a sentence whose length is 
more than 50 japanese characters and almost all the current analysis methods fail for sentences composed of more than 80 charactersby analysis failure we mean the following some researchers have attributed the difficulties to the numerous possibilities of headdependent relations between phrases in long sentencesbut no deeper consideration has ever been given to the reasons for the analysis failurea long sentence particularly in japanese very often contains conjunctive structuresthese may be either conjunctive noun phrases or conjunctive predicative clausesamong the latter those made by the renyoh forms of predicates are called renyoh chuushiho of table 1a renyoh chuushiho appears in an embedded sentence to modify nouns and is also used to connect two or more sentencesthis form is used frequently in japanese and is a major because of structural ambiguitymany major sentential components are omitted in the posterior part of renyoh chuushi expressions thus complicating the analysisfor the successful analysis of long sentences these conjunctive phrases and clauses including renyoh chuushiho must be recognized correctlynevertheless most work in this area has concerned the problem of creating candidate conjunctive structures or explaining correct conjunctive structures and not the method for selecting correct structures among many candidatesa method proposed by some researchers for selecting the correct structure is in outline that the two most similar components to the left side and to the right side of a conjunction are detected as two conjoined heads in a conjunctive structurefor example in quotjohn enjoyed the book and liked the playquot we call the verbs quotenjoyedquot and quotlikedquot conjoined heads quotenjoyedquot is the prehead and quotlikedquot the postheadwe also call quotenjoyed the bookquot preconjunct and quotliked the playquot postconjunctin japanese the word preceding a conjunction is the prehead and the posthead that is most similar to the prehead is searched for in english conversely the phrase following the conjunction is the posthead and the prehead is searched for in the same way however two conjoined heads are sometimes far apart in a long sentence making this simple method clearly inadequatehuman beings can recognize conjunctive structures because of a certain but sometimes subtle similarity that exists between conjunctsnot only the conjoined heads but also other components in conjuncts have some similarity and furthermore the pre and postconjuncts have a structural parallelisma computational method needs to recognize this subtle similarity in order to detect the correct conjunctive structuresin this investigation we have developed an algorithm for calculating a similarity measure between two arbitrary series of words from the left and the right of a conjunction and selecting the two most similar series of words that can reasonably be considered as composing a conjunctive structure this procedure is realized using a dynamic programming techniquein our syntactic analysis method the first step is the detection of conjunctive structures by the abovementioned algorithmsince two or more conjunctive structures sometimes exist in a sentence with very complex interrelations the second step is to adjust tangled relations that may exist between two or more conjunctive structures in the sentencein this step conjunctive structures with incorrect overlapping relations if they exist are found and retrials of detecting their scopes are donethe third step of our syntactic analysis is a 
very common operationjapanese sentences can best be explained by kakariuke which is essentially a dependency structuretherefore our third step after identifying all the conjunctive structures is to perform dependency analyses for each phraseclause of the conjunctive structures and the dependency analysis for the whole sentence after all the conjunctive structures have been reduced into single nodesthe dependency analysis of japanese is rather simplea component depends on a component to its right and the suffix of a component indicates what kind of element it can depend onmore than one headdependent relation may exist between components but by introducing some heuristics we can easily get a unique dependency analysis result that is correct for a high percentage of casesa serious problem regarding conjunctive structures in addition to the ambiguity of their scopes is the ellipses in some of their componentsthrough the dependency analysis process outlined we are able to find the ellipses occurring in the conjunctive structures and supplement them with the omitted componentsin japanese bunsetsu is the smallest meaningful sequence consisting of an independent word and accompanying words a bunsetsu whose iw is a verb or an adjective or whose aw is a copula functions as a predicate and thus is called a predicative bunsetsu a bunsetsu whose iw is a noun is called a nominal bunsetsu conjunctive structures that appear in japanese are classified into three types the first type is the conjunctive noun phrasewe can find these phrases by the words listed in table 1aeach conjunctive noun can have adjectival modifiers or clausal modifiers the second type is the conjunctive predicative clause in which two or more predicates in a sentence form a coordinationwe can find these clauses by the renyoh forms of predicates or by the predicates accompanying one of the words in table 1b the third type is a cs consisting of parts of conjunctive predicative clauseswe call this type an incomplete conjunctive structurewe can find these structures by the a noun directly followed by a comma indicates a conjunctive noun phrase or an incomplete conjunctive structure correspondence of casemarking postpositions however sometimes the last bunsetsu of the preconjunct has no casemarking postposition just followed by one of the words listed in table 1cin such cases we cannot distinguish this type of cs from conjunctive noun phrases by seeing the last bunsetsu of the preconjuncthowever this does not matter as our method handles the three types of css in almost the same way in the stage of detecting their scopes and it exactly distinguishes incomplete conjunctive structures in the stage of dependency analysisfor all of these types it is relatively easy to detect the presence of a cs by looking for a distinctive key bunsetsu that accompanies a word indicating a cs listed in table 1 or has the renyoh forms a kb lies last in the preconjunct and is a preheadhowever it is difficult to determine which bunsetsu sequences on both sides of the kb constitute pre and postconjunctsthat is it is not easy to determine which bunsetsu to the left of a kb is the leftmost bunsetsu of the preconjunct and which bunsetsu to the right of a kb is the rightmost bunsetsu of the postconjunct the bunsetsus between these two extreme bunsetsus constitute the scope of the csin detecting a cs it is most important to find the posthead among many candidates in a sentence eg in a conjunctive noun phrase all nbs after a kb are candidates however our method searches 
not only for the most plausible eb but also for the most plausible scope of the cswe detect the scope of css by using a wide range of information before and after a kban input sentence is first divided into bunsetsus by conventional morphological analysisthen we calculate similarities in all pairs of bunsetsus in the sentenceafter that we calculate the similarities between two series of bunsetsus on the left and right of the kb by combining the similarity scores for pairs of bunsetsusthen as a final result we choose the two most similar series of bunsetsus that can reasonably be considered as composing a cswe will explain this process in detail in the following sectionsin detecting css it is necessary to take many factors into consideration and it is important to give the proper weight to each factorthe scoring system described hereafter was first hypothesized and then manually adjusted through experiments on 30 training sentences containing cssthese parameters would not be the best and statistical investigations of large corpora would be preferablehowever these parameters are good enough to get reasonably good analysis results as shown in the experiments section and to show the appropriateness of our methodfirst we calculate similarities for all pairs of bunsetsus in the sentencean appropriate similarity value between two bunsetsus is given by the following process hyou the bgh has a six layer abstraction hierarchy and more than 60000 words are assigned to the leaves of itif the most specific common layer between two iws is the kth layer and if k is greater than 2 add x 2 pointsif either or both iws are not contained in the bgh no addition is madematching of the generic two layers is ignored to prevent too vague matching in a broader sensethe maximum sum of similarity values that can be added by step 3 and this step is 10 points5if some of the aws match add the number of matching aws x 3 pointsfor example the similarity value between quotteiseishiquot and quotkenshutsusuruquot is calculated as 2 2 3 7 pointsthe similarity value between quotteisuijungengoquot and quotkousuijungengot0quot is 2 8 10 pointssince the bgh does not contain technical terms similarity points cannot be given to them by the bghhowever technical terms are often compound words and those having similar meanings often contain the same wordsfor such technical terms some similarity points can be given according to the degree of partial character matching by step 3 as for the latter exampleour method detects the scope of a cs by finding the two series of bunsetsus from before and after the kb that have the greatest similarityto measure the similarity score between two series of bunsetsus we have developed a method using a triangular matrix a as shown in figure 2 a quot which is outside of the cs and owing to the bonus points for the iw quotkotoquot in the next right bunsetsu of the csthe maximum path specifying a conjunctive structurekorerano 0 0 2 0 0 0 0 0 0 0 0 0 0 aimaiseiwo 0 0 2 5 0 2 0 5 0 2 2 2 kaishousurutameniwa 0 0 0 8 0 2 0 5 0 0 2 sono 0 0 0 0 0 0 0 0 0 0 subblbno 2 0 2 0 2 7 2 kanouseiwo 2 2 2 ahyoukashi 0 4 640 0 2 saitekito 0 2 0 2 2 2 kb omowareru 0 2 0 0 0 kaiwo 0 2 2 2 doushutsusuru 0 o 0 kotomo 2 2 hitotsuno 2 houhoudearu in order to solve these ambiguities one way is to evaluate all the possibility and to derive the answer which is thought to be optimuman example of detecting conjunctive structures hi the sentence illustrated in figure 8 the conjunctive noun phrase in which three nouns are conjoined is 
detected correctly consecutive overlapping css express a cs consisting of more than two conjuncts and will thus be merged into one cs in this example the conjunctive predicative clause that contains the conjunctive noun phrase is also detected correctly zokuseinikansuru 2 0 231jouhout0shite 2 k 6 csaihenseishl 0 0 56 kb quot sakuinno 2 0 katachide 0 kirokushiteoku concretely document information is reorganized as the secondary information concerning an attribute such as a title an author a theme and is recorded in the form of an indexan example of detecting conjunctive structuresin a long japanese sentence two or more css often exist overlapping with each otherin such cases we have to adjust their relations in a sentence after their scopes have been detectedthis adjustment is done by checking relations in all pairs of css and merging all the relationsthrough this adjustment process css consisting of three or more conjuncts are detectedfurthermore css with incorrect relations if they exist are found and retrials of detecting their scopes are doneas a result of this adjustment process we get a reduced sentence formthe details of these processes will be given in the following sectionthe scope of a cs is represented by a threetuple position of sb position of kb position of eblet us suppose that two css exist in a sentence the prior one x has a scope represented by xl x2 x3 and the posterior one y has a scope represented by yl y2 y3 when two css are detected by the previously described dynamic programming method as overlapping each other in this case yl are the root nodes of the dependency trees for conjunctsnext the pre and postconjuncts hyoudai saihenseishi sakuinno kirokushiteoku are analyzed and transformed into dependency trees and another cs node is created finally the whole sentence is analyzed and its dependency tree is obtainedour method of detecting a cs cannot find where the preconjunct begins with complete certaintyfor this reason it is necessary to check whether some modifiers to the left of the detected sb can be included in the cs in the stage of dependency analysisthis leftside extension is performed only on css containing pbsthis is because modifiers to the left of a cs containing no pb rarely depend on the preconjunct alone usually they depend on the entire cs or on a bunsetsu after the cswhen a cs contains pbs the analysis of its preconjunct does not stop at the detected sb but continues to the bunsetsus to the left of the sb as follows if the bunsetsu depends on a certain bunsetsu apart from the kb in the preconjunct the bunsetsu is regarded as a part of the cs and the extension operation is continued otherwise the extension operation is stoppedthe kb is excluded from the candidates for a head because the headdependent relation to the kb is handled as the relation to the cs node in the next level analysisa modifier ellipsisin the sentence in figure 7 the bunsetsu quotsonoquot which can depend on quotkanouseiwoquot is regarded as contained in the cs but the bunsetsu quotkaishousurutameniwaquot which accompanies quotwaquot and a comma is not contained in the cs and the extension of the cs thus ends herethrough this extension of the cs the issue of omitted modifiers in a cs can be addressedwhen the same modifiers exist in both conjuncts the modifiers in its postconjunct are often omitted among these omitted modifiers the ones that depend on the eb do not have to be recovered because a remaining modifier that depends on the kb is treated as depending on the cs node which means 
that the sadao kurohashi and makoto nagao syntactic analysis method mochiron 0 0 0 0 0 0 0 mondaino 2 0 2 0 0 2 daibubunwa 0 2 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 2 5a 2 0 5 2 0 2 0 2 0 2 0 2 0 0 5 2 2a0 2 2 0 2 0 2 0 2 0 aru00800000008a000000000 genshouwo 0 0 2 5 00 22 2 0 2a2 02 0 20 2 0 shiraberunoni 0 0 0 0 6 0 0 0 0 0 0a2 0 2 0 2 0 2 donna 0 0 0 0 0 0 0 8 0 0 oa 0 0 0 0 0 0 argorithmga 2 0 0 5 2 7 0 2 2 0 2a 0 2 0 2 0 hitsuyoukawo 0 0 2 2 2 0 2 2 0 2 oa 2 0 2 0 seikakuni 0 0 0 0 0 0 0 0 0 0 oa 0 0 0 misadameru 0 0 0 0 0 0 2 0 2 0 2a 0 2 a kotodearuga 0 0 0 0 0 2 0 2 0 2 oa 2a toillittia110 0 5 2 0 2 0 2 0 2 0 an example of analyzing a long sentence into a dependency structure remaining modifier also depends on the eb the problem is to recover the omitted modifiers that depend on a bunsetsu in the postconjunct except the ebthe key point is that y and y in figure 14b have a great similarity because they contain not only similar bunsetsus kb and eb but also very similar bunsetsus that originally governed the same modifier xtherefore we can detect the possibility of modifier ellipsis by checking the similarity score of the cs obtained when detecting its scopewhen the extension operation is performed on the preconjunct of a cs that is a strong cs we recover the omitted modifiers by interpreting a bunsetsu that depends on a bunsetsu in its preconjunct as also depending on the bunsetsu in its postconjunct corresponding to b a cs that satisfies the following two conditions is called a strong cs for example in the sentence in figure 15 the detected cs tasukeni areba samatageni aru satisfies the above two conditionsthus by checking the relation between the cs and the outside modifier phrase quotsono kaihatsunoquot the phrase is considered to depend on both of the bunsetsus quottasukeniquot and quotsamatageniquot in the same way quotcomputerno architecturegaquot is again thought to depend on both the bunsetsu quotnaruquot in the preconjunct and the bunsetsu quotnaruquot in the postconjunctthe dependency tree of this sentence that is supplemented correctly with the omitted modifiers is shown in figure 15another type of ellipsis in css that is a serious problem is the omission of predicates in incomplete conjunctive structuresthis type of ellipsis can be found by examining the failures of dependency analysisthe failure of dependency analysis here means that a head bunsetsu cannot be found for a certain bunsetsu in a certain range of analysiswhen two predicates in a conjunctive predicative clause are the same the first predicate is sometimes omitted and the remaining part constitutes the incomplete conjunctive structure in these structures neither conjunct can be parsed into a dependency tree because there is no predicate in it that should become the root node of a dependency treefor this reason by checking dependency analysis failures we find incomplete conjunctive structures and start the process of supplementing the css with omitted predicatesthe conditions for incomplete conjunctive structures are the following the key point is that it is important for successful analysis of css containing predicate ellipses to detect the correct scope of the incomplete conjunctive structuresin most cases their scopes can be detected correctly from a significant similarity between the a predicate ellipsis pre and postconjuncts that contain the case components of the same predicatethat is the detection of a cs based on the similarity measure smoothly leads to the omitted predicate being recovereda method 
that merely searches for the eb as the most similar bunsetsu for the kb might detect an incorrect scope and in this case the predicate ellipsis cannot be detected as shown in figure 16dwhen a cs is regarded as an incomplete conjunctive structure each series of bunsetsus to the left of an fb is analyzed into a dependency tree and its root node is connected to a cs node in addition to the kb and the eb when the head of the cs node is found in the next level analysis the head is considered to be the omitted predicate and the dependency tree is transformed by supplementing it with this predicate in the preconjunct as shown in figure 16fwhen the postposition of an example of analyzing a long sentence into a dependency structure the kb is also omitted the kb is supplemented with the postposition of the ebfor example in the sentence in figure 17 the cs denryugenni pnptransistor switchingni npntransistorw0 is recognized as an incomplete conjunctive structure since the head of the bunsetsu quotdenryugenniquot in the preconjunct and the bunsetsu quotswitchingniquot in the postconjunct are not found and both of them have the same postposition quotniquot as a result fb quotdenryugenniquot and fb quotswitchingniquot are connected to the cs node in addition to the kb and ebin the analysis of the parent cs it is made clear that this cs node depends on bunsetsu quotshiyoushiquot and the dependency tree is transformed by supplementing it with the omitted predicate and the omitted postposition as shown in figure 17 on the other hand if the dependency analysis of a cs fails and the conditions for incomplete conjunctive structures are not satisfied we postulate that the detected scope of a cs is incorrect and start the detection of a new cs for the kbto find a new cs whose pre and postconjuncts can be analyzed successfully the positions of the sb and eb are restricted as follows sb we examine headdependent relations in a series of bunsetsus from the first bunsetsu in a sentence to the kbif there exists a bunsetsu in that range whose head is not found the analysis must fail for a cs whose preconjunct contains this bunsetsutherefore the sb is restricted to be to the right of this bunsetsueb we examine headdependent relations in all series of bunsetsus that can be a postconjunctif the analysis of a certain series of bunsetsus fails the last bunsetsu of this series cannot become an eb of a new csafter reanalysis of the cs the analysis returns to the reduction of a sentence by checking the relations between all pairs of cssan example of redetecting a cs is shown in figure 18we report the results of analyzing 150 test sentences which are different from the 30 training sentences used in the parameter adjustment to illustrate the effectiveness of our methodtest sentences are longer and more complex than sentences in common usage and consist of 50 sentences composed of 30 to 50 characters 50 sentences of 50 to 80 characters and 50 sentences of over 80 characters8 all the example sentences shown in this paper belong to these test sentenceswe evaluated the results of analyzing 150 japanese sentencesfirst as shown in table 4 we classified all the bunsetsus in the 150 sentences into five types kbs of conjunctive noun phrases kbs of conjunctive predicative clauses kbs of incomplete conjunctive structures bunsetsus that depend on nbs and bunsetsus that depend on pbsthen we manually checked these kbs to see whether their corresponding ebs were analyzed correctly for other bunsetsus we manually checked whether their heads 
were analyzed correctlytable 4 shows a high success ratio for the detection of css and a very high success ratio of the dependency analysis on bunsetsu levelthese results suggest that the simple heuristic rules for headdependent relations are good enough to analyze each phraseclause of the css internally and the sentence in which css are merged into nodes respectivelysecond as shown in the upper part of table 5 we classified the 150 sentences by their length and according to whether they contain css or notwe manually checked whether css in each sentence were detected correctly if they exit and whether their dependency structures were analyzed correctlythe table shows that css are generally well recognized but the total success ratio of getting proper dependency structures is 65 to determine how well a conventional method works on such long sentences we parsed the same test sentences by another method simulating a conventional onethis method uses a simple rule instead of our dynamic programming method that a kb depends on the most similar cb it parses a sentence determining the head bunsetsu from right to left for each bunsetsu in the sentence with this simple rule for css heuristic rules for headdependent relations and the nocross conditionthe result of this method clearly shows the superiority of our method over the conventional methodthird we report the results of the redetection of css and the recovery of omitted components the redetection of css was activated only for incorrect css so we can conclude that the conditions for performing redetection are reasonableout of 215 css 180 were obtained correctly by the first cs detection five css were redetected because of incorrect relation to other css and all of them were analyzed correctlyeight css were redetected because of the failure in obtaining a dependency structure and five out of them were recognized correctlyfinally 190 css out of 215 were obtained correctly eleven out of 215 detected css satisfied the conditions for a strong csone strong cs was an incorrectly detected cs and this problem is mentioned in the following sectionfor two of the ten correctly detected strong css the omitted components that depend on one of the bunsetsus a the number of sentences that were classified into this category b the number of sentences in which all the css were detected correctly c the number of sentences whose whole dependency structures were analyzed correctly in the postconjunct other than the eb were recovered correctlythere was no modifier ellipsis of this type that could not be found by our method in the test sentencesother strong css had omitted modifiers depending on the eb or had no omitted modifiers there were two incomplete conjunctive structures in the test sentencesboth of them were found by our method and the omitted predicates concerning them were recovered correctly e we analyzed sentences of considerable length consisting of many bunsetsus there are many candidate heads for each bunsetsu in such a sentence making the possibility for incorrect headdependent relations in the dependency structure of a sentence significantconsidering these conditions and comparing results using our method with those using the conventional method the total success ratio for determining correct dependency structures for a complete sentence 65 can be considered to be fairly goodalthough onethird of the dependency structures after this analysis process included some errors their major structures that is their conjunctive structures and basic dependency 
structures were detected correctly in most casesthis can be seen from the high scores in table 4it is possible to classify some of the causes of incorrect analyses arising from our methodtable 6 gives some examples of errors in recognizing csshere the underlined bunsetsus are kbsthe incorrectly calculated scope of a cs is enclosed by square brackets and the correct scope is enclosed by curly brackets our assumption that both conjuncts contain about the same number of bunsetsus is useful in detecting most csseven if the number of bunsetsus of two conjuncts is somewhat different a correct cs can be obtained with the help of the penalty points which reduces the possibility that a cs contains high sl bunsetsus and with the extension of the preconjunct and so onhowever it is difficult to recognize a cs that is extremely unbalancedin sentence in table 6 the kb quottsukattequot in the beginning part of the sentence should correspond to the last cb quotseisakushiteiruquot corresponds to the following we have shown that a variety of conjunctive structures in japanese sentences can be detected using a certain similarity measure and that information about conjunctive structures enables the syntactic analysis to be more robust and successful in handling long and complex sentencesthere are still some expressions that cannot be recognized by the proposed method and one might hasten to rely on semantic information in the hope of getting proper analyses for these remaining casessemantic information however is not as reliable as syntactic information and we have to make further efforts to find some syntactic rather than semantic relations even in these difficult casesphrase structure grammar or other existing grammar formalisms may not be applicable in detecting the subtle syntactic relations among several words in a sentencewe have to find new methods to detect themto make further progress in this field we feel it is necessary to be able to take into consideration more possible interactions among a wider range of components of long sentences
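The heart of the method evaluated above is the similarity score between bunsetsu pairs and the dynamic-programming search for the two most similar series of bunsetsus around a key bunsetsu. The sketch below is a simplified stand-in rather than the authors' procedure: the dictionary keys iw, aw, and pos, the point values, and the romanized toy sentence are invented; thesaurus-based points and the penalty terms are omitted; and a plain gap-penalty alignment replaces the triangular-matrix path search.

```python
# A minimal sketch, assuming invented data structures: each bunsetsu is a dict
# with an independent word ("iw"), its attached words ("aw"), and a part of
# speech ("pos").  The scoring only loosely follows the point scheme above
# (character-set overlap stands in for partial matching of independent words),
# and a plain gap-penalty alignment replaces the triangular-matrix path search.

def similarity(b1, b2):
    score = 0
    if b1["pos"] == b2["pos"]:
        score += 2                                   # matching parts of speech
    common = len(set(b1["iw"]) & set(b2["iw"]))
    score += min(2 * common, 8)                      # crude partial word match, capped
    score += 3 * len(set(b1["aw"]) & set(b2["aw"]))  # matching attached words
    return score

def conjunct_scope(bunsetsu, k, gap=2):
    """For the key bunsetsu at index k, return (score, start, end) so that
    bunsetsu[start:k+1] and bunsetsu[k+1:end+1] are the best-matching
    pre-/post-conjunct candidates under the alignment score."""
    left, right = bunsetsu[:k + 1], bunsetsu[k + 1:]
    m = len(right)
    best = (float("-inf"), k, k)
    for start in range(len(left)):                   # leftmost bunsetsu of the pre-conjunct
        seg = left[start:]
        prev = [-gap * j for j in range(m + 1)]      # DP row: align seg with right[:j]
        for i in range(1, len(seg) + 1):
            cur = [prev[0] - gap]
            for j in range(1, m + 1):
                cur.append(max(prev[j - 1] + similarity(seg[i - 1], right[j - 1]),
                               prev[j] - gap,        # skip a bunsetsu in seg
                               cur[j - 1] - gap))    # skip a bunsetsu in right
            prev = cur
        for j in range(1, m + 1):                    # right[:j] ends at the candidate EB
            if prev[j] > best[0]:
                best = (prev[j], start, k + j)
    return best

# Romanized toy sentence "hon-wo yomi, tegami-wo kaita"; the KB is "yomi".
toy = [
    {"iw": "hon",    "aw": ["wo"], "pos": "noun"},
    {"iw": "yomi",   "aw": [],     "pos": "verb"},
    {"iw": "tegami", "aw": ["wo"], "pos": "noun"},
    {"iw": "kaita",  "aw": [],     "pos": "verb"},
]
print(conjunct_scope(toy, k=1))   # (9, 0, 3): "hon-wo yomi" paired with "tegami-wo kaita"
```

On this toy input the best-scoring scope pairs the whole pre-conjunct "hon-wo yomi" with the whole post-conjunct "tegami-wo kaita", illustrating how similar attached words and parts of speech pull the alignment toward a parallel conjunctive structure.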
J94-4001
A Syntactic Analysis Method of Long Japanese Sentences Based on the Detection of Conjunctive Structures. This paper presents a syntactic analysis method that first detects conjunctive structures in a sentence by checking parallelism of two series of words, and then analyzes the dependency structure of the sentence with the help of the information about the conjunctive structures. Analysis of long sentences is one of the most difficult problems in natural language processing. The main reason for this difficulty is the structural ambiguity that is common for conjunctive structures that appear in long sentences. Human beings can recognize conjunctive structures because of a certain, but sometimes subtle, similarity that exists between conjuncts. Therefore we have developed an algorithm for calculating a similarity measure between two arbitrary series of words from the left and the right of a conjunction, and selecting the two most similar series of words that can reasonably be considered as composing a conjunctive structure. This is realized using a dynamic programming technique. A long sentence can be reduced into a shorter form by recognizing conjunctive structures. Consequently, the total dependency structure of a sentence can be obtained by relatively simple head-dependent rules. A serious problem concerning conjunctive structures, besides the ambiguity of their scopes, is the ellipsis of some of their components. Through our dependency analysis process we can find the ellipses and recover the omitted components. We report the results of analyzing 150 Japanese sentences to illustrate the effectiveness of this method. We propose a method to detect conjunctive structures by calculating similarity scores between two sequences of bunsetsus. We propose a similarity-based method to resolve both of the two tasks for Japanese. We propose a Japanese parsing method that included coordinate structure detection.
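Once conjunctive structures have been merged into single nodes, the remaining dependency analysis is, as the summary notes, handled by relatively simple head-dependent rules: every unit depends on a unit to its right, its suffix restricts the kind of head it may take, and dependencies may not cross. The fragment below sketches only that final attachment pass; the category labels, the attaches_to sets, and the toy reduced sentence are invented and are much cruder than the heuristics actually used.

```python
# A minimal sketch, assuming invented categories: after every conjunctive
# structure has been reduced to a single node, each remaining unit is attached
# to the nearest acceptable unit to its right, subject to the no-cross
# condition, with the sentence-final unit as the root.

def dependency_heads(units):
    """units: list of dicts with 'cat' and 'attaches_to' (allowed head categories).
    Returns head[i] = index of unit i's head, or None for the root."""
    n = len(units)
    head = [None] * n

    def crosses(i, j):
        # Existing links (k, head[k]) all have i < k; a new link (i, j)
        # crosses one of them exactly when i < k < j < head[k].
        return any(head[k] is not None and k < j < head[k] for k in range(i + 1, n))

    for i in range(n - 2, -1, -1):       # right to left; the last unit stays the root
        for j in range(i + 1, n):
            if units[j]["cat"] in units[i]["attaches_to"] and not crosses(i, j):
                head[i] = j
                break
        if head[i] is None:
            head[i] = n - 1              # fall back to the sentence-final predicate
    return head

# Toy reduced sentence: "sono" + [CS node for "hon to zasshi"] + "yonda".
toy = [
    {"cat": "ADN", "attaches_to": {"N", "CS"}},   # adnominal modifier
    {"cat": "CS",  "attaches_to": {"V"}},         # merged conjunctive noun phrase
    {"cat": "V",   "attaches_to": set()},         # sentence-final predicate
]
print(dependency_heads(toy))   # [1, 2, None]
```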
an algorithm for pronominal anaphora resolution this paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors the algorithm applies to the syntactic representations generated by mccord slot grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional state like the parser the algorithm is implemented in prolog the authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrences the algorithm successfully identifies the antecedent of the pronoun for 86 of these pronoun occurrences the relative contributions of the algorithm components to its overall success rate in this blind test are examined experiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and realworld relations to the algorithm decision procedure interestingly this enhancement only marginally improves the algorithm performance the algorithm is compared with other approaches to anaphora resolution that have been proposed in the literature in particular the search procedure of hobbs algorithm was implemented in the slot grammar framework and applied to the sentences in the blind test set the authors algorithm achieves a higher rate of success than hobbs algorithm the relation of the algorithm to the centering approach is discussed as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidates this paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors the algorithm applies to the syntactic representations generated by mccord slot grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional statelike the parser the algorithm is implemented in prologthe authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrencesthe algorithm successfully identifies the antecedent of the pronoun for 86 of these pronoun occurrencesthe relative contributions of the algorithm components to its overall success rate in this blind test are examinedexperiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and realworld relations to the algorithm decision procedureinterestingly this enhancement only marginally improves the algorithm performance the algorithm is compared with other approaches to anaphora resolution that have been proposed in the literaturein particular the search procedure of hobbs algorithm was implemented in the slot grammar framework and applied to the sentences in the blind test setthe authors algorithm achieves a higher rate of success than hobbs algorithmthe relation of the algorithm to the centering approach is discussed as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidateswe present an algorithm for identifying both intrasentential and intersentential antecedents of pronouns in textwe refer to this algorithm as rap rap applies to the syntactic structures of mccord slot grammar parser and like the parser it is implemented in prologit relies on measures of salience derived from syntactic structure and a simple dynamic model of attentional state to select the 
antecedent noun phrase of a pronoun from a list of candidatesit does not employ semantic conditions or realworld knowledge in evaluating candidate antecedents nor does it model intentional or global discourse structure in section 2 we present rap and discuss its main propertieswe provide examples of its output for different sorts of cases in section 3most of these examples are taken from the computer manual texts on which we trained the algorithmwe give the results of a blind test in section 4 as well as an analysis of the relative contributions of the algorithm components to the overall success ratein section 5 we discuss a procedure developed by dagan for using statistically measured lexical preference patterns to reevaluate rap salience rankings of antecedent candidateswe present the results of a comparative blind test of rap and this procedurefinally in section 6 we compare rap to several other approaches to anaphora resolution that have been proposed in the computational literaturerap contains the following main components1 this hierarchy is more or less identical to the np accessibility hierarchy proposed by keenan and comrie johnson uses a similar grammatical role hierarchy to specify a set of constraints on syntactic relations including reflexive bindinglappin employs it as a salience hierarchy to state a noncoreference constraint for pronounsguenthner and lehmann use a similar salience ranking of grammatical roles to formulate rules of anaphora resolutioncentering approaches to anaphora resolution use similar hierarchies as well rap has been implemented for both esg and gsg we will limit ourselves here to a discussion of the english versionthe differences between the two versions are at present minimal primarily owing to the fact that we have devoted most of our attention to analysis of englishas with slot grammar systems in general an architecture was adopted that quotfactors outquot languagespecific elements of the algorithmwe have integrated rap into mccord logicbased machine translation system when the algorithm identifies the antecedent of a pronoun in the source language the agreement features of the head of the np corresponding to the antecedent in the target language are used to generate the pronoun in the target languagethus for example neuter third person pronouns in english are mapped into pronouns with the correct gender feature in german in which inanimate nouns are marked for genderrap operates primarily on a clausal representation of the slot grammar analysis of the current sentence in a text the clausal representation consists of a set of prolog unit clauses that provide information on the headargument and headadjunct relations of the phrase structure that the slot grammar assigns to a sentence clausal representations of the previous four sentences in the text are retained in the prolog workspacethe discourse representation used by our algorithm consists of these clausal representations together with additional unit clauses declaring discourse referents evoked by nps in the text and specifying anaphoric links among discourse referents2 all information pertaining to a discourse referent or its evoking np is accessed via an identifier a prolog term containing two integersthe first integer identifies the sentence in which the evoking np occurs with the sentences in a text being numbered consecutivelythe second integer indicates the position of the np head word in the sentence211 the syntactic filter on pronounnp coreferencethe filter consists of six conditions for 
nppronoun noncoreference within a sentenceto state these conditions we use the following terminologythe agreement features of an np are its number person and gender featureswe will say that a phrase p is in the argument domain of a phrase n iff p and n are both arguments of the same headwe will say that p is in the adjunct domain of n iff n is an argument of a head h p is the object of a preposition prep and prep is an adjunct of h p is in the np domain of n iff n is the determiner of a noun q and p is an argument of q or p is the object of a preposition prep and prep is an adjunct of qa phrase p is contained in a phrase q iff p is either an argument or an adjunct of q ie p is immediately contained in q or p is immediately contained in some phrase r and r is contained in qa pronoun p is noncoreferential with a noun phrase n if any of the following conditions hold 2 the number of sentences whose syntactic representations are retained is a parametrically specified value of the algorithmour decision to set this value at four is motivated by our experience with the technical texts we have been working with212 test for pleonastic pronounsthe tests are partly syntactic and partly lexicala class of modal adjectives is specifiedit includes the following items necessary possible certain likely important good useful advisable convenient sufficient economical easy desirable difficult legal a class of cognitive verbs with the following elements is also specified recommend think believe know anticipate assume expect it appearing in the constructions of figure 2 is considered pleonastic syntactic variants of these constructions are recognized as wellto our knowledge no other computational treatment of pronominal anaphora resolution has addressed the problem of pleonastic pronounsit could be argued that recognizing pleonastic uses of pronouns is a task for levels of syntacticsemantic analysis that precede anaphora resolutionwith the help of semantic classes defined in the lexicon it should be possible to include exhaustive tests for these constructions in it is modaladj that s it is modaladj to vp it is cogved that s it seemsappearsmeansfollows s np makesfinds it modaladj to vp it is time to vp it is thanks to np that s analysis grammars3 following formulation of the binding algorithm is defined by the following hierarchy of argument slots here subj is the surface subject slot agent is the deep subject slot of a verb heading a passive vp obj is the direct object slot lob is the indirect object slot and pobj is the object of a pp complement of a verb as in put np on npwe assume the definitions of argument domain adjunct domain and np domain given abovea noun phrase n is a possible antecedent binder for a lexical anaphor a iff n and a do not have incompatible agreement features and one of the following five conditions holds214 salience weightingsalience weighting is accomplished using salience factorsa given salience factor is associated with one or more discourse referentsthese discourse referents are said to be in the factor scopea weight is associated with each factor reflecting its relative contribution to the total salience of individual discourse referentsinitial weights are degraded in the course of processingthe use of salience factors in our algorithm is based on alshawi context mechanismother than sentence recency the factors used in rap differ from alshawi and are more specific to the task of pronominal anaphora resolutionalshawi framework is designed to deal with a broad class of language 
interpretation problems including reference resolution word sense disambiguation and the interpretation of implicit relationswhile alshawi does propose emphasis factors for memory entities that are quotreferents for noun phrases playing syntactic roles regarded as foregrounding the referentquot only topics of sentences in the passive voice and the agents of certain be clauses receive such emphasis in his systemour emphasis salience factors realize a much more detailed measure of structural saliencedegradation of salience factors occurs as the first step in processing a new sentence in the textall salience factors that have been assigned prior to the appearance of this sentence have their weights degraded by a factor of twowhen the weight of a given salience factor reaches zero the factor is removeda sentence recency salience factor is created for the current sentenceits scope is all discourse referents introduced by the current sentencethe discourse referents evoked by the current sentence are tested to see whether other salience factors should applyif at least one discourse referent satisfies the conditions for a given factor type a new salience factor of that type is created with the appropriate discourse referents in its scopein addition to sentence recency the algorithm employs the following salience factors existential emphasis predicate nominal in an existential construction as in there are only a few restrictions on lql query construction for wordsmithhead noun emphasis any np not contained in another np using the slot grammar notion of quotcontainment within a phrasequot this factor increases the salience value of an np that is not embedded within another np examples of nps not receiving head noun emphasis are the configuration information copied by backup configuration the assembly in bay c the connector labeled p3 on the flat cable nonadverbial emphasis any np not contained in an adverbial pp demarcated by a separatorlike head noun emphasis this factor penalizes nps in certain embedded constructionsexamples of nps not receiving nonadverbial emphasis are throughout the first section of this guide these symbols are also used in the panel definition panel select the quotspecifyquot option from the action barthe initial weights for each of the above factor types are given in table 1note that the relative weighting of some of these factors realizes a hierarchy of grammatical rolesthis indicates that the discourse referent you evoked by an anaphoric np is anaphorically linked to a previously introduced discourse referent yto avoid confusion with also coref is true for any discourse referent youthe coref relation defines equivalence classes of discourse referents with all discourse referents in an quotanaphoric chainquot forming one class each equivalence class of discourse referents has a salience weight associated with itthis weight is the sum of the current weight of all salience factors in whose scope at least one member of the equivalence class liesequivalence classes along with the sentence recency factor and the salience degradation mechanism constitute a dynamic system for computing the relative attentional prominence of denotational nps in textrap procedure for identifying antecedents of pronouns is as followsshalom lappin and herbert j leass an algorithm for pronominal anaphora resolution sentence attempt to identify their antecedentsresolution is attempted in the order of pronoun occurrence in the sentencein the case of lexical anaphors the possible antecedent binders were 
identified by the anaphor binding algorithmif more than one candidate was found the one with the highest salience weight was chosen in the case of third person pronouns resolution proceeds as follows recent discourse referent of each equivalence classthe salience weight of each candidate is calculated and included in the listthe salience weight of a candidate can be modified in several ways aif a candidate follows the pronoun its salience weight is reduced substantially bif a candidate fills the same slot as the pronoun its weight is increased slightly it is important to note that unlike the salience factors described in section 214 these modifications of the salience weights of candidates are local to the the resolution of a particular pronoun are determinedthe possible sg and pl genders are determined either of these can be a disjunction or nilpronominal forms in many languages are ambiguous as to number and gender such ambiguities are taken into account by rap morphological filter and by the algorithm as a wholethe search splits to consider singular and plural antecedents separately to allow a general treatment of number ambiguity c the syntactic filter is applied using the list of disjoint pronounnp pairs generated earlierthe filter excludes any candidate paired in the list with the pronoun being resolved as well as any candidate that is anaphorically linked to an np paired with the pronoun unambiguous as to number7the selected candidate is declared to be the antecedent of the pronounthe following properties of rap are worth notingfirst it applies a powerful syntactic and morphological filter to lists of pronounnp pairs to reduce the set of possible np antecedents for each pronounsecond np salience measures are specified largely in terms of syntactic properties and relations these include a hierarchy of grammatical roles level of phrasal embedding and parallelism of grammatical rolesemantic constraints and realworld knowledge play no role in filtering or salience rankingthird proximity of an np relative to a pronoun is used to select an antecedent in cases in which several candidates have equal salience weightingfourth intrasentential antecedents are preferred to intersentential candidatesthis preference is achieved by three mechanisms the fifth property which we note is that anaphora is strongly preferred to cataphorarap generates the list of noncoreferential pronounnp pairs for the current sentence the list of pleonastic pronouns if any in the current sentence the list of possible antecedent nplexical anaphor pairs if any for the current sentence and the list of pronounantecedent np pairs that it has identified for which antecedents may appear in preceding sentences in the texteach np appearing in any of the first three lists is represented by its lexical head followed by the integer that corresponds to its position in the sequence of tokens in the input string of the current sentencethe nps in the pairs of the pronounantecedent list are represented by their lexical heads followed by their ids displayed as a list of two integersafter installation of the option the backup copy of the reference diskette was started for the computer to automatically configure itselfantecedent nplexical anaphor pairs computer18 itself22 anaphorantecedent links itself to computer john talked to bill about himselfantecedent nplexical anaphor pairsjohn1 himself6 bill4 himself6 anaphorantecedent links himself to john in the second example joh n was preferred to bi i i owing to its higher salience weightmost 
of the copyright notices are embedded in the exec but this keyword makes it possible for a usersupplied function to have its own copyright noticenoncoreferential pronounnp pairs function and keyword share the highest salience weight of all candidates that pass the morphological and syntactic filters they are both subjects and therefore higher in salience than the third candidate exec function is then selected as the antecedent owing to its proximity to the anaphorbecause of this microemacs cannot process an incoming esc until it knows what character follows itescin addition m icroemacs is rewarded because it fills the same grammatical role as the anaphor being resolvedin the case of it the parallelism reward works in favor of esc causing it to be chosen despite the general preference for subjects over objectsat this point emacs is waiting for a commandit is prepared to see if the variable keys are true and executes some lines if they arenoncoreferential pronounnp pairs it1 key9 it1 line16 it1 they18 they18 it1 anaphorantecedent links it to emacs they to key the discourse referents currently defined can be displayed with their salience weightsthe display for the twosentence text of section 34 is as follows the members of an equivalence class are displayed on one linesince salience factors from previous sentences are degraded by a factor of two when each new sentence is processed discourse referents from earlier sentences that are not members of anaphoric chains extending into the current sentence rapidly become quotuncompetitivequot this example illustrates the strong preference for intrasentential antecedents printer is selected despite the fact that it is much lower on the hierarchy of grammatical roles than the other candidate file which also benefits from the parallelism rewarddegradation of salience weight for the candidate from the previous sentence is substantial enough to offset these factorsthe partnum tag prints a part number on the documentname initial setting places it on the back coverfour candidates receive a similar salience weighting in this exampletwo potential intrasentential candidates that would have received a high salience ranking sett ing and cover are ruled out by the syntactic filterthe remaining intrasentential candidate scsym8 ranks relatively low as it is a possessive determinerit scores lower than two candidates from the previous sentencethe parallelism reward causes num ber to be preferredwe tuned rap on a corpus of five computer manuals containing a total of approximately 82000 wordsfrom this corpus we extracted sentences with 560 occurrences of third person pronouns and their antecedentsin the training phase we refined our tests for pleonastic pronouns and experimented extensively with salience weightingour goal was of course to optimize rap success rate with the training corpuswe proceeded heuristically analyzing cases of failure and attempting to eliminate them in as general a manner as possiblethe parallelism reward was introduced at this time as it seemed to make a substantial contribution to the overall success ratea salience factor that was originally present viz matrix emphasis was revised to become the nonadverbial emphasis factorin its original form this factor contributed to the salience of any np not contained in a subordinate clause or in an adverbial pp demarcated by a separatorthis was found to be too general especially since the relative positions of a given pronoun and its antecedent candidates are not taken into accountthe revised factor 
could be thought of as an adverbial penalty factor since it in effect penalizes nps occurring in adverbial ppsio we also experimented with the initial weights for the various factors and with the size of the parallelism reward and cataphora penalty again attempting to optimize rap overall success ratea value of 35 was chosen for the parallelism reward this is just large enough to offset the preference for subjects over accusative objectsa much larger value was found to be necessary for the cataphora penaltythe final results that we obtained for the training corpus are given in table 2interestingly the syntacticmorphological filter reduces the set of possible antecedents to a single np or identifies the pronoun as pleonastic in 163 of the 475 cases that the algorithm resolves correctlyit significantly restricts the size of the candidate list in most of the other cases in which the antecedent is selected on the basis of salience ranking and proximitythis indicates the importance of a powerful syntacticmorphological filtering component in an anaphora resolution systemwe then performed a blind test of rap on a test set of 345 sentences randomly selected from a corpus of 48 computer manuals containing 125 million wordsthe results which we obtained for the test corpus are given in table 313 this blind test provides the basis for a comparative evaluation of rap and dagan several classes of errors that rap makes are worthy of discussionthe first occurs with many cases of intersentential anaphora such as the following this green indicator is lit when the controller is onit shows that the dc power supply voltages are at the correct levelsmorphological and syntactic filtering exclude all possible intrasentential candidatesbecause the level of sentential embedding does not contribute to rap salience weighting mechanism indicator and controller are ranked equally since both are subjectsrap then erroneously chooses control ler as the antecedent since it is closer to the pronoun than the other candidatethe next class of errors involves antecedents that receive a low salience weighting owing to the fact that the evoking np is embedded in a matrix np or is in another structurally nonprominent position the users you enroll may not necessarily be new to the system and may already have a user profile and a system distribution directory entryof course checks for the existence of these objects and only creates them as necessarydespite the general preference for intrasentential candidates user is selected as the antecedent since the only factor contributing to the salience weight of object is sentence recencyselectional restrictions or statistically measured lexical preferences could clearly help in at least some of these casesin another class of cases rap fails because semanticpragmatic information is required to identify the correct antecedent conditions13 proper resolution was determined by a consensus of three opinions including that of the first authoras you did with the function use it to verify that the items have been restored to your system successfully funct ion is selected as the antecedent rather than a id using the test corpus of our blind test we conducted experiments with modified versions of rap in which various elements of the salience weighting mechanism were switched offwe present the results in table 4 and discuss their significanceten variants are presented in table 4 they are as follows i quotstandardquot rap ii parallelism reward deactivated iii nonadverbial and head emphasis deactivated 
iv matrix emphasis used instead of nonadverbial emphasis v cataphora penalty deactivated vi subject existential accusative and indirect objectoblique complement emphasis deactivated vii equivalence classes deactivated viii sentence recency and salience degradation deactivated ix all quotstructuralquot salience weighting deactivated x all salience weighting and degradation deactivated the single most important element of the salience weighting mechanism is the recency preference this is not surprising given the relative scarcity of intersentential anaphora in our test corpus deactivating the equivalence class mechanism also led to a significant deterioration in rap performance in this variant only the salience factors applying to a particular np contribute to its salience weight without any contribution from other anaphorically linked npsthe performance of the syntactic filter is degraded somewhat in this variant as well since nps that are anaphorically linked to an np fulfilling the criteria for disjoint reference will no longer be rejected as antecedent candidatesthe results for vii and viii indicate that attentional state plays a significant role in pronominal anaphora resolution and that even a simple model of attentional state can be quite effectivedeactivating the syntaxbased elements of the salience weighting mechanism individually led to relatively small deteriorations in the overall success rate eliminating the hierarchy of grammatical roles for example led to a deterioration of less than 4despite the comparatively small degradation in performance that resulted from turning off these elements individually their combined effect is quite significant as the results of ix showthis suggests that the syntactic salience factors operate in a complex and highly interdependent manner for anaphora resolutionx relies solely on syntacticmorphological filtering and proximity to choose an antecedentnote that the sentence pairs of the blind test set were selected so that for each pronoun occurrence at least two antecedent candidates remained after syntacticmorphological filtering in the 17 cases in which x correctly disagreed with rap the proper antecedent happened to be the most proximate candidatewe suspect that rap overall success rate can be improved by refining its measures of structural salienceother measures of embeddedness or perhaps of quotdistancequot between anaphor and candidate measured in terms of clausal and np boundaries may be more effective than the current mechanisms for nonadverbial and head emphasisempirical studies of patterns of pronominal anaphora in corpora could be helpful in defining the most effective measures of structural salienceone might use such studies to obtain statistical data for determining the reliability of each proposed measure as a predictor of the antecedentanaphor relation and the orthogonality of all proposed measuresdagan constructs a procedure which he refers to as rapstat for using statistically measured lexical preference patterns to reevaluate rap salience rankings of antecedent candidatesrapstat assigns a statistical score to each element of a candidate list that rap generates this score is intended to provide a measure of the preference that lexical semanticpragmatic factors impose upon the candidate as a possible antecedent for a given pronoun14 such a distance measure is reminiscent of hobbs tree search proceduresee section 61 for a discussion of hobbs algorithm and its limitationsthe results for iv confirm our suspicions from the training 
phase that matrix emphasis does not contribute significantly to successful resolution15 assume that p is a nonpleonastic and nonreflexive pronoun in a sentence such that rap generates the nonempty list l of antecedent candidates for p let h be the lexical head of which p is an argument or an adjunct in the sentencerapstat computes a statistical score for each element c of l on the basis of the frequency in a corpus with which ci occurs in the same grammatical relation with h as p occurs with h in the sentencethe statistical score that rapstat assigns to ci is intended to model the probability of the event where c stands in the relevant grammatical relation to h given the occurrence of c rapstat reevaluates rap ranking of the elements of the antecedent candidate list l in a way that combines both the statistical scores and the salience values of the candidatesthe elements of l appear in descending order of salience valuerapstat processes l as followsinitially it considers the first two elements c1 and c2 of l if the difference in salience scores between c1 and c2 does not exceed a parametrically specified value and the statistical score of c2 is significantly greater than that of c1 then rapstat will substitute the former for the latter as the currently preferred candidateif conditions and do not hold rapstat confirms rap selection of c1 as the preferred antecedentif these conditions do hold then rapstat selects c2 as the currently preferred candidate and proceeds to compare it with the next element of l it repeats this procedure for each successive pair of candidates in l until either or fails or the list is completedin either case the last currently preferred candidate is selected as the antecedentan example of a case in which rapstat overules rap is the followingthe send message display is shown allowing you to enter your message and specify where it will be sentthe two top candidates in the list that rap generates for it are display with a salience value of 345 and message which has a salience value of 315in the corpus that we used for testing rapstat the verbobject pair senddisplay appears only once whereas sendmessage occurs 289 timesas a result message receives a considerably higher statistical score than displaythe salience difference threshold that we used for the test is 100 and conditions and hold for these two candidatesthe difference between the salience value of message and the third element of the candidate list is greater than 100therefore rapstat correctly selects message as the antecedent of itdagan et al report a comparative blind test of rap and rapstatto construct a database of grammatical relation counts for rapstat we applied the slot grammar parser to a corpus of 125 million words of text from 48 computer manualswe automatically extracted all lexical tuples and recorded their frequencies in the parsed corpuswe then constructed a test set of pronouns by randomly selecting from the corpus sentences containing at least one nonpleonastic third person pronoun occurrencefor each such sentence in the set we included the sentence that immediately precedes it in the text we filtered the test set so that for each pronoun occurrence in the set rap generates a candidate list with at least two elements the actual antecedent np appears in the candidate list and there is a total tuple frequency greater than 1 for the candidate see dagan 1992 and dagan et al for a discussion of this lexical statistical approach to ranking antecedent candidates and possible alternatives16 in the 
interests of simplicity and uniformity we discarded sentence pairs in which the first sentence contains a pronounwe decided to limit the text preceding the sentence containing the pronoun to one sentence because we found that in the manuals which we used to tune the algorithm almost all cases of intersentential anaphora involved an antecedent in the immediately preceding sentencemoreover the progressive decline in the salience values of antecedent candidates in previous sentences ensures that a candidate appearing in a sentence which is more than one sentence prior to the current one will be selected only if no candidates exist in either the current or the preceding sentenceas such cases are relatively rare in the type of text we studied we limited our test set to textual units containing the current and the preceding sentence list 17 the test set contains 345 sentence pairs with a total of 360 pronoun occurrencesthe results of the blind test for rap and rapstat are as followswhen we further analyzed the results of the blind test we found that rapstat success depends in large part on its use of salience informationif rapstat statistically based lexical preference scores are used as the only criterion for selecting an antecedent the statistical selection procedure disagrees with rap in 151 out of 338 instancesrap is correct in 120 of these cases and the statistical decision in 31 of the caseswhen salience is factored into rapstat decision procedure the rate of disagreement between rap and rapstat declines sharply and rapstat performance slightly surpasses that of rap yielding the results that we obtained in the blind testin general rapstat is a conservative statistical extension of rapit permits statistically measured lexical preference to overturn saliencebased decisions only in cases in which the difference between the salience values of two candidates is small and the statistical preference for the less salient candidate is comparatively largethe comparative blind test indicates that incorporating statistical information on lexical preference patterns into a saliencebased anaphora resolution procedure can yield a modest improvement in performance relative to a system that relies only on syntactic salience for antecedent selectionour analysis of these results also shows that statistically measured lexical preference patterns alone provide a far less efficient basis for anaphora resolution than an algorithm based on syntactic and attentional measures of saliencewe will briefly compare our algorithm with several other approaches to anaphora resolution that have been suggestedhobbs algorithm relies on a simple tree search procedure formulated in terms of depth of embedding and leftright orderby contrast rap uses a multidimensional measure of salience that invokes a variety of syntactic properties specified in terms of the headargument structures of slot grammar as well as a model of attentional statehobbs tree search procedure selects the first candidate encountered by a left right depth first search of the tree outside of a minimal path to the pronoun that satisfies certain configurational constraintsthe algorithm chooses as the antecedent of a pronoun p the first np in the tree obtained by lefttoright breadthfirst traversal of the branches to the left of the path t such that t is the path from the np dominating p to the first np or s dominating this np t contains an np or s node n that contains the np dominating p and n does not contain npif an antecedent satisfying this condition is not 
found in the sentence containing p the algorithm selects the first np obtained by a lefttoright breadth first search of the surface structures of preceding sentences in the textwe have implemented a version of hobbs algorithm for slot grammarthe original formulation of the algorithm encodes syntactic constraints on pronominal anaphora in the definition of the domain to which the search for an antecedent np appliesin our implementation of the algorithm we have factored out the search procedure and substituted rap syntacticmorphological filter for hobbs procedural filterlet the mods of a head h be the sisters of h in the slot grammar representation of the phrase that h headsour specification of hobbs algorithm for slot grammar is as follows we ran this version of hobbs algorithm on the test set that we used for the blind test of rap and rapstat the results appear in table 5it is important to note that the test set does not include pleonastic pronouns or lexical anaphors neither of which are dealt with by hobbs algorithmmoreover our slot grammar implementation of the algorithm gives it the full advantage of rap syntacticmorphological filter which is more powerful than the configurational filter built into the original specification of the algorithmtherefore the test results provide a direct comparison of rap salience metric and hobbs search procedurehobbs algorithm was more successful than rap in resolving intersentential anaphora because intersentential anaphora is relatively rare in our corpus of computer manual texts and because rap success rate for intrasentential anaphora is higher than hobbs rap overall success rate on the blind test set is 4 higher than that of our version of hobbs algorithmthis indicates that rap salience metric provides a more reliable basis for antecedent selection than hobbs search procedure for the text domain on which we tested both algorithmsit is clear from the relatively high rate of agreement between rap and hobbs algorithm on the test set that there is a significant degree of convergence between salience as measured by rap and the configurational prominence defined by hobbs search procedurethis is to be expected in english in which grammatical roles are identified by means of phrase orderhowever in languages in which grammatical roles are case marked and word order is relatively free we expect that there will be greater divergence in the predictions of the two algorithmsthe salience measures used by rap have application to a wider class of languages than hobbs orderbased search procedurethis procedure relies on a correspondence of grammatical roles and linear precedence relations that holds for a comparatively small class of languagesmost of the work in this area seeks to formulate general principles of discourse structure and interpretation and to integrate methods of anaphora resolution into a computational model of discourse interpretation sidner grosz joshi and weinstein grosz and sidner 21 the difficulty that rap encounters with such cases was discussed in section 41we are experimenting with refinements in rap scoring mechanism to improve its performance in these and other casesbrennan friedman and pollard and webber present different versions of this approachdynamic properties of discourse especially coherence and focusing are invoked as the primary basis for identifying antecedence candidates selecting a candidate as the antecedent of a pronoun in discourse involves additional constraints of a syntactic semantic and pragmatic naturein developing our 
algorithm we have not attempted to consider elements of discourse structure beyond the simple model of attentional state realized by equivalence classes of discourse referents salience degradation and the sentence recency salience factorthe results of our experiments with computer manual texts indicate that at least for certain text domains relatively simple models of discourse structure can be quite useful in pronominal anaphora resolutionwe suspect that many aspects of discourse models discussed in the literature will remain computationally intractable for quite some time at least for broadcoverage systemsa more extensive treatment of discourse structure would no doubt improve the performance of a structurally based algorithm such as rapat the very least formatting information concerning paragraph and section boundaries list elements etc should be taken into accounta treatment of definite np resolution would also presumably lead to more accurate resolution of pronominal anaphora since it would improve the reliability of the salience weighting mechanismhowever some current discoursebased approaches to anaphora resolution assign too dominant a role to coherence and focus in antecedent selectionas a result they establish a strong preference for intersentential over intrasentential anaphora resolutionthis is the case with the anaphora resolution algorithm described by brennan friedman and pollard this algorithm is based on the centering approach to modeling attentional structure in discourse 22 constraints and rules for centering are applied by the algorithm as part of the selection procedure for identifying the antecedents of pronouns in a discoursethe algorithm strongly prefers intersentential antecedents that preserve the center or maximize continuity in center change to intrasentential antecedents that cause radical center shiftsthis strong preference for intersentential antecedents is inappropriate for at least some text domainsin our corpus of computer manual texts for example we estimate that less than 20% of referentially used third person pronouns have intersentential antecedentsthere is a second difficulty with the brennan et al centering algorithmit uses a hierarchy of grammatical roles quite similar to that of rap but this role hierarchy does not directly influence antecedent selectionwhereas the hierarchy in rap contributes to a multidimensional measure of the relative salience of all antecedent candidates in brennan et al 1987 it is used only to constrain the choice of the backwardlooking center cb of an utteranceit does not serve as a general preference measure for antecedencethe items in the forward center list cf are ranked according to the hierarchy of grammatical rolesfor an utterance un cb is required to be the highest ranked element of cf that is realized in un if an element e in the list of possible forward centers cf is identified as the antecedent of a pronoun in un then e is realized in un the brennan et al centering algorithm does not require that the highest ranked element of cf actually be realized in un but only that cb be the highest ranked element of cf which is in fact realized in un antecedent selection is constrained by rules that sustain cohesion in the relations between the backward centers of successive utterances in a discourse but it is not determined directly by the role hierarchy used to rank the forward centers of a previous utterancetherefore an np in un_1 that is relatively low in the hierarchy of grammatical roles can serve as an antecedent of a pronoun in un provided that no higher ranked np in un_1 is taken as the antecedent of some other pronoun or definite np in un
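The contrast drawn here between RAP's use of the role hierarchy and its use in centering can be made concrete with a small sketch. The following Python fragment is purely illustrative: the role ranking, the realization test, and all names are assumptions, not the Brennan, Friedman, and Pollard implementation. It ranks the forward centers of an utterance by grammatical role and computes the backward center as the highest-ranked previously introduced entity realized in the current utterance, which is the only place the hierarchy is consulted; it uses the printer example developed in the next paragraph.

```python
# Illustrative only: a toy version of the two centering notions discussed above.
# The role ranking, the realization test, and all names are assumptions.

ROLE_RANK = {"subj": 0, "obj": 1, "iobj": 2, "other": 3}

def forward_centers(utterance):
    """Order the entities of an utterance (a list of (entity, role) pairs)
    by the grammatical role hierarchy, yielding the forward center list Cf."""
    return [e for e, r in sorted(utterance, key=lambda er: ROLE_RANK.get(er[1], 99))]

def backward_center(cf_prev, realized_in_current):
    """Cb of the current utterance: the highest ranked element of the previous
    Cf that is realized in the current utterance.  Note that this does not by
    itself force the most highly ranked candidate to be chosen as a pronoun's
    antecedent."""
    for entity in cf_prev:
        if entity in realized_in_current:
            return entity
    return None

# "the display shows you the status of all the printers"
u1 = [("display", "subj"), ("status", "obj"), ("printers", "other")]
cf1 = forward_centers(u1)          # ['display', 'status', 'printers']
# "it also provides options that control printers": nothing rules out taking
# "status" as the antecedent of "it", in which case Cb is simply "status".
print(backward_center(cf1, {"status", "options", "printers"}))   # -> status
print(backward_center(cf1, {"display", "options", "printers"}))  # -> display
```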
an example will serve to illustrate the problem with this approachthe display shows you the status of all the printersit also provides options that control printersthe forward center list for the first sentence is as follows applying the filters and ranking mechanism of brennan friedman and pollard yields two possible anchors25 each anchor determines a choice of cb and the antecedent of itone anchor identifies both with display whereas the second takes both to be statusthe hierarchy of grammatical roles is not used to select display over statusnothing in the algorithm rules out the choice of status as the backward center for the second sentence and as the antecedent of itif this selection is made display is not realized in the second sentence and so cb is status which is then the highest ranked element of cf that is realized in un as required by constraint 3 of the brennan et al centering algorithmin general we agree with alshawi that an algorithm/model relying on the relative salience of all entities evoked by a text with a mechanism for removing or filtering entities whose salience falls below a threshold is preferable to models that "make assumptions about a single focus of attention" this approach seeks to combine a variety of syntactic semantic and discourse factors into a multidimensional metric for ranking antecedent candidateson this view the score of a candidate is a composite of several distinct scoring procedures each of which reflects the prominence of the candidate with respect to a specific type of information or propertythe systems described by asher and wada carbonell and brown and rich and luperfoy are examples of this mixed evaluation strategyin general these systems use composite scoring procedures that assign a global rank to an antecedent candidate on the basis of the scores that it receives from several evaluation metricseach such metric scores the likelihood of the candidate relative to a distinct informational factorthus for example rich and luperfoy propose a system that computes the global preference value of a candidate from the scores provided by a set of constraint source modules in which each module invokes different sorts of conditions for ranking the antecedent candidatesthe set of modules includes syntactic and morphological filters for checking agreement and syntactic conditions on disjoint reference a procedure for applying semantic selection restrictions to a verb and its arguments a component that uses contextual and realworld knowledge and modules that represent both the local and global focus of discoursethe global ranking of an antecedent candidate is a function of the scores that it receives from each of the constraint source modulesour algorithm also uses a mixed evaluation strategywe have taken inspiration from the discussions of scoring procedures in the works cited above but we have avoided constraint sources involving complex inferencing mechanisms and realworld knowledge typically required for evaluating the semanticpragmatic suitability of antecedent candidates or for determining details of discourse structurein general it seems to us that reliable large scale modelling of realworld and contextual factors is beyond the capabilities of current computational systemseven constructing a comprehensive computationally viable system of semantic selection restrictions and an associated type hierarchy for a natural
language is an exceedingly difficult problem which to our knowledge has yet to be solvedmoreover our experiments with statistically based lexical preference information casts doubt on the efficacy of relatively inexpensive methods for capturing semantic and pragmatic factors for purposes of anaphora resolutionour results suggest that scoring procedures which rely primarily on tractable syntactic and attentional properties can yield a broad coverage anaphora resolution system that achieves a good level of performancewe have designed and implemented an algorithm for pronominal anaphora resolution that employs measures of discourse salience derived from syntactic structure and a simple dynamic model of attentional statewe have performed a blind test of this algorithm on a substantial set of cases taken from a corpus of computer manual text and found it to provide good coverage for this setit scored higher than a version of hobbs algorithm that we implemented for slot grammarresults of experiments with the test corpus show that the syntaxbased elements of our salience weighting mechanism contribute in a complexly interdependent way to the overall effectiveness of the algorithmthe results also support the view that attentional state plays a significant role in pronominal anaphora resolution and demonstrate that even a simple model of attentional state can be quite effectivethe addition of statistically measured lexical preferences to the range of factors that the algorithm considers only marginally improved its performance on the blind test setanalysis of the results indicates that lexical preference information can be useful in cases in which the syntactic salience ranking does not provide a clear decision among the top candidates and there is a strong lexical preference for one of the less salient candidatesthe relatively high success rate of the algorithm suggests the viability of a computational model of anaphora resolution in which the relative salience of an np in discourse is determined in large part by structural factorsin this model semantic and realworld knowledge conditions apply to the output of an algorithm that resolves pronominal anaphora on the basis of syntactic measures of salience recency and frequency of mentionthese conditions are invoked only in cases in which salience does not provide a clearcut decision andor there is substantial semanticpragmatic support for one of the less salient candidateswe would like to thank martin chodorow ido dagan john justeson slava katz michael mccord hubert lehman amnon ribak ulrike schwa11 and marilyn walker for helpful discussion of many of the ideas and proposals presented herethe blind test and evaluation of rapstat reported here was done jointly with ido dagan john justeson and amnon ribakan early version of this paper was presented at the cognitive science colloquium of the university of pennsylvania in january 1992 and we are grateful to the participants of the colloquium for their reactions and suggestionswe are also grateful to several anonymous reviewers of computational linguistics for helpful comments on earlier drafts of the paper
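As a compact illustration of the model described in the concluding paragraph above — morphological/syntactic filtering, salience ranking, and proximity as a tie-breaker — here is a minimal Python sketch. The factor names and initial weights are assumptions chosen for the example; only the halving of weights per intervening sentence and the proximity tie-break follow the text, and this is not the Prolog implementation of RAP.

```python
# Minimal sketch, under simplifying assumptions, of the selection scheme
# summarized above: filter candidates morphologically/syntactically, rank the
# survivors by accumulated salience, and break ties by proximity to the pronoun.

INITIAL_WEIGHTS = {"recency": 100, "subject": 80, "object": 50,      # assumed values
                   "head_noun": 80, "non_adverbial": 50, "existential": 70}

def salience(candidate):
    """Sum the degraded weights of all factors in whose scope the candidate's
    equivalence class lies; `age` is the number of new sentences since the
    factor was created (weights are halved once per new sentence)."""
    return sum(INITIAL_WEIGHTS[f] // (2 ** age) for f, age in candidate["factors"])

def resolve(pronoun, candidates, passes_filter):
    """Return the head of the highest-salience candidate that survives the
    morphological/syntactic filter, preferring the closer one on a tie."""
    survivors = [c for c in candidates if passes_filter(pronoun, c)]
    if not survivors:
        return None
    return max(survivors, key=lambda c: (salience(c), -c["distance"]))["head"]

# the indicator/controller example from section 4: equal salience, so the
# candidate closer to the pronoun wins (here, the wrong one).
cands = [{"head": "indicator", "factors": [("recency", 1), ("subject", 1)], "distance": 12},
         {"head": "controller", "factors": [("recency", 1), ("subject", 1)], "distance": 5}]
print(resolve("it", cands, lambda p, c: True))   # -> controller
```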
J94-4002
an algorithm for pronominal anaphora resolutionthis paper presents an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors the algorithm applies to the syntactic representations generated by mccord slot grammar parser and relies on salience measures derived from syntactic structure and a simple dynamic model of attentional statelike the parser the algorithm is implemented in prologthe authors have tested it extensively on computer manual texts and conducted a blind test on manual text containing 360 pronoun occurrencesthe algorithm successfully identifies the antecedent of the pronoun for 86 of these pronoun occurrencesthe relative contributions of the algorithm components to its overall success rate in this blind test are examinedexperiments were conducted with an enhancement of the algorithm that contributes statistically modelled information concerning semantic and realworld relations to the algorithm decision procedureinterestingly this enhancement only marginally improves the algorithm performance the algorithm is compared with other approaches to anaphora resolution that have been proposed in the literaturein particular the search procedure of hobbs algorithm was implemented in the slot grammar framework and applied to the sentences in the blind test setthe authors algorithm achieves a higher rate of success than hobbs algorithmthe relation of the algorithm to the centering approach is discussed as well as to models of anaphora resolution that invoke a variety of informational factors in ranking antecedent candidatesin the heuristic saliencebased algorithm for pronoun resolution we introduce a procedure for identifying anaphorically linked np as a cluster for which a global salience value is computed as the sum of the salience values of its elementswe describe an algorithm for pronominal anaphora resolution that achieves a high rate of correct analyses
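The statistically modelled enhancement mentioned in the summary (the RAPSTAT reevaluation of section 5) can be sketched as a conservative override of the salience ranking. The following Python fragment is a hypothetical simplification: the salience-difference threshold of 100 is the value reported for the blind test, but the "significantly greater" test on the lexical-preference scores is replaced here by a simple placeholder margin, and the data structures are illustrative.

```python
# Hypothetical simplification of the RAPSTAT reevaluation described in
# section 5: statistics may overturn the salience ranking only when two
# candidates are close in salience and the lower ranked one is clearly
# preferred by the lexical statistics.

SALIENCE_DIFF = 100   # salience-difference threshold reported for the blind test
STAT_MARGIN = 2.0     # placeholder for the "significantly greater" test

def reevaluate(candidates, stat_score):
    """candidates: (np, salience) pairs in descending order of salience.
    stat_score(np): corpus-based preference score for np in the pronoun's
    syntactic slot.  Returns the selected antecedent."""
    current = candidates[0]
    for nxt in candidates[1:]:
        close = current[1] - nxt[1] <= SALIENCE_DIFF
        stronger = stat_score(nxt[0]) >= STAT_MARGIN * stat_score(current[0])
        if close and stronger:
            current = nxt          # the statistics overturn the salience decision
        else:
            break                  # otherwise the current candidate is confirmed
    return current[0]

# "... and specify where it will be sent": send-display occurs once,
# send-message 289 times in the parsed corpus.
counts = {"display": 1, "message": 289}
print(reevaluate([("display", 345), ("message", 315)],
                 lambda np: counts.get(np, 0)))   # -> message
```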
word sense disambiguation using a second language monolingual corpus this paper presents a new approach for resolving lexical ambiguities in one language using statistical data from a monolingual corpus of another language this approach exploits the differences between mappings of words to senses in different languages the paper concentrates on the problem of target word selection in machine translation for which the approach is directly applicable the presented algorithm identifies syntactic relations between words using a source language parser and maps the alternative interpretations of these relations to the target language using a bilingual lexicon the preferred senses are then selected according to statistics on lexical relations in the target language the selection is based on a statistical model and on a constraint propagation algorithm which simultaneously handles all ambiguities in the sentence the method was evaluated using three sets of hebrew and german examples and was found to be very useful for disambiguation the paper includes a detailed comparative analysis of statistical sense disambiguation methods this paper presents a new approach for resolving lexical ambiguities in one language using statistical data from a monolingual corpus of another languagethis approach exploits the differences between mappings of words to senses in different languagesthe paper concentrates on the problem of target word selection in machine translation for which the approach is directly applicablethe presented algorithm identifies syntactic relations between words using a source language parser and maps the alternative interpretations of these relations to the target language using a bilingual lexiconthe preferred senses are then selected according to statistics on lexical relations in the target languagethe selection is based on a statistical model and on a constraint propagation algorithm which simultaneously handles all ambiguities in the sentencethe method was evaluated using three sets of hebrew and german examples and was found to be very useful for disambiguationthe paper includes a detailed comparative analysis of statistical sense disambiguation methodsthe resolution of lexical ambiguities in nonrestricted text is one of the most difficult tasks of natural language processinga related task in machine translation on which we focus in this paper is target word selectionthis is the task of deciding which target language word is the most appropriate equivalent of a source language word in contextin addition to the alternatives introduced by the different word senses of the source language word the target language may specify additional alternatives that differ mainly in their usagetraditionally several linguistic levels were used to deal with this problem syntactic semantic and pragmaticcomputationally the syntactic methods are the most affordable but are of no avail in the frequent situation when the different senses of the word show the same syntactic behavior having the same part of speech and even the same subcategorization framesubstantial application of semantic or pragmatic knowledge about the word and its context requires compiling huge amounts of knowledge the usefulness of which for practical applications in broad domains has not yet been proven moreover such methods usually do not reflect word usagesstatistical approaches which were popular several decades ago have recently reawakened and were found to be useful for computational linguisticswithin this framework a possible 
alternative to using manually constructed knowledge can be found in the use of statistical data on the occurrence of lexical relations in large corpora the use of such relations for various purposes has received growing attention in recent research more specifically two recent works have suggested using statistical data on lexical relations for resolving ambiguity of prepositional phrase attachment and pronoun references clearly statistics on lexical relations can also be useful for target word selectionconsider for example the hebrew sentence extracted from the foreign news section of the daily haaretz september 1990 nose ze mana mishtei hamdinot milahtom al hoze shalom issue this prevented from-two the-countries from-signing on treaty peace this sentence would translate into english as this issue prevented the two countries from signing a peace treatythe verb lahtom has four senses sign seal finish and closethe noun hoze means both contract and treaty where the difference is mainly in usage rather than in the meaning one possible solution is to consult a hebrew corpus tagged with word senses from which we would probably learn that the sense sign of lahtom appears more frequently with hoze as its object than all the other sensesthus we should prefer that sensehowever the size of corpora required to identify lexical relations in a broad domain is very large and therefore it is usually not feasible to have such corpora manually tagged with word sensesthe problem of choosing between treaty and contract cannot be solved using only information on hebrew because hebrew does not distinguish between themthe solution suggested in this paper is to identify the lexical relations in corpora of the target language instead of the source languagewe consider word combinations and count how often they appear in the same syntactic relation as in the ambiguous sentencefor the above example the noun compound peace treaty appeared 49 times in our corpus whereas the compound peace contract did not appear at all the verbobj combination to sign a treaty appeared 79 times whereas none of the other three alternatives appeared more than twicethus we first prefer treaty to contract because of the noun compound peace treaty and then proceed to prefer sign since it appears most frequently having the object treatythe order of selection is determined by a constraint propagation algorithmin both cases the correctly selected word is not the most frequent one close is more frequent in our corpus than sign and contract is more frequent than treatyalso by using a model of statistical confidence the algorithm avoids a decision in cases in which no alternative is significantly better than the othersour approach can be analyzed from two different points of viewfrom that of monolingual sense disambiguation we exploit the fact that the mapping between words and word senses varies significantly among different languagesthis enables us to map an ambiguous construct from one language to another obtaining representations in which each sense corresponds to a distinct wordnow it is possible to collect cooccurrence statistics automatically from a corpus of the other language without requiring manual tagging of sensesfrom the point of view of machine translation we suggest that some ambiguity problems are easier to solve at the level of the target language than the source languagethe source language sentences are considered a noisy source for target language sentences and our task is to devise a target language model that prefers the most reasonable translation
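The selection logic in the example above can be sketched in a few lines of Python. The counts of 49 and 79 reproduce the figures quoted in the text; the counts for the losing verb alternatives are assumptions consistent with the statement that none appeared more than twice, and the relation labels and helper function are illustrative rather than part of the described system.

```python
# Illustrative sketch of the selection-by-counts step in the example above.
TUPLE_COUNTS = {
    ("ncomp", "peace", "treaty"): 49,
    ("ncomp", "peace", "contract"): 0,
    ("verb-obj", "sign", "treaty"): 79,
    ("verb-obj", "seal", "treaty"): 0,     # assumed: "no more than twice"
    ("verb-obj", "finish", "treaty"): 0,   # assumed
    ("verb-obj", "close", "treaty"): 2,    # assumed
}

def best_alternative(relation, alternatives):
    """Among the alternative word tuples filling one syntactic relation,
    return the one whose target-language tuple is most frequent."""
    return max(alternatives, key=lambda words: TUPLE_COUNTS.get((relation,) + words, 0))

# first the noun compound settles hoze, then the verb-object relation settles
# lahtom given that choice -- the order a constraint propagation step imposes here
noun = best_alternative("ncomp", [("peace", "treaty"), ("peace", "contract")])[1]
verb = best_alternative("verb-obj", [(v, noun) for v in ("sign", "seal", "finish", "close")])[0]
print(noun, verb)   # -> treaty sign
```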
machine translation is thus viewed in part as a recognition problem and the statistical model we use specifically for target word selection may be compared with other language models in recognition tasks to a limited extent this view is shared with the statistical machine translation system of brown et al which employs a target language ngram model in contrast to this view previous approaches in machine translation typically resolve examples like by stating various constraints in terms of the source language as explained above such constraints cannot be acquired automatically and therefore are usually limited in their coveragethe experiments we conducted clearly show that statistics on lexical relations are very useful for disambiguationmost notable is the result for the set of examples of hebrew to english translation which was picked randomly from foreign news sections in the israeli pressfor this set the statistical model was applicable for 70% of the ambiguous words and its selection was then correct for 91% of the caseswe cite also the results of a later experiment that tested a weaker variant of our method on texts in the computer domain achieving a precision of 85%both results significantly improve upon a naive method that uses only a priori word probabilitiesthese results are comparable to recent reports in the literature it should be emphasized though that our results were achieved for a realistic simulation of a broad coverage machine translation system on randomly selected exampleswe therefore believe that our figures reflect the expected performance of the algorithm in a practical implementationon the other hand most other results relate to a small number of words and senses that were determined by the experimenterssection 2 of the paper describes the linguistic model we use employing a syntactic parser and a bilingual lexiconsection 3 presents the statistical model assuming a multinomial model for a single lexical relation and then using a constraint propagation algorithm to account simultaneously for all relations in the sentencesection 4 describes the experimental settingsection 5 presents and analyzes the results of the experiment and cites additional results in section 6 we analyze the limitations of the algorithm in different cases and suggest enhancements to improve itwe also discuss the possibility of adopting the algorithm for monolingual applicationsfinally in section 7 we present a comparative analysis of statistical sense disambiguation methods and then conclude in section 82 a similar observation underlies the use of parallel bilingual corpora for sense disambiguation as we explain in section 7 these corpora are a form of a manually tagged corpus and are more difficult to obtain than monolingual corporaerroneously the preliminary publication of our method was cited several times as requiring a parallel bilingual corpusour approach is first to use a bilingual lexicon to find all possible translations of each lexically ambiguous word in the source sentence and then use statistical information gathered from target language corpora to choose the most appropriate alternativeto carry out this task we need the following linguistic tools which are discussed in detail in the following sections section 21 parsers for both the source language and the target languagethese parsers should be capable of locating relevant syntactic relations such as subjverb verbobj etcsection 22 a bilingual lexicon that lists alternative translations for each source language
wordif a word belongs to several syntactic categories there should be a separate list for each onesection 23 a procedure for mapping the source language syntactic relations to those of the target languagesuch tools have been implemented within the framework of many computational linguistic theorieswe have used mccord implementation of slot grammars however our method could have proceeded just as well using other linguistic modelsthe linguistic model will be illustrated by the following hebrew example taken from the haaretz daily newspaper from september 1990 et hasikkuyim 1hassagat hitqaddmut basihot thechances forachieving progress in thetalks here the ambiguous words in translation to english are magdila hitqaddmut and sihotto facilitate the reading we give the translation of the sentence into english and in each case of an ambiguous selection all the alternatives are listed within curly brackets the first alternative being the correct one diplomats believe that the joining of hon sun increases i enlarges i magnifies the chances for achieving progress i advance i advancement in the talks i conversations i callsthe following subsections describe in detail the processing steps of the linguistic modelthese include locating the ambiguous words and the relevant syntactic relations among them in the source language sentence mapping these relations to alternative relations in the target language and finally counting occurrences of these alternatives in a target language corpusour model defines the different quotsensesquot of a source word to be all its possible translations to the target language as listed in a bilingual lexiconsome translations can be eliminated by the syntactic environment of the word in the source languagefor example in the following two sentences the word consider should be translated in these examples the different syntactic subcategorization frames determine two different translations to hebrew thus eliminating some of the ambiguitysuch syntactic rules that allow us to resolve some of the ambiguities may be encoded in the lexicon however many ambiguities cannot be resolved on syntactic groundsthe purpose of this work is to resolve the remaining ambiguities using lexical cooccurrence preferences obtained by statistical methodsour basic concept is the syntactic tuple which denotes a syntactic relation between two or more wordsit is denoted by the name of the syntactic relation followed by a sequence of words that satisfies the relation appearing in their base form for example is a syntactic tuple which occurs in the sentence the man walked homewe assume that our parser can locate the syntactic relation corresponding to a given syntactic tuple in a sentencethe use of the base form of words is justified by the additional assumption that morphological inflections do not affect the probability of syntactic tuplesthis assumption is not entirely accurate but it has proven practically useful and reduces the number of distinct tuplesin our experience the following syntactic relations proved useful for resolving ambiguities as mentioned earlier the full list of syntactic relations depends on the syntactic theory of the parserour model is general and does not depend on any particular listhowever we have found some desired properties in defining the relevant syntactic relationsone such property is the use of deep or canonical relations as was already identified by grishman hirschman and nhan this property was directly available from the esg parser which identifies the underlying 
syntactic function in constructs such as passives and relative clauseswe have also implemented an additional routine which modified or filtered some of the relations received from the parserthis postprocessing routine dealt mainly with function words and prepositional phrases to get a set of more informative relationsfor example it combined the subject and complement of the verb be into a single relationlikewise a verb with its preposition and the head noun of a modifying prepositional phrase were also combinedthe routine was designed to choose relations that impose considerable restrictions on the possible syntactic tupleson the other hand these relations should not be too specific to allow statistically meaningful samplesthe first step in resolving an ambiguity is to find all the syntactic tuples containing the ambiguous wordsfor we get the following syntactic tuples in using these tuples we expect to capture lexical constraints that are imposed by syntactic relationsthe set of syntactic tuples in the source language sentence is reflected in its translation to the target languageas a syntactic tuple is defined by both its syntactic relation and the words that appear in it we need to map both components to the target languageby definition every ambiguous source language word maps to several target language wordswe thus get several alternative target language tuples for each source language tuple that involves an ambiguous wordfor example for tuple 3 in we obtain three alternatives corresponding to the three different translations of the word hitqaddmutfor tuple 4 we obtain nine alternative target tuples since each of the words hit qaddmut and say maps to three different english wordsthe full mapping of the hebrew tuples in to english tuples appears in table 1 each of the tuple sets in this table denotes the alternatives for translating the corresponding hebrew tuplefrom a theoretical point of view the mapping of syntactic relations is more problematicthere need not be a onetoone mapping from source language relations to target language onesin many cases the mapping depends on the words of the syntactic tuple as seen in the following example of translating from german to englishin this example the source language subject becomes the direct object in the target whereas the direct object in the source language becomes the subject in the targettherefore the german syntactic tuples in practice this is less of a problemin most cases the source language relation has a direct equivalent in the target languagein many other cases transformation rules can be encoded either in the lexicon or as syntactic transformationsthese rules are usually available in machine translation systems that use the transfer method as this knowledge is required to generate target language structuresto facilitate further the mapping of syntactic relations and to avoid errors due to fine distinctions between them we grouped related syntactic relations into a single quotgeneral classquot and mapped this class to the target languagethe important classes used were relations between a verb and its arguments and modifiers and between a noun and its arguments and modifiers the classification enables us to get more statistical data for each class as it reduces the number of relationsthe success of using this general level of syntactic relations indicates that even a rough mapping of source to target language relations is useful for the statistical modelwe now wish to determine the plausibility of each alternative target word 
being the translation of an ambiguous source wordin our model the plausibility of selecting a target word is determined by the plausibility of the tuples that are obtained from itthe plausibility of alternative target tuples is in turn determined by their relative frequency in the corpustarget syntactic tuples are identified in the corpus similarly to source language tuples ie by a target language parser and a companion routine as described in section 21the right column of table 1 shows the counts obtained for the syntactic tuples of our example in the corpora we usedthe table reveals that the tuples containing the correct target word are indeed more frequenthowever we still need a decision algorithm to analyze the statistical significance of the data and choose the appropriate word accordinglyas seen in the previous section the linguistic model maps each source language syntactic tuple to several alternative target tuples in which each alternative corresponds to a different selection of target wordswe wish to select the most plausible target language word for each ambiguous source language word basing our decision on the counts obtained from the target corpus as illustrated in table 1to that end we should define a selection algorithm whose outcome depends on all the syntactic tuples in the sentenceif the data obtained from the corpus do not substantially support any one of the alternatives the algorithm should notify the translation system that it cannot reach a statistically meaningful decisionour algorithm is based on a statistical modelhowever we wish to point out that we do not see the statistical considerations as expressed in the model as fully reflecting the linguistic considerations that determine the correct translationthe model reflects only part of the relevant data and in addition makes statistical assumptions that are only partially satisfiedtherefore a statistically based model need not make the correct linguistic choicesthe performance of the model can only be empirically evaluated the statistical considerations serve only as heuristicsthe role of the statistical considerations is therefore to guide us in constructing heuristics that make use of the linguistic data of the sample our experience shows that the statistical methods are indeed very helpful in establishing and comparing useful decision criteria that reflect various linguistic considerationsfirst we discuss decisions based on a single syntactic tuple denote the source language syntactic tuple t and let there be k alternative target tuples for t denoted by th tklet the counts obtained for the target tuples be n1 nkfor notational convenience we number the tuples by decreasing frequency ie n1 n2 since our goal is to choose for t one of the target tuples t we can consider t a discrete random variable with multinomial distribution whose possible values are t1 tklet p be the probability of obtaining ti ie the probability that ti is the correct translation for t we estimate the probabilities pi by the counts n in the obvious way using the maximum likelihood estimator the estimator p the precision of the estimator depends of course on the size of the counts in the computationwe will incorporate this consideration into the decision algorithm by using confidence intervalswe now have to establish the criterion for choosing the preferred target language syntactic tuplethe most reasonable assumption is to choose the tuple with the highest estimated probability that is tithe tuple with the largest observed frequency according 
to the model, the probability that t1 is the right choice is estimated as p̂1. This criterion should be subject to the condition that the difference between the alternative probabilities is significant. For example, if p̂1 = 0.51 and p̂2 = 0.49, the expected success rate in choosing t1 is approximately 0.5. To prevent the system from making a decision in such cases, we need to impose some conditions on the probabilities. One possible such condition is that p̂1 exceeds a prespecified threshold; according to the model, this requirement ensures that the success probability of every decision exceeds the threshold. Even though this method satisfies the probabilistic model, it is vulnerable to noise in the data, which often causes some relatively small counts to be larger than their true value in the sample. The noise is introduced in part by inaccuracies in the model and in part by errors during the automatic collection of the statistical data. Consequently, the estimated value of p̂1 may be smaller than its true value because other counts in equation (1) are too large, thus preventing p̂1 from passing the threshold. To deal with this problem we have chosen another criterion for significance: the odds ratio. We choose the alternative t1 only if all the ratios p̂1/p̂i (for i = 2, ..., k) exceed a prespecified threshold. Note that p̂i/p̂j = ni/nj, and since n1 >= n2 >= ... >= nk, the ratio p̂1/p̂2 is less than or equal to all the other ratios; therefore it suffices to check the odds ratio only for p̂1/p̂2. This criterion is less sensitive to noise of the abovementioned type than p̂1, since it depends only on the two largest counts.

3.1.1 Underlying Assumptions. The use of a probabilistic model necessarily introduces several assumptions on the structure of the corresponding linguistic data. It is important to point out these assumptions in order to be aware of possible inconsistencies between the model and the linguistic phenomena for which it is used. The first assumption is introduced by the use of a multinomial model, which presupposes the following:

Assumption 1: the events ti are mutually disjoint.

This assumption is not entirely valid, since sometimes it is possible to translate a source language word to several target language words such that all the translations are valid. For example, consider the Hebrew sentence whose English translation is "The resignation of Thatcher is not related / connected to the negotiations with Damascus." In this sentence the ambiguous word qshura can equally well be translated to either related or connected. In terms of the probabilistic model, the two corresponding events, i.e., the two alternative English tuples t1 and t2 that contain these words, are both correct; thus the events t1 and t2 both occur. However, we have to make this assumption, since the counts ni from which we estimate the probabilities of the ti values count actual occurrences of single syntactic tuples. In other words, we count the number of times that each of t1 and t2 actually occurs, not the number of times in which each of them could occur. Two additional assumptions are introduced by using counts of the occurrences of syntactic tuples of the target language in order to estimate the translation probabilities of source language tuples:

Assumption 2: an occurrence of the source language syntactic tuple t can indeed be translated to one of t1, ..., tk.

Assumption 3: every occurrence of the target tuple ti can be the translation of only the source tuple t.

Assumption 2 is an assumption on the completeness of the linguistic model. It is rather reasonable, and depends on the completeness of our bilingual lexicon.
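The single-tuple criterion above is easy to make concrete. The following sketch is illustrative code, not the authors' implementation: the threshold value and the example counts are assumptions chosen for the demonstration (the word pair echoes the peace treaty / peace contract contrast discussed in Section 6, but the numbers are made up).

```python
def mle_estimates(counts):
    """Equation (1): p_i = n_i / sum_j n_j for the alternative target
    tuples of a single source tuple.  `counts` maps each alternative
    target tuple to its frequency in the target language corpus."""
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()} if total else {}

def select_by_odds_ratio(counts, theta=2.0):
    """Select the most frequent alternative t1 only if the odds ratio
    p1/p2 = n1/n2 exceeds the threshold theta; otherwise refrain from
    a decision.  theta = 2.0 is an illustrative value, not the one
    used in the experiments reported in the paper."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked or ranked[0][1] == 0:
        return None                          # no evidence in the corpus
    if len(ranked) == 1 or ranked[1][1] == 0:
        return ranked[0][0]                  # unopposed alternative
    (t1, n1), (_, n2) = ranked[0], ranked[1]
    return t1 if n1 / n2 >= theta else None

# Hypothetical counts for the alternatives of one source tuple:
counts = {("peace", "treaty"): 12, ("peace", "contract"): 3}
print(mle_estimates(counts))         # {('peace', 'treaty'): 0.8, ('peace', 'contract'): 0.2}
print(select_by_odds_ratio(counts))  # ('peace', 'treaty')
```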
If the lexicon gives all possible translations of each ambiguous word, then this assumption will hold, since for each syntactic tuple t we will produce all possible translations. (The problem of constructing a bilingual lexicon that is as complete as possible is beyond the scope of this paper; a promising approach may be to use aligned bilingual corpora, especially for augmenting existing lexicons with domain-specific terminology. In any case, it seems that any translation system is limited by the completeness of its bilingual lexicon, which makes our assumption a reasonable one.) Assumption 3, which may be viewed as a soundness assumption, does not always hold, since a target language word may be the translation of several source language words. Consider, for example, a Hebrew tuple t that contains the word lul; lul is ambiguous, meaning either a playpen or a chicken pen. Accordingly, t can be translated to either t1, which contains playpen, or t2, which contains pen. In the context of hold, the first translation is more likely, and we can therefore expect our model to prefer t1. However, this might not be the case, because assumption 3 is contradicted: pen can also be the translation of the Hebrew word et, and thus t2 can also be the translation of another Hebrew tuple. This means that when translating t we are counting occurrences of t2 that correspond to both tuples, "misleading" the selection criterion. Section 6.3 illustrates another example in which the assumption is not valid, causing the algorithm to fail to select the correct translation. We must make this assumption since we use only a target language corpus, which is not related to any source language information; therefore, when seeing an occurrence of the target language word w, we do not know which source language word is appropriate in the current context. Consequently, we count its occurrence as a translation of all the source language words for which w is a possible translation. This implies that sometimes we use inaccurate data, which introduce noise into the statistical model. As we shall see, even though the assumption does not always hold, in most cases this noise does not interfere with the decision algorithm.

Another problem we should address is the statistical significance of the data: what confidence do we have that the data indeed reflect the phenomenon? If the decision is based on small counts, then the difference in the counts might be due to chance. For example, we should have more confidence in the odds ratio p̂1/p̂2 = 3 when n1 = 30 and n2 = 10 than when n1 = 3 and n2 = 1. Consequently, we shall use a dynamic threshold for p̂1/p̂2, which is large when the counts are small and decreases as the counts increase. A common method for determining the statistical significance of estimates is the use of confidence intervals. Rather than finding a confidence interval for p̂1/p̂2, we will bound the log odds ratio ln(p1/p2): since the variance of the log odds ratio is independent of the mean, it converges to the normal distribution faster than the odds ratio itself. We use a one-tailed interval, as we want only to decide whether ln(p1/p2) is greater than a specific threshold. Using this method, for each desired error probability α we compute the bound

Bα(n1, n2) = ln(n1/n2) − Z(1−α) · sqrt(1/n1 + 1/n2),

where Z(1−α) is the (1 − α) quantile of the standard normal distribution and the variance of ln(p̂1/p̂2) is approximated by 1/n1 + 1/n2 (see the Appendix). Only if

Bα(n1, n2) ≥ ln θ      (4)

will we select the most frequent tuple t1 as the appropriate one. In terms of statistical decision theory, our null hypothesis is that ln(p1/p2) does not exceed ln θ, and we select t1 only when the one-tailed confidence bound rejects it. For the alternative translations of tuple (c) in Table 1 we got n1 = 29 and n2 = 5; for these values Bα = 1.137. In this case equation (4) is satisfied, and the algorithm selects the word progress as the translation of the Hebrew word hitqaddmut.
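The bound and the sentence-level selection loop that is walked through in the following paragraphs can be sketched as follows. This is illustrative code, not the authors' implementation: the representation of the alternatives, the handling of zero counts, and the threshold ln θ = 0 are assumptions. The quantile 1.282 (one-tailed, α = 0.1) is used because it reproduces the values quoted in the text, e.g. Bα(29, 5) ≈ 1.137 and Bα(10, 5) ≈ −0.009.

```python
import math

Z_ONE_TAILED = {0.05: 1.645, 0.10: 1.282}    # standard normal quantiles

def b_alpha(n1, n2, alpha=0.10):
    """Lower one-tailed confidence bound on ln(p1/p2), with the variance of
    ln(p1_hat/p2_hat) approximated by 1/n1 + 1/n2 (see the Appendix)."""
    return math.log(n1 / n2) - Z_ONE_TAILED[alpha] * math.sqrt(1 / n1 + 1 / n2)

def best_alternative(alternatives, alpha=0.10, log_theta=0.0):
    """alternatives: {candidate target tuple: corpus count}.  A candidate is
    represented here as a tuple of (source_word, target_word) choices.
    Returns (selection or None, bound)."""
    ranked = sorted(alternatives.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked or ranked[0][1] == 0:
        return None, float("-inf")
    t1, n1 = ranked[0]
    n2 = ranked[1][1] if len(ranked) > 1 else 0
    if n2 == 0:
        return t1, float("inf")              # unopposed alternative (arbitrary handling)
    b = b_alpha(n1, n2, alpha)
    return (t1 if b >= log_theta else None), b

def disambiguate_sentence(source_tuples, alpha=0.10, log_theta=0.0):
    """source_tuples: {source tuple id: {candidate target tuple: count}}.
    Constraint propagation: repeatedly select the candidate with the highest
    bound above the threshold, fix its word choices, discard alternatives
    that contradict them, and drop tuples that are no longer ambiguous."""
    chosen = {}                                           # source word -> target word
    remaining = {k: dict(v) for k, v in source_tuples.items()}
    while remaining:
        scored = {k: best_alternative(v, alpha, log_theta)
                  for k, v in remaining.items()}
        decidable = {k: tb for k, tb in scored.items() if tb[0] is not None}
        if not decidable:
            break                                         # no statistically safe choice left
        k, (t, _) = max(decidable.items(), key=lambda kv: kv[1][1])
        chosen.update(dict(t))                            # fix the selected word choices
        del remaining[k]
        for k2 in list(remaining):
            remaining[k2] = {t2: n for t2, n in remaining[k2].items()
                             if all(chosen.get(s, w) == w for s, w in t2)}
            if len(remaining[k2]) <= 1:                   # no decision left to make here
                del remaining[k2]
    return chosen

print(round(b_alpha(29, 5), 3), round(b_alpha(10, 5), 3))   # 1.137 -0.009
```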
In another case we had to translate the Hebrew word roh, which can be translated to either top or head, in a sentence whose translation is "Sihanuk stood at the top / head of a coalition of underground groups." The two alternative syntactic tuples were the ones pairing stood with top and with head; for n1 = 10 and n2 = 5 we get Bα = −0.009, and since the bound does not exceed the threshold, no selection is made in this case.

Turning to the full example sentence of Section 2, the bound was computed for the alternatives of each of its syntactic tuples. The maximal value was obtained for a tuple containing the word increase, and therefore the word increase was chosen as the translation of higdil. Since this word appears also in another tuple of the sentence, the target tuples that include alternative translations of higdil were deleted; thus the alternatives containing enlarge and magnify were deleted. This leaves us with only one alternative as a possible translation of this Hebrew tuple, which is therefore removed from the input list. We now recompute the values of Bα for the remaining tuples. The maximal value is obtained for the tuple containing the word progress, where Bα = 1.137, which exceeds the threshold; we therefore choose the word progress as a translation for hitqaddmut. Since the word hitqaddmut also appears in the tuple relating it to siha, we delete the six target tuples that are inconsistent with the selection of progress; there now remain only three alternative target tuples for that tuple. We now recompute the values of Bα; the maximum value is Bα = 0.836, which exceeds the threshold, and thus talk is selected as the translation of siha. Now all the ambiguities have been resolved and the procedure stops. In the above example all the ambiguities were resolved, since in each stage the value of Bα exceeded the threshold. In some cases not all ambiguities are resolved, though the number of ambiguities may decrease. It should be noted that other methods may be proposed for combining the statistics of several syntactic relations. For example, it may make sense to multiply estimates of conditional probabilities of tuples in different relations, in a way that is analogous to n-gram language modeling; however, such an approach will make it harder to take into account the statistical significance of the estimate. In our set of examples the constraint propagation method proved to be successful and did not seem to introduce any errors. Further experimentation on much larger data sets is needed to determine whether one of the two methods is substantially superior to the other.

To evaluate the proposed disambiguation method, we implemented and tested the method on a random set of examples. The examples consisted of a set of Hebrew paragraphs and a set of German paragraphs; in both cases the target language was English. The Hebrew examples consisted of ten paragraphs picked at random from foreign news sections of the Israeli press; the paragraphs were selected from several news items and articles that appeared in several daily newspapers. The target language corpus consisted of American newspaper articles and the Hansard corpus of the proceedings of the Canadian Parliament. The domain of foreign news articles was chosen to correspond to some of the topics that appear in the English corpus. The German examples were chosen at random from the German press, without restricting the topic. Since we did not have a translation system from Hebrew or German to English, we simulated the steps such a system would perform. Hence the results we report measure the performance of just the target word selection module, and not the performance of a complete translation system; the latter can be expected to be somewhat lower for a real system, depending on the performance of its other components. Note, however, that since the disambiguation module is highly immune to noise, it might be even more useful in a real system: in such a system some of the alternatives would be totally erroneous, and since the corresponding syntactic tuples would typically not be found in the corpora, they would be eliminated by our module. The experiment is described in detail in the following subsections. It provides an example for a thorough
evaluation that is carried out without having a complete system availablewe specifically describe the processing of the hebrew data which was performed by a professional translator supervised by the authorsthe german examples were processed very similarlyto locate ambiguous words we simulated a bilingual lexicon and syntactic filters of a translation systemfor every source language word the translator searched all possible translations using a hebrewenglish dictionary the list of translations proposed by the dictionary was modified according to the following guidelines to reflect better the lexicon of a practical translation system in addition each of the remaining target alternatives for each source word was evaluated as to whether it is a suitable translation in the current contextthis evaluation was later used to judge the selections of the algorithmif all the alternatives were considered suitable then the source word was eliminated from the test set since any decision for it would have been considered successfulwe ended up with 103 hebrew and 54 german ambiguous wordsfor each hebrew word we had an average of 327 alternative translations and an average of 144 correct translationsthe average number of translations of a german word was 326 and there were 133 correct translationssince we did not have a hebrew parser we have simulated the two steps of determining the source syntactic tuples and mapping them to english by reversing the order of these steps in the following way first the sample sentences were translated manually as literally as possible into englishthen the resulting english sentences were analyzed using the esg parser and the postprocessing routine to identify the relevant syntactic tuplesthe tuples were further classified into quotgeneral classesquot as described in section 23the use of these general classes which was intended to facilitate the mapping of syntactic relations from one language to another also facilitated our simulation method and caused it to produce realistic outputat the end of the procedure we had for each sample sentence a data structure similar to table 1 the statistical data were acquired from the following corpora however the effective size of the corpora was only about 25 million words owing to two filtering criteriafirst we considered only sentences whose length did not exceed 25 words since longer sentences required excessive parse time and contained many parsing errorssecond even 35 of the shorter sentences failed to parse and had to be eliminatedthe syntactic tuples were located by the esg parser and the postprocessing routine mentioned earlierfor the purpose of evaluation we gathered only the data required for the given test exampleswithin a practical machine translation system the disambiguation module would require a database containing all the syntactic tuples of the corpus with their frequency countsin the current research project we did not have the computing resources necessary for constructing the complete database however such resources are not needed in order to evaluate the proposed methodsince we evaluated the method only on a relatively small number of random sentences we first constructed the set of all quotrelevantquot target tuples ie tuples that should be considered for the test sentencesthen we scanned the entire corpus and extracted only sentences that contain both words from at least one of the relevant tuplesonly the extracted sentences were parsed and their counts were recorded in our databaseeven though this database is 
much smaller than the full database, for the ambiguous words of the test sentences both databases provide the same information. Thus the success rate for the test sentences is the same for both methods, while requiring a considerably smaller amount of resources at the research phase. The problem with this method is that for every set of sample sentences the entire corpus has to be scanned; thus a practical system would have to preprocess the corpus to construct a database of the entire corpus, and then, to resolve ambiguities, only this database need be consulted. After acquiring all the relevant data, the algorithm of Section 3.3 was executed for each of the test sentences. Two measurements, applicability and precision, are used to evaluate the performance of the algorithm. The applicability denotes the proportion of cases for which the model performed a selection, i.e., those cases for which the bound Bα passed the threshold. The precision denotes the proportion of cases for which the model performed a correct selection, out of all the applicable cases. We compare the precision of our method, which we term TWS, with that of the word frequencies procedure, which always selects the most frequent target word; in other words, the word frequencies method prefers the alternative that has the highest a priori probability of appearing in the target language corpus. This naive "straw man" is less sophisticated than other methods suggested in the literature, but it is useful as a common benchmark, since it can be easily implemented. The success rate of the word frequencies procedure can serve as a measure for the degree of lexical ambiguity in a given set of examples, and thus different methods can be partly compared by their degree of success relative to this procedure. Out of the 103 ambiguous Hebrew words, for 33 the bound Bα did not pass the threshold, achieving an applicability of 68%. The remaining 70 examples were distributed according to Table 2. Thus the precision of the statistical model was 91%, whereas relying just on word frequencies yields 63%, providing an improvement of 28%. The table demonstrates that our algorithm corrects 22 erroneous decisions of the word frequencies method, but makes only 2 errors on cases that the word frequencies method translates correctly. This implies that, with high confidence, our method greatly improves on the word frequencies method. The number of Hebrew examples is large enough to permit a meaningful analysis of the statistical significance of the results. By computing confidence intervals for the distribution of proportions, we claim that with 95% confidence our method succeeds in at least 86% of the applicable examples. This means that though the figure of 91% might be due to a lucky selection of the random examples, there is only a 5% chance that the real figure is less than 86%. The confidence interval was computed as 64/70 − 1.65 · sqrt((64/70)(6/70)/70) ≈ 0.86, where α = 0.05 and the variance of a proportion p̂ is estimated by p̂(1 − p̂)/n. With the same confidence, our method improves the word frequencies method by at least 18%: let p̂1 be the proportion of cases for which our method succeeds and the word frequencies method fails, and p̂2 be the proportion of cases for which the word frequencies method succeeds and ours fails; the confidence interval is for the difference of proportions in a multinomial distribution and is computed as (p̂1 − p̂2) − 1.65 · sqrt((p̂1 + p̂2 − (p̂1 − p̂2)²)/n) = 20/70 − 1.65 · sqrt((24/70 − (20/70)²)/70) ≈ 0.18. Out of the 54 ambiguous German words, for 27 the bound Bα did not pass the threshold. The remaining 27 examples were distributed according to Table 3. Thus the precision of the statistical model was 78%, whereas relying just on word
frequencies yields 56 here our method corrected 6 errors of the word frequencies method without causing any new errorswe attribute the lower success rate for the german examples to the fact that they were not restricted to topics that are well represented in the corpusthis poor correspondence between the training and testing texts is reflected also by the low precision of the word frequencies methodthis means that the a priori probability of the target words as estimated from the training corpora provides a very poor prediction of the correct selection in the test examplesrelative to the a priori probability the precision of our method is still 22 higherrecently dagan marcus and markovitch have implemented a variant of the disambiguation method of the current paperthis variant was developed for evaluating a method that estimates the probability of word combinations which do not occur in the training corpus in this section we quote their results providing additional evidence for the effectiveness of the tws methodthe major difference between the tws method as presented in this paper and the variant described by dagan marcus and markovitch which we term tws is that the latter does not use any parsing for collecting the statistics from the corpusinstead the counts of syntactic tuples are approximated by counting cooccurrences of the given words of the tuple within a short distance in a sentencethe approximation takes into account the relative order between the words of the tuple such that occurrences of a certain syntactic relation are approximated only by word cooccurrences that preserve the most frequent word order for that relation the tws method still assumes that the source sentence to be translated is being parsed in order to identify the words that are syntactically related to an ambiguous wordthis model is therefore relevant for translation systems that use a parser for the source language but may not have available a robust target language parserthe corpus used for evaluating the tws method consists of articles posted to the usenet news systemthe articles were collected from news groups that discuss computerrelated topicsthe length of the corpus is 8871125 words and the lexicon size is 95559the type of text in this corpus is quite noisy including short and incomplete sentences as well as much irrelevant information such as person and device namesthe test set used for the experiment consists of 78 hebrew sentences that were taken out of a book about computersthese sentences were processed as described in section 4 obtaining a set of 269 ambiguous hebrew wordsthe average number of alternative translations per ambiguous word in this set is 58 and there are 135 correct translationsout of the 269 ambiguous hebrew words for 96 the bound b did not pass the threshold achieving an applicability of 643the remaining 173 examples were distributed according to table 4for the words that are covered by the tws method the word frequencies method has a precision of 711 whereas the tws method has a precision of 855as can be seen in the table the tws method is correct in almost all the cases it disagrees with the word frequencies method the applicability and precision figures in this experiment are somewhat lower than those achieved for the hebrew set in our original evaluation of the tws method we attribute this to the fact that the original results were achieved using a parsed corpus which was about 25 times larger and of much higher quality than the one used in the second experimentyet the new 
results give additional support for the usefulness of the tws method even for noisy data provided by a low quality corpus without any parsing or taggingquotin this section we give a detailed analysis of the selections performed by the algorithm and in particular analyze the cases when it failedthe analysis of these modes suggests possible improvements of the model and indicates its limitationsas described earlier the algorithm failure includes either the cases for which the method was not applicable or the cases for which it made an incorrect selectionthe following paragraphs list various reasons for both typesat the end of the section we discuss the possibility of adapting our approach to monolingual applicationsin the cases that were treated correctly by our method such as the examples given in the previous sections the statistics succeeded in capturing two major types of disambiguating datain preferring igntreaty upon ealtreaty the statistics reflect the relevant semantic constraintin preferring peacetreaty upon peacecontract the statistics reflect the lexical usage of treaty in english which differs from the usage of contract621 insufficient datathis was the reason for nearly all the cases of inapplicabilityin one of our examples for instance none of the alternative relations an investigator of corruption or researcher of corruption 11 it should be mentioned that the work of dagan marcus and markovitch includes further results evaluating an enhancement of the tws method using their similaritybased estimation methodthis enhancement is beyond the scope of the current paper and is referred to in the next section was observed in the parsed corpusin this case it is possible to perform the correct selection if we used only statistics about the cooccurrence of corruption with either investigator or researcher in the same local context without requiring any syntactic relationstatistics on cooccurrence of words in a local context were used recently for monolingual word sense disambiguation it is possible to apply these methods using statistics of the target language and thus incorporate them within the framework proposed here for target word selectionfinding an optimal way of combining the different methods is a subject for further researchour intuition though as well as some of our initial data suggests that statistics on word cooccurrence in the local context can substantially increase the applicability of the selection methodanother way to deal with the lack of statistical data for the specific words in question is to use statistics about similar wordsthis is the basis for sadler analogical semantics which according to his report has not proved effectivehis results may be improved if more sophisticated methods and larger corpora are used to establish similarity between words in particular an enhancement of our disambiguation method using similaritybased estimation was evaluated recentlyin this evaluation the applicability of the disambiguation method was increased by 15 with only a slight decrease in the precisionthe increased applicability was achieved by disambiguating additional cases in which statistical data were not available for any of the alternative tuples whereas data were available for other tuples containing similar words622 conflicting datain very few cases two alternatives were supported equally by the statistical data thus preventing a selectionin such cases both alternatives are valid at the independent level of the syntactic relation but may be inappropriate for the specific 
contextfor instance the two alternatives of to take a job or to take a position appeared in one of the examples but since the general context was about the position of a prime minister only the latter was appropriateto resolve such ambiguities it may be useful to consider also cooccurrences of the ambiguous word with other words in the broader context for instance the word minister seems to cooccur in the same context more frequently with position than with jobin another example both alternatives were appropriate also for the specific contextthis happened with the german verb werfen which may be translated as throw cast or corein our example werfen appeared in the context of to throwcast light and these two correct alternatives had equal frequencies in the corpus in such situations any selection between the alternatives will be appropriate and therefore any algorithm that handles conflicting data would work properlyhowever it is difficult to decide automatically when both alternatives are acceptable and when only one of them is631 using an inappropriate relationone of the examples contained the hebrew word matzavthis word has several translations two of which are tate and positionthe phrase that contained this word was to put an end to the statelposition of warthe ambiguous word is involved in two syntactic relations being a complement of put and also modified by warthe corresponding frequencies were the bound of the odds ratio for the first relation was higher than for the second and therefore this relation determined the translation as positionhowever the correct translation should be tate as determined by the second relationthese data suggest that while ordering the relations it may be necessary to give different weights to the different types of syntactic relationsfor instance it seems reasonable that the object of a noun should receive greater weight in selecting the noun sense than the verb for which this noun serves as a complementfurther examination of the example suggests another refinement of our method it turns out that most of the 320 instances of the tuple include the preposition in as part of the common phrase put in a positiontherefore these instances should not be considered for the current example which includes the preposition tohowever the distinction between different prepositions was lost by our program as a result of using equivalence classes of syntactic tuples this suggests that we should not use an equivalence class when there is enough statistical data for specific tuples12 632 confusing sensesin another example the hebrew adjective qatann modified the noun sikkuy which means prospect or chancethe word qatann has several translations two of which are mall and youngin this hebrew word combination the correct sense of qatann is necessarily mallhowever the relation that was observed in the corpus was young prospect relating to the human sense of prospect that appeared in sports articles this borrowed sense of prospect is necessarily inappropriate since in hebrew it is represented by the equivalent of hope and not by sikkuythe source of this problem is assumption 3 a target tuple t might be a translation of several source tuples and while gathering statistics for t we cannot distinguish between the different sources since we use only a target language corpusa possible solution is to use an aligned bilingual corpus as suggested by sadler brown et al and gale et alin such a corpus the occurrences of the relation young prospect will be aligned to the corresponding 
occurrences of the hebrew word tiqwa and will not be used when the hebrew word sikkuy is involvedyet it should be brought to mind that an aligned corpus is the result of manual translation which can be viewed as including a manual tagging of the ambiguous words with their equivalent senses in the target languagethis resource is much more expensive and less available than an untagged monolingual corpus and it seems to be necessary only for relatively rare situationstherefore considering the tradeoff between applicability and precision it seems better to rely on a significantly larger monolingual corpus than on a smaller bilingual corpusan optimal method may exploit both types of corpora in which the somewhat more accurate but more expensive data of a bilingual corpus are augmented by the data of a much larger monolingual corpus13 quantities of shallow informationthus they are doomed to fail when disambiguation can rely only on deep understanding of the text and no other surface cues are availablethis happened in one of the hebrew examples in which the two alternatives were either emigration law or immigration law while the context indicated that the first alternative is correct the statistics preferred the second alternativeto translate the above phrase the program would need deep knowledge to an extent that seems to far exceed the capabilities of current systemsfortunately our results suggest that such cases are quite rarethe results of our experiments in the context of machine translation suggest the utility of a similar mechanism even for in word sense disambiguation within a single languageto select the right sense of a word in a broad coverage application it is useful to identify lexical relations between word senseshowever within corpora of a single language it is possible to identify automatically only relations at the word level which are of course not useful for selecting word senses in that languagethis is where other languages can supply the solution exploiting the fact that the mapping between words and word senses varies significantly between different languagesfor instance the english words ign and eal correspond to two distinct senses of the hebrew word lahtomthese senses should be distinguished by most applications of hebrew understanding programsto make this distinction it is possible to perform the same process that is performed for target word selection by producing all the english alternatives for the lexical relations involving lahtomthen the hebrew sense that corresponds to the most plausible english lexical relations is preferredthis process requires a bilingual lexicon that maps each hebrew sense separately into its possible translations similar to a hebrewhebrewenglish lexicon in some cases different senses of a hebrew word map to the same word also in englishin these cases the lexical relations of each sense cannot be identified in an english corpus and a third language is required to distinguish among these sensesalternatively it is possible to combine our method with other disambiguation methods that have been developed in a monolingual context as a longterm vision one can imagine a multilingual corporabased environment which exploits the differences between languages to facilitate the acquisition of knowledge about word sensesuntil recently word sense disambiguation seemed to be a problem for which there is no satisfactory solution for broad coverage applicationsrecently several statistical methods have been developed for solving this problem suggesting the 
possibility of robust yet feasible disambiguationin this section we identify and analyze basic aspects of a statistical sense disambiguation method and compare several proposed corpus of moderate size can be valuable when constructing a bilingual lexicon thus justifying the effort of maintaining such a corpus methods along these aspectsthis analysis may be useful for future research on sense disambiguation as well as for the development of sense disambiguation modules in practical systemsthe basic aspects that will be reviewed are the first three aspects deal with the components of a disambiguation method as would be implemented for a practical applicationthe fourth is a methodological issue which is relevant for developing testing and comparing disambiguation methodswe identify three major types of information that were used in statistical methods for sense disambiguation the first type of information is the one used in the current paper in which words that are syntactically related to an ambiguous word are used to indicate its most probable sensestatistical data on the cooccurrence of syntactically related words with each of the alternative senses reflect semantic and lexical preferences and constraints of these sensesin addition these statistics may provide information about the topics of discourse that are typical for each senseideally the syntactic relations between words should be identified using a syntactic parser in both the training and the disambiguation phasessince robust syntactic parsers are not widely available and those that exist are not always accurate it is possible to use various approximations to identify relevant syntactic relations between wordshearst uses a stochastic part of speech tagger and a simple scheme for partial parsing of short phrasesthe structures achieved by this analysis are used to identify approximated syntactic relations between wordsbrown et al make even weaker approximations using only a stochastic part of speech tagger and defining relations such as quotthe first verb to the rightquot or quotthe first noun to the leftquot finally dagan et al assume full parsing at the disambiguation phase but no preprocessing at the training phase in which a higher level of noise can be accommodateda second type of information is provided by words that occur in the global context of the ambiguous word gale et al and yarowsky use words that appear within 50 words in each 14 the reader is referred to some of these recent papers for thorough surveys of work on sense disambiguation direction of the ambiguous wordstatistical data are stored about the occurrence of words in the context of each sense and are matched against the context in the disambiguated sentencecooccurrence in the global context provides information about typical topics associated with each sense in which a topic is represented by words that commonly occur in itschiitze uses a variant of this type of information in which context vectors are maintained for character fourgrams instead of wordsin addition the context of an occurrence of an ambiguous word is represented by cooccurrence information of a second order as a set of context vectors compared with cooccurrence within syntactic relations information about the global context is less sensitive to fine semantic and lexical distinctions and is less useful when different senses of a word appear in similar contextson the other hand the global context contains more words and is therefore more likely to provide enough disambiguating information in cases 
in which this distinction can be based on the topic of discoursefrom a general perspective these two types of information represent a common tradeoff in statistical language processing the first type is related to a limited amount of deeper and more precise linguistic information whereas the second type provides a large amount of shallow information which can be applied in a more robust mannerthe two sources of information seem to complement each other and may both be combined in future disambiguation methodshearst incorporates a third type of statistical information to distinguish between different senses of nouns for each occurrence of a sense several syntactic and morphological characteristics are recorded such as whether the noun modifies or is modified by another word whether it is capitalized and whether it is related to certain prepositional phrasesthen in the disambiguation phase a best match is sought between the information recorded for each sense and the syntactic context of the current occurrence of the nounthis type of information resembles the information that is defined for lexical items in lexicalist approaches for grammars such as possible subcategorization frames of a wordthe major difference is that hearst captures probabilistic preferences of senses for such syntactic constructsgrammatical formalisms on the other hand usually specify only which constructs are possible and at most distinguish between optional and obligatory onestherefore the information recorded in such grammars cannot distinguish between different senses of a word that potentially have the same subcategorization frames though in practice each sense might have different probabilistic preferences for different syntactic constructsit is clear that each of the different types of information provides some information that is not captured by the othershowever as the acquisition and manipulation of each type of information requires different tools and resources it is important to assess the relative contribution and the quotcost effectivenessquot of each of themsuch comparative evaluations are not available yet not even for systems that incorporate several types of data further research is therefore needed to cornpare the relative importance of different information types and to find optimal ways of combining themwhen training a statistical model for sense disambiguation it is necessary to associate the acquired statistics with word sensesthis seems to require manual tagging of the training corpus with the appropriate sense for each occurrence of an ambiguous worda similar approach is being used for stochastic part of speech taggers and probabilistic parsers relying on the availability of large manually tagged corpora for traininghowever this approach is less feasible for sense disambiguation for two reasonsfirst the size of corpora required to acquire sufficient statistics on lexical cooccurrence is usually much larger than that used for acquiring statistics on syntactic constructs or sequences of parts of speechsecond lexical cooccurrence patterns as well as the definition of senses may vary a great deal across different domains of discourseconsequently it is usually not sufficient to acquire the statistics from one widely available quotbalancedquot corpus as is common for syntactic applicationsa sense disambiguation model should be trained on the same type of texts for which it will be applied thus increasing the cost of manual taggingthe need to disambiguate a training corpus before acquiring a statistical 
model for disambiguation is often termed as the circularity problemin the following paragraphs we discuss different methods that were proposed to overcome the circularity problem without exhaustive manual tagging of the training corpusin our opinion this is the most critical issue in developing feasible sense disambiguation methods721 bootstrappingbootstrapping which is a general scheme for reducing the amount of manual tagging was proposed also for sense disambiguation the idea is to tag manually an initial set of occurrences for each sense in the lexicon acquiring initial training statistics from these instancesthen using these statistics the system tries to disambiguate additional occurrences of ambiguous wordsif such an occurrence can be disambiguated automatically with high confidence the system acquires additional statistics from this occurrence as if it were tagged by handhopefully the system will incrementally acquire all the relevant statistics demanding just a small amount of manual taggingthe results of hearst show that at least 10 occurrences of each sense have to be tagged by hand and in most cases 2030 occurrences are required to get high precisionthese results which were achieved for a small set of preselected ambiguous words suggest that the cost of the bootstrapping approach is still very high722 clustering occurrences of an ambiguous wordschiitze proposes a method that can be viewed as an efficient way of manual tagginginstead of presenting all occurrences of an ambiguous word to a human these occurrences are first clustered using automatic clustering algorithmsthen a human is asked to assign one of the senses of the word to each cluster by observing several members of the clustereach sense is thus represented by one or more clustersat the disambiguation phase a new occurrence of an ambiguous word is matched against the contexts that were recorded for these clusters selecting the sense of that cluster which provides the best matchit is interesting to note that the number of occurrences that had to be observed by a human in the experiments of schutze is of the same order as in the bootstrapping approach 1020 members of a cluster were observed with an average of 28 clusters per senseas both approaches were tested only on a small number of preselected words further evaluation is necessary to predict the actual cost of their application to broad domainsthe methods described below on the other hand rely on resources that were already available on a large scale and it is therefore possible to estimate the expected cost of their broad application723 word classificationyarowsky proposes a method that completely avoids manual tagging of the training corpusthis is achieved by estimating parameters for classes of words rather than for individual word sensesin his work yarowsky considered the semantic categories defined in roget thesaurus as classeshe then mapped each of the senses of an ambiguous word to one or several of the categories under which this word is listed in the thesaurusthe task of sense disambiguation thus becomes the task of selecting the appropriate category for each occurrence of an ambiguous wordquot when estimating the parameters of a category any occurrence of a word that belongs to that category is counted as an occurrence of the categorythis means that each occurrence of an ambiguous word is counted as an occurrence of all the categories to which the word belongs and not just the category that corresponds to the specific occurrencea substantial amount of noise 
is introduced by this training method which is a consequence of the circularity problemto avoid the noise it would be necessary to tag each occurrence of an ambiguous word with the appropriate categoryas explained by yarowsky however this noise can usually be toleratedthe quotcorrectquot parameters of a certain class are acquired from all its occurrences whereas the quotincorrectquot parameters are distributed through occurrences of many different classes and usually do not produce statistically significant patternsto reduce the noise further yarowsky uses a system of weights that assigns lower weights to frequent words since such words may introduce more noisethe word class method thus overcomes the circularity problem by mapping word senses to classes of wordshowever because of this mapping the method cannot distinguish between senses that belong to the same class and it also introduces some level of noise724 a bilingual corpusbrown et al were concerned with sense disambiguation for machine translationhaving a large aligned bilingual corpus available they noticed that the target word which corresponds to an occurrence of an ambiguous source word can serve as a tag of the appropriate sensethis kind of tagging provides sense distinctions when different senses of a source word translate to different target wordsfor the purpose of translation these are exactly the cases for which sense distinction is requiredconceptually the use of a bilingual corpus does not eliminate manual tagging of the training corpussuch a corpus is a result of manual translation and it is the translator who provides tagging of senses as a side effect of the translation processpractically whenever a bilingual corpus is available it provides a useful source of a sense tagged corpusgale church and yarowsky have also exploited this resource for achieving large amounts of testing and training materials725 a bilingual lexicon and a monolingual corpusthe method of the current paper also exploits the fact that different senses of a word are usually mapped to different words in another languagehowever our work shows that the differences between languages enable us to avoid any form of manual tagging of the corpus this is achieved by a bilingual lexicon that maps a source language word to all its possible equivalents in the target languagethis approach has practical advantages for the purpose of machine translation in which a bilingual lexicon needs to be constructed in any case and very large bilingual corpora are not usually availablefrom a theoretical point of view the difference between the two methods can be made clear if we assume that the bilingual lexicon contains exactly all the different translations of a word which occur in a bilingual corpusfor a given set of senses that need to be disambiguated our method requires a bilingual corpus of size k in which each sense occurs at least once in order to establish its mapping to a target wordin addition a larger monolingual corpus of size n is required to provide enough training examples of typical contexts for each senseon the other hand using a bilingual corpus for training the disambiguation model would require a bilingual corpus of size n which is significantly larger than k the savings in resources is achieved since the mapping between the languages is done at the level of single wordsthe larger amount of information about word combinations on the other hand is acquired from an untagged monolingual corpus after the mapping has been performedour results show that the 
precision of the selection algorithm is high despite the additional noise which is introduced by mapping single words independently of their contextas mentioned in section 63 an optimal method may combine the two methodsin some sense the use of a bilingual lexicon resembles the use of a thesaurus in yarowsky approachboth rely on a manually established mapping of senses to other concepts and collect information about the target concepts from an untagged corpusin both cases ambiguous words in the corpus introduce some level of noise counting an occurrence of a word as an occurrence of all the classes to which it belongs or counting an occurrence of a target word as an occurrence of all the source words to which it may correspond also both methods can distinguish only between senses that are distinguished by the mappings they use either senses that belong to different classes or senses that correspond to different target wordsan interesting difference though relates to the feasibility of implementing the two methods for a new domain of texts the construction of a bilingual lexicon for a new domain is relatively straightforward and is often carried out for translation purposesthe construction of an appropriate classification for the words of a new domain is more complex and furthermore it is not clear whether it is possible in every domain to construct a classification that is sufficient for the purpose of sense disambiguationsense disambiguation methods require a decision model that evaluates the relevant statisticssense disambiguation thus resembles many other decision tasks and not surprisingly several common decision algorithms were employed in different worksthese include a bayesian classifier and a distance metric between vectors both inspired from methods in information retrieval the use of the flipflop algorithm for ordering possible informants about the preferred sense trying to maximize the mutual information between the informant and the ambiguous word and the use of confidence intervals to establish the degree of confidence in a certain preference combined with a constraint propagation algorithm at the present stage of research on sense disambiguation it is difficult to judge whether a certain decision algorithm is significantly superior to others21 yet these decision models can be characterized by several criteria which clarify the similarities and differences between themas will be explained below many of the differences are correlated with the different information sources employed by these models21 once the important information sources for sense selection have been identified it is possible that different decision algorithms would achieve comparable resultsthe differences between various disambiguation methods correlate with the difference in information sources in particular whether they use local or global contextwhen local context is used only few syntactically related informants may provide reliable information about the selectionit is therefore reasonable to base the selection on only one the most informative informant and it is also important to test the statistical significance of that informantthe problem of parameter explosion is less severe and the number of parameters is comparable to that of a bigram language model when using the global context on the other hand the number of potential parameters is significantly larger but each of them is usually less informativeit is therefore important to take into account as many parameters as possible in each ambiguous case 
but it is less important to test for detailed statistical significance or to worry about the mutual effects of sense selections for adjacent wordsin most of the abovementioned papers experimental results are reported for a small set of up to 12 preselected words usually with two or three senses per wordin the current paper we have evaluated our method using a random set of example sentences with no a priori selection of the wordsthis standard evaluation method which is commonly used for other natural language processing tasks provides a direct prediction for the expected success rate of the method when employed in a practical applicationto compare results on different test data it is useful to compare the precision of the disambiguation method with some a priori figure that reflects the degree of ambiguity in the textreporting the number of senses per example word corresponds to the expected success rate of random selectiona more informative figure is the success rate of a naive method that always selects the most frequent sense the success rate of this naive method is higher than that of random selection and thus provides a tighter lower bound for the desired precision of a proposed disambiguation methodan important practical issue in evaluation is how to get the test examples which should be tagged with the correct sensein most papers the tagging of the test data was done by hand which limits the size of the testing setpreparing one test set by hand may still be reasonable though time consuminghowever it is useful to have more than one set such that results will be reported on a new unseen set while another set is used for developing and tuning the systemone useful source of tagged examples is an aligned bilingual corpus which can be used for testing any sense disambiguation method including methods that do not use bilingual material for traininggale proposes to use quotpseudowordsquot as another practical source of testing examples pseudowords are constructed artificially as a union of several different words the disambiguation method is presented with texts in which all occurrences of wi w2 and w3 are considered as occurrences of x and should then select the original word for each occurrencethough testing with this method does not provide results for real ambiguities that occur in the text it can be very useful while developing and tuning the method the method presented in this paper takes advantage of two linguistic phenomena both proven to be very useful for sense disambiguation the different mapping between words and word senses among different languages and the importance of lexical cooccurrence within syntactic relationsthe first phenomenon provides the solution for the circularity problem in acquiring sense disambiguation datausing a bilingual lexicon and a monolingual corpus of the target language we can acquire statistics on word senses automatically without manual taggingas explained in section 7 this method has significant practical and theoretical advantages over the use of aligned bilingual corporawe pay for these advantages by introducing an additional level of noise in mapping individual words independently to the other languageour results show however that the precision of the selection algorithm is high despite this additional noisethis work also emphasizes the importance of lexical cooccurrence within syntactic relations for the resolution of lexical ambiguitycooccurrences found in a large corpus reflect a huge amount of semantic knowledge which was traditionally 
constructed by handmoreover frequency data for such cooccurrences reflect both linguistic and domainspecific preferences thus indicating not only what is possible but also what is probableit is important to notice that frequency information on lexical cooccurrence was found to be much more predictive than single word frequencyin the three experiments we reported there were 61 cases in which the two types of information contradicted each other favoring different target wordsin 56 of these cases it was the most frequent lexical cooccurrence and not the most frequent word that predicted the correct translationthis result may raise relevant hypotheses for psycholinguistic research which has indicated the relevance of word frequencies to human sense disambiguation we suggest that the high precision achieved in the experiments relies on two characteristics of the ambiguity phenomena namely the sparseness and redundancy of the disambiguating databy sparseness we mean that within the large space of alternative interpretations produced by ambiguous utterances only a small portion is commonly usedtherefore the chance that an inappropriate interpretation is observed in the corpus is lowredundancy relates to the fact that different informants tend to support rather than contradict one another and therefore the chance of picking a quotwrongquot informant is lowit is interesting to compare our method with some aspects of the statistical machine translation system of brown et al as mentioned in the introduction this system also incorporates target language statistics in the translation processto translate a french sentence f they choose the english sentence e that maximizes the term pr pr the first factor in this product which represents the target language model may thus affect any aspect of the translation including target word selectionit seems however that brown et al expect that target word selection would be determined mainly by translation probabilities which should be derived from a bilingual corpus this view is reflected also in their elaborate method for target word selection in which better estimates of translation probabilities are achieved as a result of word sense disambiguationour method on the other hand incorporates only target language probabilities and ignores any notion of translation probabilitiesit thus demonstrates a possible tradeoff between these two types of probabilities using more informative statistics of the target language may compensate for the lack of translation probabilitiesfor our system the more informative statistics are achieved by syntactic analysis of both the source and target languages instead of the simple trigram model used by brown et al in a broader sense this can be viewed as a tradeoff between the different components of a translation system having better analysis and generation models may reduce some burden from the transfer modelin our opinion the method proposed in this paper may have immediate practical value beyond its theoretical aspectsas we argue below we believe that the method is feasible for practical machine translation systems and can provide a costeffective improvement on target word selection methodsthe identification of syntactic relations in the source sentence is available in any machine translation system that uses some form of syntactic parsingtrivially a bilingual lexicon is availablea parser for the target language becomes common in many systems that offer bidirectional translation capabilities requiring parsers for several languages 
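When no parser is available for the target language corpus, the tuple counts can be approximated by ordered co-occurrence within a short window, as in the TWS' variant of Section 5 and in the practical discussion that follows. A minimal sketch of such an approximation is given below; the window size, the lemmatization assumption, and the order heuristic are illustrative simplifications, not the published implementation (which fixes the expected word order per relation type).

```python
from collections import Counter

def windowed_pair_counts(tokenized_sentences, window=3):
    """Approximate counts of word pairs by counting ordered co-occurrences
    within a short window instead of parsing the target corpus.  Words are
    assumed to be lemmatized already; window=3 is an illustrative choice."""
    counts = Counter()
    for sent in tokenized_sentences:
        for i, w1 in enumerate(sent):
            for w2 in sent[i + 1 : i + 1 + window]:
                counts[(w1, w2)] += 1        # surface order is preserved
    return counts

def approximate_tuple_count(counts, word_a, word_b):
    """Approximate the count of a syntactic tuple over two words by the
    co-occurrence count of the more frequent of the two orders.  This is a
    rough stand-in for restricting to the most frequent word order of the
    relation, as the TWS' variant does."""
    return max(counts[(word_a, word_b)], counts[(word_b, word_a)])

sentences = [["sign", "peace", "treaty"],
             ["sign", "a", "treaty", "with", "them"]]
counts = windowed_pair_counts(sentences)
print(approximate_tuple_count(counts, "sign", "treaty"))   # -> 2
```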
if a parser for the target language corpus is not available it is possible to approximate the statistics using word cooccurrence in a window as was demonstrated by a variant of our method in both cases the statistical model was shown to handle successfully the noise produced in automatic acquisition of the datasubstantial effort may be required for collecting a sufficiently large target language corpuswe have not studied the relation between the corpus size and the performance of the algorithm but it is our impression that a corpus of several hundred thousand words will prove useful for translation in a welldefined domainwith current availability of texts in electronic form a corpus of this size is feasible in many domainsthe effort of assembling this corpus should be compared with the effort of manually coding sense disambiguation informationfinally our method was evaluated by simulating realistic machine translation lexicons on randomly selected examples and yielded high performance in two different broad domains it is therefore expected that the results reported here will be reproduced in other domains and systemsto improve the performance of target word selection further our method may be combined with other sense disambiguation methodsas discussed in section 62 it is possible to increase the applicability of the selection method by considering word cooccurrence in a limited context andor by using similaritybased methods that reduce the problem of data sparsenessto a lesser extent the use of a bilingual corpus may further increase the precision of the selection a practical strategy may be to use a bilingual corpus for enriching the bilingual lexicon while relying mainly on cooccurrence statistics from a larger monolingual corpus for disambiguationin a broader context this paper promotes the combination of statistical and linguistic models in natural language processingit provides an example of how a problem can be first defined in detailed linguistic terms using an implemented linguistic tool then having a welldefined linguistic scenario we apply a suitable statistical model to highly informative linguistic structuresaccording to this view a complex task such as machine translation should be first decomposed on a linguistic basisthen appropriate statistical models can be developed for each subproblemwe believe that this approach provides a beneficial compromise between two extremes in natural language processing either using linguistic models that ignore quantitative information or using statistical models that are linguistically ignorantapproximating var ln to approximate var ln we first approximate ln by the first order derivatives we use the following equations using we get var ln 6 var ln p2 pi p2special thanks are due to ulrike schwa11 for her fruitful collaborationwe are grateful to mon rimon peter brown ayala cohen ulrike rackow herb leass and bill gale for their help and commentswe also thank the anonymous reviewers for their detailed comments which resulted in additional discussions and clarificationsthis research was partially supported by grant number 120741 of the israel council for research and development
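The appendix passage above (just before the acknowledgments) has been garbled by text extraction. What it appears to state is the standard first-order (delta-method) approximation of the variance of a log ratio; the following is a hedged reconstruction rather than a verbatim restoration, and the independence of the two estimates is an added simplifying assumption.

```latex
% First-order expansion of \ln p around its mean \bar p:
%   \ln p \approx \ln \bar p + (p - \bar p)/\bar p ,
% so that \mathrm{Var}(\ln p) \approx \mathrm{Var}(p)/\bar p^{\,2}.
% Applied to the log ratio, assuming p_1 and p_2 are estimated independently:
\[
  \mathrm{Var}\!\left(\ln\frac{p_1}{p_2}\right)
  \;\approx\; \frac{\mathrm{Var}(p_1)}{p_1^{2}} + \frac{\mathrm{Var}(p_2)}{p_2^{2}} .
\]
```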
J94-4003
word sense disambiguation using a second language monolingual corpusthis paper presents a new approach for resolving lexical ambiguities in one language using statistical data from a monolingual corpus of another languagethis approach exploits the differences between mappings of words to senses in different languagesthe paper concentrates on the problem of target word selection in machine translation for which the approach is directly applicablethe presented algorithm identifies syntactic relations between words using a source language parser and maps the alternative interpretations of these relations to the target language using a bilingual lexiconthe preferred senses are then selected according to statistics on lexical relations in the target languagethe selection is based on a statistical model and on a constraint propagation algorithm which simultaneously handles all ambiguities in the sentencethe method was evaluated using three sets of hebrew and german examples and was found to be very useful for disambiguationthe paper includes a detailed comparative analysis of statistical sense disambiguation methodswe propose an approach to wsd using monolingual corporaa bilingual lexicon and a parser for the source language
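The summary above mentions a constraint propagation step that handles all ambiguities in a sentence simultaneously. The sketch below shows one plausible greedy reading of that idea — it is not the paper's actual algorithm: relations are scored by hypothetical target-language co-occurrence counts, the most confident relation is resolved first, and its word choices then constrain the alternatives of the remaining relations.

```python
# Greedy constraint propagation over ambiguous syntactic relations.
# Each relation is (slot1, slot2); a slot is (source_word, [candidate translations]).
cooc = {("sign", "treaty"): 412, ("seal", "letter"): 95, ("sign", "letter"): 60}

def resolve(relations):
    chosen = {}                      # source word -> selected translation
    pending = list(relations)
    while pending:
        best = None
        for rel in pending:
            (w1, alts1), (w2, alts2) = rel
            # respect choices already fixed by earlier, higher-scoring relations
            alts1 = [chosen[w1]] if w1 in chosen else alts1
            alts2 = [chosen[w2]] if w2 in chosen else alts2
            for a in alts1:
                for b in alts2:
                    score = cooc.get((a, b), 0)
                    if best is None or score > best[0]:
                        best = (score, rel, a, b)
        _, rel, a, b = best
        chosen[rel[0][0]] = a        # fix both words of the winning relation
        chosen[rel[1][0]] = b
        pending.remove(rel)
    return chosen

relations = [
    (("SIGN/SEAL", ["sign", "seal"]), ("TREATY", ["treaty"])),
    (("SIGN/SEAL", ["sign", "seal"]), ("LETTER", ["letter"])),
]
print(resolve(relations))
# {'SIGN/SEAL': 'sign', 'TREATY': 'treaty', 'LETTER': 'letter'}
```

Note how the second relation is forced to use "sign" even though ("seal", "letter") has the higher raw count: the choice propagated from the more confident relation overrides it, which is the intended effect of handling the ambiguities jointly.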
an efficient probabilistic contextfree parsing algorithm that computes prefix probabilities we describe an extension of earley parser for stochastic contextfree grammars that computes the following quantities given a stochastic contextfree grammar and an input string (a) probabilities of successive prefixes being generated by the grammar (b) probabilities of substrings being generated by the nonterminals including the entire string being generated by the grammar (c) most likely parse of the string (d) posterior expected number of applications of each grammar production as required for reestimating rule probabilities probabilities (a) and (b) are computed incrementally in a single lefttoright pass over the input our algorithm compares favorably to standard bottomup parsing methods for scfgs in that it works efficiently on sparse grammars by making use of earley topdown control structure it can process any contextfree rule format without conversion to some normal form and combines computations for (a) through (d) in a single algorithm finally the algorithm has simple extensions for processing partially bracketed inputs and for finding partial parses and their likelihoods on ungrammatical inputs contextfree grammars are widely used as models of natural language syntax in their probabilistic version which defines a language as a probability distribution over strings they have been used in a variety of applications for the selection of parses for ambiguous inputs to guide the rule choice efficiently during parsing to compute island probabilities for nonlinear parsing in speech recognition probabilistic contextfree grammars play a central role in integrating lowlevel word models with higherlevel language models as well as in nonfinitestate acoustic and phonotactic modeling in some work contextfree grammars are combined with scoring functions that are not strictly probabilistic or they are used with contextsensitive andor semantic probabilities although clearly not a perfect model of natural language stochastic contextfree grammars are superior to nonprobabilistic cfgs with probability theory providing a sound theoretical basis for ranking and pruning of parses as well as for integration with models for nonsyntactic aspects of language all of the applications listed above involve one or more of the following standard tasks compiled by jelinek and lafferty the algorithm described in this article can compute solutions to all four of these problems in a
single framework with a number of additional advantages over previously presented isolated solutionsmost probabilistic parsers are based on a generalization of bottomup chart parsing such as the cyk algorithmpartial parses are assembled just as in nonprobabilistic parsing while substring probabilities can be computed in a straightforward waythus the cyk chart parser underlies the standard solutions to problems and as well as while the jelinek and lafferty solution to problem is not a direct extension of cyk parsing the authors nevertheless present their algorithm in terms of its similarities to the computation of inside probabilitiesin our algorithm computations for tasks and proceed incrementally as the parser scans its input from left to right in particular prefix probabilities are available as soon as the prefix has been seen and are updated incrementally as it is extendedtasks and require one more pass over the chart constructed from the inputincremental lefttoright computation of prefix probabilities is particularly important since that is a necessary condition for using scfgs as a replacement for finitestate language models in many applications such a speech decodingas pointed out by jelinek and lafferty knowing probabilities p for arbitrary prefixes xo xi enables probabilistic prediction of possible followwords xii as p ppthese conditional probabilities can then be used as word transition probabilities in a viterbistyle decoder or to incrementally compute the cost function for a stack decoder another application in which prefix probabilities play a central role is the extraction of ngram probabilities from scfgs here too efficient incremental computation saves time since the work for common prefix strings can be sharedthe key to most of the features of our algorithm is that it is based on the topdown parsing method for nonprobabilistic cfgs developed by earley earley algorithm is appealing because it runs with bestknown complexity on a number of special classes of grammarsin particular earley parsing is more efficient than the bottomup methods in cases where topdown prediction can rule out potential parses of substringsthe worstcase computational expense of the algorithm is as good as that of the other known specialized algorithms but can be substantially better on wellknown grammar classesearley parser also deals with any contextfree rule format in a seamless way without requiring conversions to chomsky normal form as is often assumedanother advantage is that our probabilistic earley parser has been extended to take advantage of partially bracketed input and to return partial parses on ungrammatical inputthe latter extension removes one of the common objections against topdown predictive parsing approaches the remainder of the article proceeds as followssection 3 briefly reviews the workings of an earley parser without regard to probabilitiessection 4 describes how the parser needs to be extended to compute sentence and prefix probabilitiessection 5 deals with further modifications for solving the viterbi and training tasks for processing partially bracketed inputs and for finding partial parsessection 6 discusses miscellaneous issues and relates our work to the literature on the subjectin section 7 we summarize and draw some conclusionsto get an overall idea of probabilistic earley parsing it should be sufficient to read sections 3 42 and 44section 45 deals with a crucial technicality and later sections mostly fill in details and add optional featureswe assume the reader is 
familiar with the basics of contextfree grammar theory such as given in aho and ullman some prior familiarity with probabilistic contextfree grammars will also be helpfuljelinek lafferty and mercer provide a tutorial introduction covering the standard algorithms for the four tasks mentioned in the introductionnotationthe input string is denoted by x ix is the length of xindividual input symbols are identified by indices starting at 0 x0 x1 x1_1the input alphabet is denoted by e substrings are identified by beginning and end positions x1the variables ijk are reserved for integers referring to positions in input stringslatin capital letters x y z denote nonterminal symbolslatin lowercase letters a b are used for terminal symbolsstrings of mixed nonterminal and terminal symbols are written using lowercase greek letters a it v the empty string is denoted by ean earley parser is essentially a generator that builds leftmost derivations of strings using a given set of contextfree productionsthe parsing functionality arises because the generator keeps track of all possible derivations that are consistent with the input string up to a certain pointas more and more of the input is revealed the set of possible derivations can either expand as new choices are introduced or shrink as a result of resolved ambiguitiesin describing the parser it is thus appropriate and convenient to use generation terminologythe parser keeps a set of states for each position in the input describing all pending derivations2 these state sets together form the earley charta state is of the where x is a nonterminal of the grammar a and it are strings of nonterminals andor terminals and i and k are indices into the input stringstates are derived from productions in the grammarthe above state is derived from a corresponding production with the following semantics a state with the dot to the right of the entire rhs is called a complete state since it indicates that the lefthand side nonterminal has been fully expandedour description of earley parsing omits an optional feature of earley states the lookahead stringearley algorithm allows for an adjustable amount of lookahead during parsing in order to process lr grammars deterministically parsers where possiblethe addition of lookahead is orthogonal to our extension to probabilistic grammars so we will not include it herethe operation of the parser is defined in terms of three operations that consult the current set of states and the current input symbol and add new states to the chartthis is strongly suggestive of state transitions in finitestate models of language parsing etcthis analogy will be explored further in the probabilistic formulation later onthe three types of transitions operate as followspredictionfor each statewhere y is a nonterminal anywhere in the rhs and for all rules y v expanding y add states i jyva state produced by prediction is called a predicted stateeach prediction corresponds to a potential expansion of a nonterminal in a leftmost derivation where a is a terminal symbol that matches the current input x1 add the state a state produced by scanning is called a scanned statescanning ensures that the terminals produced in a derivation match the input stringcompletionfor each complete state a state produced by completion is called a completed stateeach completion corresponds to the end of a nonterminal expansion started by a matching prediction stepfor each input symbol and corresponding state set an earley parser performs all three operations exhaustively ie 
until no new states are generatedone crucial insight into the working of the algorithm is that although both prediction and completion feed themselves there are only a finite number of states that can possibly be producedtherefore recursive prediction and completion at each position have to terminate eventually and the parser can proceed to the next input via scanningto complete the description we need only specify the initial and final statesthe parser starts out with 0 o s where s is the sentence nonterminal after processing the last symbol the parser verifies that 1 0 has been produced where 1 is the length of the input xif at any intermediate stage a state set remains empty the parse can be aborted because an impossible prefix has been detectedstates with empty lhs such as those above are useful in other contexts as will be shown in section 54we will refer to them collectively as dummy statesdummy states enter the chart only as a result of initialization as opposed to being derived from grammar productionsit is easy to see that earley parser operations are correct in the sense that each chain of transitions corresponds to a possible derivationintuitively it is also true that a parser that performs these transitions exhaustively is complete ie it finds all possible derivationsformal proofs of these properties are given in the literature eg aho and ullman the relationship between earley transitions and derivations will be stated more formally in the next sectionthe parse trees for sentences can be reconstructed from the chart contentswe will illustrate this in section 5 when discussing viterbi parsestable 1 shows a simple grammar and a trace of earley parser operation on a sample sentenceearley parser can deal with any type of contextfree rule format even with null or productions ie those that replace a nonterminal with the empty stringsuch productions do however require special attention and make the algorithm and its description more complicated than otherwise necessaryin the following sections we assume that no null productions have to be dealt with and then summarize the necessary changes in section 47one might choose to simply preprocess the grammar to eliminate null productions a process which is also describeda stochastic contextfree grammar extends the standard contextfree formalism by adding probabilities to each production p where the rule probability p is usually written as pthis notation to some extent hides the fact that p is a conditional probability of production x 4 a being chosen given that x is up for expansionthe probabilities of all rules with the same nonterminal x on the lhs must therefore sum to unitycontextfreeness in a probabilistic setting translates into conditional independence of rule choicesas a result complete derivations have joint probabilities that are simply the products of the rule probabilities involvedthe probabilities of interest mentioned in section 1 can now be defined formallydefinition 1 the following quantities are defined relative to a scfg g a nonterminal x and a string x over the alphabet e of g where i v2 vk are strings of terminals and nonterminals x 4 a is a production of g and 12 is derived from vi by replacing one occurrence of x with a b the string probability p is the sum of the probabilities of all leftmost derivations x x producing x from x c the sentence probability p is the string probability given the start symbol s of g by definition this is also the probability p assigned to x by the grammar g d the prefix probability p is the 
sum of the probabilities of all sentence strings having x as a prefix in the following we assume that the probabilities in a scfg are proper and consistent as defined in booth and thompson and that the grammar contains no useless nonterminals these restrictions ensure that all nonterminals define probability measures over strings ie p is a proper distribution over x for all xformal definitions of these conditions are given in appendix ain order to define the probabilities associated with parser operation on a scfg we need the concept of a path or partial derivation executed by the earley parserdefinition 2 a an earley path or simply path is a sequence of earley states linked by prediction scanning or completionfor the purpose of this definition we allow scanning to operate in quotgeneration modequot ie all states with terminals to the right of the dot can be scanned not just those matching the inputnote that the definition of path length is somewhat counterintuitive but is motivated by the fact that only scanned states correspond directly to input symbolsthus the length of a path is always the same as the length of the input string it generatesa constrained path starting with the initial state contains a sequence of states from state set 0 derived by repeated prediction followed by a single state from set 1 produced by scanning the first symbol followed by a sequence of states produced by completion followed by a sequence of predicted states followed by a state scanning the second symbol and so onthe significance of earley paths is that they are in a onetoone correspondence with leftmost derivationsthis will allow us to talk about probabilities of derivations strings and prefixes in terms of the actions performed by earley parserfrom now on we will use quotderivationquot to imply a leftmost derivationlemma 1 deriving a prefix x0_1 of the input b there is a onetoone mapping between partial derivations and earley paths such that each production x v applied in a derivation corresponds to a predicted earley state x v is the invariant underlying the correctness and completeness of earley algorithm it can be proved by induction on the length of a derivation the slightly stronger form follows from and the way possible prediction steps are definedsince we have established that paths correspond to derivations it is convenient to associate derivation probabilities directly with pathsthe uniqueness condition above which is irrelevant to the correctness of a standard earley parser justifies counting of paths in lieu of derivationsthe probability p of a path p is the product of the probabilities of all rules used in the predicted states occurring in p lemma 2 note that when summing over all paths quotstarting with the initial statequot summation is actually over all paths starting with s by definition of the initial state 0 s follows directly from our definitions of derivation probability string probability path probability and the onetoone correspondence between paths and derivations established by lemma 1 follows from by using s as the start nonterminalto obtain the prefix probability in we need to sum the probabilities of all complete derivations that generate x as a prefixthe constrained paths ending in scanned states represent exactly the beginnings of all such derivationssince the grammar is assumed to be consistent and without useless nonterminals all partial derivations can be completed with probability onehence the sum over the constrained incomplete paths is the soughtafter sum over all 
complete derivations generating the prefixsince string and prefix probabilities are the result of summing derivation probabilities the goal is to compute these sums efficiently by taking advantage of the earley control structurethis can be accomplished by attaching two probabilistic quantities to each earley state as followsthe terminology is derived from analogous or similar quantities commonly used in the literature on hidden markov models and in baker the following definitions are relative to an implied input string x a the forward probability a is the sum of the probabilities of all constrained paths of length i that end in state kx b the inner probability 7p is the sum of the probabilities of all paths of length i k that start in state k kx au and end in kx ap and generate the input symbols xk x11it helps to interpret these quantities in terms of an unconstrained earley parser that operates as a generator emittingrather than recognizingstringsinstead of tracking all possible derivations the generator traces along a single earley path randomly determined by always choosing among prediction steps according to the associated rule probabilitiesnotice that the scanning and completion steps are deterministic once the rules have been chosenintuitively the forward probability aip is the probability of an earley generator producing the prefix of the input up to position i 1 while passing through state kx ayou at position ihowever due to leftrecursion in productions the same state may appear several times on a path and each occurrence is counted toward the total athus a is really the expected number of occurrences of the given state in state set ihaving said that we will refer to a simply as a probability both for the sake of brevity and to keep the analogy to the hmm terminology of which this is a generalizationnote that for scanned states a is always a probability since by definition a scanned state can occur only once along a paththe inner probabilities on the other hand represent the probability of generating a substring of the input from a given nonterminal using a particular productioninner probabilities are thus conditional on the presence of a given nonterminal x with expansion starting at position k unlike the forward probabilities which include the generation history starting with the initial statethe inner probabilities as defined here correspond closely to the quantities of the same name in baker the sum of y of all states with a given lhs x is exactly baker inner probability for xthe following is essentially a restatement of lemma 2 in terms of forward and inner probabilitiesit shows how to obtain the sentence and string probabilities we are interested in provided that forward and inner probabilities can be computed effectivelythe following assumes an earley chart constructed by the parser on an input string x with ix i 1 a provided that s l x0k_ixv is a possible leftmost derivation of the grammar the probability that a nonterminal x generates the substring xk x_1 can be computed as the sum p e yickx a ikx a b in particular the string probability p can be computed as7 the restriction in that x be preceded by a possible prefix is necessary since the earley parser at position i will only pursue derivations that are consistent with the input up to position ithis constitutes the main distinguishing feature of earley parsing compared to the strict bottomup computation used in the standard inside probability computation there inside probabilities for all positions and nonterminals are 
computed regardless of possible prefixesforward and inner probabilities not only subsume the prefix and string probabilities they are also straightforward to compute during a run of earley algorithmin fact if it were not for leftrecursive and unit productions their computation would be trivialfor the purpose of exposition we will therefore ignore the technical complications introduced by these productions for a moment and then return to them once the overall picture has become clearduring a run of the parser both forward and inner probabilities will be attached to each state and updated incrementally as new states are created through one of the three types of transitionsboth probabilities are set to unity for the initial state 0 sthis is consistent with the interpretation that the initial state is derived from a dummy production 4 s for which no alternatives existparsing then proceeds as usual with the probabilistic computations detailed belowthe probabilities associated with new states will be computed as sums of various combinations of old probabilitiesas new states are generated by prediction scanning and completion certain probabilities have to be accumulated corresponding to the multiple paths leading to a statethat is if the same state is generated multiple times the previous probability associated with it has to be incremented by the new contribution just computedstates and probability contributions can be generated in any order as long as the summation for one state is finished before its probability enters into the computation of some successor stateappendix b2 suggests a way to implement this incremental summationnotationa few intuitive abbreviations are used from here on to describe earley transitions succinctly to avoid unwieldy e notation we adopt the following conventionthe expression x y means that x is computed incrementally as a sum of various y terms which are computed in some order and accumulated to finally yield the value of x8 transitions are denoted by with predecessor states on the left and successor states on the right the forward and inner probabilities of states are notated in brackets after each state eg kx aytt a7 is shorthand for a ayu 7 y p note that only the forward probability is accumulated 7 is not used in this steprationale a is the sum of all path probabilities leading up to kx ayit times the probability of choosing production y v the value y is just a special case of the definition i kx aap a 7 i 1 kx aap a71 for all states with terminal a matching input at position ithen a 7 rationalescanning does not involve any new choices since the terminal was already selected as part of the production during predictionthen i j1 note that aquot is not usedrationaleto update the old forwardinner probabilities a and y to a and y respectively the probabilities of all paths expanding y v have to be factored inthese are exactly the paths summarized by the inner probability yquotthe standard earley algorithm together with the probability computations described in the previous section would be sufficient if it were not for the problem of recursion in the prediction and completion stepsthe nonprobabilistic earley algorithm can stop recursing as soon as all predictionscompletions yield states already contained in the current state setfor the computation of probabilities however this would mean truncating the probabilities resulting from the repeated summing of contributions451 prediction loopsas an example consider the following simple leftrecursive scfg where q 1 p 
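Before the example is traced below, a small numerical sketch may help show what is at stake: for a left-recursive rule, repeated prediction keeps multiplying in the same rule probability, and the correct forward probability is the resulting geometric series, which the closed-form left-corner computation described just below obtains by inverting I − P_L. The concrete productions S → S a [p] and S → a [q] used here are an assumption made only to have something runnable; they are not necessarily the grammar the example refers to.

```python
import numpy as np

# Hypothetical left-recursive grammar of the shape just introduced:
#   S -> S a   [p]
#   S -> a     [q]      with q = 1 - p
p = 0.4
q = 1.0 - p

nonterminals = ["S"]
n = len(nonterminals)

# Probabilistic left-corner relation P_L:
# P_L[X, Y] = total probability of choosing a production for X whose
# leftmost right-hand-side symbol is the nonterminal Y.
P_L = np.zeros((n, n))
P_L[0, 0] = p            # only S -> S a contributes a nonterminal left corner

# Reflexive, transitive closure R_L = I + P_L + P_L^2 + ... = (I - P_L)^{-1}
R_L = np.linalg.inv(np.eye(n) - P_L)

print(R_L[0, 0])                      # 1/(1 - p) = 1/q
print(sum(p**k for k in range(50)))   # truncated geometric series, same value
```

Both printed values agree (up to truncation), which is the point of the closed form: one matrix inversion per grammar replaces an unbounded prediction loop.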
nonprobabilistically the prediction loop at position 0 would stop after producing the states corresponding to just two out of an infinity of possible pathsthe correct forward probabilities are obtained as a sum of infinitely many terms accounting for all possible paths of length 1in these sums each p corresponds to a choice of the first production each q to a choice of the second productionif we did not care about finite computation the resulting geometric series could be computed by letting the prediction loop continue indefinitelyfortunately all repeated prediction steps including those due to leftrecursion in the productions can be collapsed into a single modified prediction step and the corresponding sums computed in closed formfor this purpose we need a probabilistic version of the wellknown parsing concept of a left corner which is also at the heart of the prefix probability algorithm of jelinek and lafferty the following definitions are relative to a given scfg g a two nonterminals x and y are said to be in a leftcorner relation b the probabilistic leftcorner relationl pl pl is the matrix of probabilities p defined as the total probability of choosing a production for x that has y as a left corner d the probabilistic reflexive transitive leftcorner relation rl rl is a matrix of probability sums reach r is defined as a series where we use the delta function defined as s 1 if x y and 6 0 if x ythe recurrence for rl can be conveniently written in matrix notation from which the closedform solution is derived an existence proof for rl is given in appendix aappendix b31 shows how to speed up the computation of rl by inverting only a reduced version of the matrix i plthe significance of the matrix rl for the earley algorithm is that its elements are the sums of the probabilities of the potentially infinitely many prediction paths leading from a state kx zp to a predicted state y v via any number of intermediate statesrl can be computed once for each grammar and used for tablelookup in the following modified prediction step10 if a probabilistic relation r is replaced by its settheoretic version r ie e r iff r 0 then the closure operations used here reduce to their traditional discrete counterparts hence the choice of terminology i kx azit a 7 i iy v for all productions y v such that r is nonzerothen the new r factor in the updated forward probability accounts for the sum of all path probabilities linking z to yfor z y this covers the case of a single step of prediction are 1 always since rl is defined as a reflexive closure may imply an infinite summation and could lead to an infinite loop if computed naivelyhowever only unit productionsquot can give rise to cyclic completions where q 1 p presented with the input a after one cycle of prediction the earley chart contains the following statesthe 11 factors are a result of the leftcorner sum 1 q q2 1after scanning 0s a completion without truncation would enter an infinite loopfirst ot s is completed yielding a complete state ot 4 s which allows 0s t to be completed leading to another complete state for s etcthe nonprobabilistic earley parser can just stop here but as in prediction this would lead to truncated probabilitiesthe sum of probabilities that needs to be computed to arrive at the correct result contains infinitely many terms one for each possible loop through the t s productioneach such loop adds a factor of q to the forward and inner probabilitiesthe summations for all completed states turn out as the approach taken here to compute 
exact probabilities in cyclic completions is mostly analogous to that for leftrecursive predictionsthe main difference is that unit productions rather than leftcorners form the underlying transitive relationbefore proceeding we can convince ourselves that this is indeed the only case we have to worry aboutlemma 4 let ki x1 al x2 k2x2 a2x3 quot kx ax1 be a completion cycle ie ki a1 ac x2 xc1then it must be the case that ai a2 a ie all productions involved are unit productions xl x2x x1 proof for all completion chains it is true that the start indices of the states are monotonically increasing ki k2 from ki cc it follows that ki k2 kcbecause the current position also refers to the same input index in all states all nonterminals xi x2 x have been expanded into the same substring of the input between ki and the current positionby assumption the grammar contains no nonterminals that generate 612 therefore we must have ai a2 a qed0 we now formally define the relation between nonterminals mediated by unit productions analogous to the leftcorner relationthe following definitions are relative to a given scfg g as before a matrix inversion can compute the relation ru in closed form the existence of ru is shown in appendix athe modified completion loop in the probabilistic earley parser can now use the ru matrix to collapse all unit completions into a single stepnote that we still have to do iterative completion on nonunit productions jy v arr i kx aza ctryri kx azit a y for all y z such that r is nonzero and y v is not a unit production or v e ethen consider the grammar where q 1 p this highly ambiguous grammar generates strings of any number of a using all possible binary parse trees over the given number of terminalsthe states involved in parsing the string aaa are listed in table 2 along with their forward and inner probabilitiesthe example illustrates how the parser deals with leftrecursion and the merging of alternative subparses during completionsince the grammar has only a single nonterminal the leftcorner matrix pl has rank 1 consequently the example trace shows the factor p1 being introduced into the forward probability terms in the prediction stepsthe sample string can be parsed as either or a each parse having a probability of p3q2the total string probability is thus 2p3q2 the computed a and y values for the final statethe a values for the scanned states in sets 1 2 and 3 are the prefix probabilities for a aa and aaa respectively p 1 p q earley chart as constructed during the parse of aaa with the grammar in the two columns to the right in list the forward and inner probabilities respectively for each statein both a and columns the separates old factors from new ones addition indicates multiple derivations of the same statenull productions x e introduce some complications into the relatively straightforward parser operation described so far some of which are due specifically to the probabilistic aspects of parsingthis section summarizes the necessary modifications to process null productions correctly using the previous description as a baselineour treatment of null productions follows the formulation of graham harrison and ruzzo rather than the original one in earley 471 computing 1expansion probabilitiesthe main problem with null productions is that they allow multiple predictioncompletion cycles in between scanning steps our strategy will be to collapse all predictions and completions due to chains of null productions into the regular prediction and completion steps not unlike the way 
recursive predictionscompletions were handled in section 45a prerequisite for this approach is to precompute for all nonterminals x the probability that x expands to the empty stringnote that this is another recursive problem since x itself may not have a null production but expand to some nonterminal y that doescomputation of p for all x can be cast as a system of nonlinear equations as followsfor each x let ex be an abbreviation for pfor example let x have productions the semantics of contextfree rules imply that x can only expand to e if all the rhs nonterminals in one of x productions expand to e translating to probabilities we obtain the equation in other words each production contributes a term in which the rule probability is multiplied by the product of the e variables corresponding to the rhs nonterminals unless the rhs contains a terminal the resulting nonlinear system can be solved by iterative approximationeach variable ex is initialized to p and then repeatedly updated by substituting in the equation righthand sides until the desired level of accuracy is attainedconvergence is guaranteed since the ex values are monotonically increasing and bounded above by the true values p l y relationthis reachability criterion has to be extended in the presence of null productionsspecifically if x has a production x y_i ya then y is a left corner of x iff yi y_1 all have a nonzero probability of expanding to the contribution of such a production to the leftcorner probability p is the old prediction procedure can now be modified in two stepsfirst replace the old pl relation by the one that takes into account null productions as sketched abovefrom the resulting pl compute the reflexive transitive closure rl and use it to generate predictions as beforesecond when predicting a left corner y with a production y y_1ya add states for all dot positions up to the first rhs nonterminal that cannot expand to e say from x y1 y_i ya through x yi_i yawe will call this procedure quotspontaneous dot shiftingquot it accounts precisely for those derivations that expand the rhs prefix y1 y_1 without consuming any of the input symbolsthe forward and inner probabilities of the states thus created are those of the first state x y1 y_1ya multiplied by factors that account for the implied eexpansionsthis factor is just the product li ey where j is the dot position473 completion with null productionsmodification of the completion step follows a similar patternfirst the unit production relation has to be extended to allow for unit production chains due to null productionsa rule x yi y11 yyii yi can effectively act as a unit production that links x and y if all other nonterminals on the rhs can expand to e its contribution to the unit production relation p will then be p ey koi from the resulting revised pu matrix we compute the closure ru as usualthe second modification is another instance of spontaneous dot shiftingwhen completing a state x ayit and moving the dot to get x ayit additional states have to be added obtained by moving the dot further over any nonterminals in that have nonzero expansion probabilityas in prediction forward and inner probabilities are multiplied by the corresponding expansion probabilities474 eliminating null productionsgiven these added complications one might consider simply eliminating all productions in a preprocessing stepthis is mostly straightforward and analogous to the corresponding procedure for nonprobabilistic cfgs the main difference is the updating of rule probabilities for 
which the expansion probabilities are again neededandreas stolcke efficient probabilistic contextfree parsing the crucial step in this procedure is the addition of variants of the original productions that simulate the null productions by deleting the corresponding nonterminals from the rhsthe spontaneous dot shifting described in the previous sections effectively performs the same operation on the fly as the rules are used in prediction and completionthe probabilistic extension of earley parser preserves the original control structure in most aspects the major exception being the collapsing of cyclic predictions and unit completions which can only make these steps more efficienttherefore the complexity analysis from earley applies and we only summarize the most important results herethe worstcase complexity for earley parser is dominated by the completion step which takes 0 for each input position 1 being the length of the current prefixthe total time is therefore 0 for an input of length 1 which is also the complexity of the standard insideoutside and lri algorithmsfor grammars of bounded ambiguity the incremental perword cost reduces to 0 0 totalfor deterministic cfgs the incremental cost is constant 0 totalbecause of the possible start indices each state set can contain 0 earley states giving 0 worstcase space complexity overallapart from input length complexity is also determined by grammar sizewe will not try to give a precise characterization in the case of sparse grammars however for fully parameterized grammars in cnf we can verify the scaling of the algorithm in terms of the number of nonterminals n and verify that it has the same 0 time and space requirements as the insideoutside and lri algorithmsthe completion step again dominates the computation which has to compute probabilities for at most 0 statesby organizing summations and so that 7quot are first summed by lhs nonterminals the entire completion operation can be accomplished in 0the onetime cost for the matrix inversions to compute the leftcorner and unit production relation matrices is also 0this section discusses extensions to the earley algorithm that go beyond simple parsing and the computation of prefix and string probabilitiesthese extensions are all quite straightforward and well supported by the original earley chart structure which leads us to view them as part of a single unified algorithm for solving the tasks mentioned in the introductiona viterbi parse for a string x in a grammar g is a leftmost derivation that assigns maximal probability to x among all possible derivations for xboth the definition of viterbi parse and its computation are straightforward generalizations of the corresponding notion for hidden markov models where one computes the viterbi path through an hmmprecisely the same approach can be used in the earley parser using the fact that each derivation corresponds to a paththe standard computational technique for viterbi parses is applicable herewherever the original parsing procedure sums probabilities that correspond to alternative derivations of a grammatical entity the summation is replaced by a maximizationthus during the forward pass each state must keep track of the maximal path probability leading to it as well as the predecessor states associated with that maximum probability pathonce the final state is reached the maximum probability parse can be recovered by tracing back the path of quotbestquot predecessor statesthe following modifications to the probabilistic earley parser implement 
the forward phase of the viterbi computationonce the final state is reached a recursive procedure can recover the parse tree associated with the viterbi parsethis procedure takes an earley state i kx ap as input and produces the viterbi parse for the substring between k and i as output the result will be a partial parse tree with children missing from the root nodeviterbi parse andreas stolcke efficient probabilistic contextfree parsing as well as t viterbiparse adjoin t to t as the rightmost child at the root and return t the rule probabilities in a scfg can be iteratively estimated using the them algorithm given a sample corpus d the estimation procedure finds a set of parameters that represent a local maximum of the grammar likelihood function p which is given by the product of the string probabilities ie the samples are assumed to be distributed identically and independentlythe two steps of this algorithm can be briefly characterized as followsestep compute expectations for how often each grammar rule is used given the corpus d and the current grammar parameters mstep reset the parameters so as to maximize the likelihood relative to the expected rule counts found in the estepthis procedure is iterated until the parameter values convergeit can be shown that each round in the algorithm produces a likelihood that is at least as high as the previous one the them algorithm is therefore guaranteed to find at least a local maximum of the likelihood functionthem is a generalization of the wellknown baumwelch algorithm for hmm estimation the original formulation for the case of scfgs is attributable to baker for scfgs the estep involves computing the expected number of times each production is applied in generating the training corpusafter that the mstep consists of a simple normalization of these counts to yield the new production probabilitiesin this section we examine the computation of production count expectations required for the estepthe crucial notion introduced by baker for this purpose is the quotouter probabilityquot of a nonterminal or the joint probability that the nonterminal is generated with a given prefix and suffix of terminalsessentially the same method can be used in the earley framework after extending the definition of outer probabilities to apply to arbitrary earley statesgiven a string x ix1 i the outer probability 0 of an earley state is the sum of the probabilities of all paths that outer probabilities complement inner probabilities in that they refer precisely to those parts of complete paths generating x not covered by the corresponding inner probability ytherefore the choice of the production x apt is not part of the outer probability associated with a state kx aain fact the definition makes no reference to the first part a of the rhs all states sharing the same k x and it will have identical outer probabilitiesintuitively f3 is the probability that an earley parser operating as a string generator yields the prefix xok1 and the suffix while passing through state kx a at position i as was the case for forward probabilities c3 is actually an expectation of the number of such states in the path as unit production cycles can result in multiple occurrences for a single stateagain we gloss over this technicality in our terminologythe name is motivated by the fact that 3 reduces to the quotouter probabilityquot of x as defined in baker if the dot is in final position521 computing expected production countsbefore going into the details of computing outer probabilities we 
describe their use in obtaining the expected rule counts needed for the estep in grammar estimationlet c denote the expected number of uses of production x a in the derivation of string x alternatively c is the expected number of times that x a is used for prediction in a complete earley path generating xlet c be the number of occurrences of predicted states based on production x a along a path p the last summation is over all predicted states based on production x athe quantity p is the sum of the probabilities of all paths passing through i x ainner and outer probabilities have been defined such that this quantity is obtained precisely as the product of the corresponding of and 13thus v xthe sum can be computed after completing both forward and backward passes by scanning the chart for predicted states522 computing outer probabilitiesthe outer probabilities are computed by tracing the complete paths from the final state to the start state in a single backward pass over the earley chartonly completion and scanning steps need to be traced backreverse scanning leaves outer probabilities unchanged so the only operation of concern is reverse completionwe describe reverse transitions using the same notation as for their forward counterparts annotating each state with its outer and inner probabilitiesreverse completioni y v 3quot7quot i kx ayyou 07 j kx 4 ayp r3cy1 for all pairs of states 1y 4 v and kx aybt in the chartthen 0 0 the inner probability 7 is not usedrationalerelative to 3 0 is missing the probability of expanding y which is filled in from 7quotthe probability of the surrounding of y is the probability of the surrounding of x plus the choice of the rule of production for x and the expansion of the partial lhs a which are together given by 7note that the computation makes use of the inner probabilities computed in the forward passthe particular way in which 7 and 3 were defined turns out to be convenient here as no reference to the production probabilities themselves needs to be made in the computationas in the forward pass simple reverse completion would not terminate in the presence of cyclic unit productionsa version that collapses all such chains of productions is given below for all pairs of states 1y 4 v and kx azp in the chart such that the unit production relation r is nonzerothen the first summation is carried out once for each state j kx azyou whereas the second summation is applied for each choice of z but only if x azp is not itself a unit production ie aft c rationalethis increments 13quot the equivalent of r times accounting for the infinity of surroundings in which y can occur if it can be derived through cyclic productionsnote that the computation of 0 is unchanged since ryquot already includes an infinity of cyclically generated subtrees for y where appropriatethe estimation procedure described above are only guaranteed to find locally optimal parameter estimatesunfortunately it seems that in the case of unconstrained scfg estimation local maxima present a very real problem and make success dependent on chance and initial conditions pereira and schabes showed that partially bracketed input samples can alleviate the problem in certain casesthe bracketing information constrains the parse of the inputs and therefore the parameter estimates steering it clear from some of the suboptimal solutions that could otherwise be foundan earley parser can be minimally modified to take advantage of bracketed strings by invoking itself recursively when a left parenthesis is 
encounteredthe recursive instance of the parser is passed any predicted states at that position processes the input up to the matching right parenthesis and hands complete states back to the invoking instancethis technique is efficient as it never explicitly rejects parses not consistent with the bracketingit is also convenient as it leaves the basic parser operations including the lefttoright processing and the probabilistic computations unchangedfor example prefix probabilities conditioned on partial bracketings could be computed easily this wayparsing bracketed inputs is described in more detail in stolcke where it is also shown that bracketing gives the expected improved efficiencyfor example the modified earley parser processes fully bracketed inputs in linear timein many applications ungrammatical input has to be dealt with in some waytraditionally it has been seen as a drawback of topdown parsing algorithms such as earley that they sacrifice quotrobustnessquot ie the ability to find partial parses in an ungrammatical input for the efficiency gained from topdown prediction one approach to the problem is to build robustness into the grammar itselfin the simplest case one could add toplevel productions where x can expand to any nonterminal including an quotunknown wordquot categorythis grammar will cause the earley parser to find all partial parses of substrings effectively behaving like a bottomup parser constructing the chart in lefttoright fashionmore refined variations are possible the toplevel productions could be used to model which phrasal categories can likely follow each otherthis probabilistic information can then be used in a pruning version of the earley parser to arrive at a compromise between robust and expectationdriven parsingan alternative method for making earley parsing more robust is to modify the parser itself so as to accept arbitrary input and find all or a chosen subset of possible substring parsesin the case of earley parser there is a simple extension to accomplish just that based on the notion of a wildcard state where the wildcard stands for an arbitrary continuation of the rhsduring prediction a wildcard to the left of the dot causes the chart to be seeded with dummy states x for each phrasal category x of interestconversely a minimal modification to the standard completion step allows the wildcard states to collect all abutting substring parses i 11one advantage over the grammarmodifying approach is that it can be tailored to use various criteria at runtime to decide which partial parses to followin finitestate parsing one often makes use of the forward probabilities for pruning partial parses before having seen the entire inputpruning is formally straightforward in earley parsers in each state set rank states according to their a values then remove those states with small probabilities compared to the current best candidate or simply those whose rank exceeds a given limitnotice this will not only omit certain parses but will also underestimate the forward and inner probabilities of the derivations that remainpruning procedures have to be evaluated empirically since they invariably sacrifice completeness and in the case of the viterbi algorithm optimality of the resultwhile earleybased online pruning awaits further study there is reason to believe the earley framework has inherent advantages over strategies based only on bottomup information contextfree forward probabilities include all available probabilistic information available from an input prefix 
whereas the usual inside probabilities do not take into account the nonterminal prior probabilities that result from the topdown relation to the start stateusing topdown constraints does not necessarily mean sacrificing robustness as discussed in section 54on the contrary by using earleystyle parsing with a set of carefully designed and estimated quotfaulttolerantquot toplevel productions it should be possible to use probabilities to better advantage in robust parsingthis approach is a subject of ongoing work in the context of tightcoupling scfgs with speech decoders one of the major alternative contextfree parsing paradigms besides earley algorithm is lr parsing a comparison of the two approaches both in their probabilistic and nonprobabilistic aspects is interesting and provides useful insightsthe following remarks assume familiarity with both approacheswe sketch the fundamental relations as well as the important tradeoffs between the two frameworks13 like an earley parser lr parsing uses dotted productions called items to keep track of the progress of derivations as the input is processedthe start indices are not part of lr items we may therefore use the term quotitemquot to refer to both lr items and earley states without start indicesan earley parser constructs sets of possible items on the fly by following all possible partial derivationsan lr parser on the other hand has access to a complete list of sets of possible items computed beforehand and at runtime simply follows transitions between these setsthe item sets are known as the quotstatesquot of the lr parsera grammar is suitable for lr parsing if these transitions can be performed deterministically by considering only the next input and the contents of a shiftreduce stackgeneralized lr parsing is an extension that allows parallel tracking of multiple state transitions and stack actions by using a graphstructured stack probabilistic lr parsing is based on lr items augmented with certain conditional probabilitiesspecifically the probability p associated with an lr item x 4 att is in our terminology a normalized forward probability where the denominator is the probability of the current prefixlr item probabilities are thus conditioned forward probabilities and can be used to compute conditional probabilities of next words p is the sum of the p of all items having x to the right of the dot notice that the definition of p is independent of i as well as the start index of the corresponding earley statetherefore to ensure that item probabilities are correct independent of input position item sets would have to be constructed so that their probabilities are unique within each sethowever this may be impossible given that the probabilities can take on infinitely many values and in general depend on the history of the parsethe solution used by wright is to collapse items whose probabilities are within a small tolerance e and are otherwise identicalthe same threshold is used to simplify a number of other technical problems eg leftcorner probabilities are computed by iterated prediction until the resulting changes in probabilities are smaller than e subject to these approximations then a probabilistic lr parser can compute prefix probabilities by multiplying successive conditional probabilities for the words it sees16 as an alternative to the computation of lr transition probabilities from a given scfg one might instead estimate such probabilities directly from traces of parses andreas stolcke efficient probabilistic contextfree parsing on a 
training corpusbecause of the imprecise relationship between lr probabilities and scfg probabilities it is not clear if the model thus estimated corresponds to any particular scfg in the usual sensebriscoe and carroll turn this incongruity into an advantage by using the lr parser as a probabilistic model in its own right and show how lr probabilities can be extended to capture noncontextfree contingenciesthe problem of capturing more complex distributional constraints in natural language is clearly important but well beyond the scope of this articlewe simply remark that it should be possible to define quotinterestingquot nonstandard probabilities in terms of earley parser actions so as to better model noncontextfree phenomenaapart from such considerations the choice between lr methods and earley parsing is a typical spacetime tradeoffeven though an earley parser runs with the same linear time and space complexity as an lr parser on grammars of the appropriate lr class the constant factors involved will be much in favor of the lr parser as almost all the work has already been compiled into its transition and action tablehowever the size of lr parser tables can be exponential in the size of the grammar furthermore if the generalized lr method is used for dealing with nondeterministic grammars the runtime on arbitrary inputs may also grow exponentiallythe bottom line is that each application needs have to be evaluated against the pros and cons of both approaches to find the best solutionfrom a theoretical point of view the earley approach has the inherent appeal of being the more general solution to the computation of the various scfg probabilitiesthe literature on earleybased probabilistic parsers is sparse presumably because of the precedent set by the insideoutside algorithm which is more naturally formulated as a bottomup algorithmboth nakagawa and easeler use a nonprobabilistic earley parser augmented with quotword matchquot scoringthough not truly probabilistic these algorithms are similar to the viterbi version described here in that they find a parse that optimizes the accumulated matching scores prediction and completion loops do not come into play since no precise inner or forward probabilities are computedmagerman and marcus are interested primarily in scoring functions to guide a parser efficiently to the most promising parsesearleystyle topdown prediction is used only to suggest worthwhile parses not to compute precise probabilities which they argue would be an inappropriate metric for natural language parsingcasacuberta and vidal exhibit an earley parser that processes weighted cfgs and performs a computation that is isomorphic to that of inside probabilities shown hereschabes adds both inner and outer probabilities to earley algorithm with the purpose of obtaining a generalized estimation algorithm for scfgsboth of these approaches are restricted to grammars without unbounded ambiguities which can arise from unit or null productionsdan jurafsky wrote an earley parser for the berkeley restaurant project speech understanding system that originally computed forward probabilities for restricted grammars the parser now uses the method described here to provide exact scfg prefix and nextword probabilities to a tightly coupled speech decoder an essential idea in the probabilistic formulation of earley algorithm is the collapsing of recursive predictions and unit completion chains replacing both with lookups in precomputed matricesthis idea arises in our formulation out of the need to 
compute probability sums given as infinite seriesgraham harrison and ruzzo use a nonprobabilistic version of the same technique to create a highly optimized earleylike parser for general cfgs that implements prediction and completion by operations on boolean matricesthe matrix inversion method for dealing with leftrecursive prediction is borrowed from the lri algorithm of jelinek and lafferty for computing prefix probabilities for scfgs in cnf18 we then use that idea a second time to deal with the similar recursion arising from unit productions in the completion stepwe suspect but have not proved that the earley computation of forward probabilities when applied to a cnf grammar performs a computation that is isomorphic to that of the lri algorithmin any case we believe that the parseroriented view afforded by the earley framework makes for a very intuitive solution to the prefix probability problem with the added advantage that it is not restricted to cnf grammarsalgorithms for probabilistic cfgs can be broadly characterized along several dimensionsone such dimension is whether the quantities entered into the parser chart are defined in a bottomup fashion or whether lefttoright constraints are an inherent part of their definition19 the probabilistic earley parser shares the inherent lefttoright character of the lri algorithm and contrasts with the bottomup io algorithmprobabilistic parsing algorithms may also be classified as to whether they are formulated for fully parameterized cnf grammars or arbitrary contextfree rules in this respect the earley approach contrasts with both the cnforiented io and lri algorithmsanother approach to avoiding the cnf constraint is a formulation based on probabilistic recursive transition networks the similarity goes further as both kupiec and our approach is based on state transitions and dotted productions turn out to be equivalent to rtn states if the rtn is constructed from a cfgwe have presented an earleybased parser for stochastic contextfree grammars that is appealing for its combination of advantages over existing methodsearley control structure let us the algorithm run with bestknown complexity on a number of grammar subclasses and no worse than standard bottomup probabilistic chart parsers on general scfgs and fully parameterized cnf grammarsunlike bottomup parsers it also computes accurate prefix probabilities incrementally while scanning its input along with the usual substring probabilitiesthe chart constructed during parsing supports both viterbi parse extraction and baum welch type rule probability estimation by way of a backward pass over the parser chartif the input comes with bracketing to indicate phrase structure this andreas stolcke efficient probabilistic contextfree parsing information can be easily incorporated to restrict the allowable parsesa simple extension of the earley chart allows finding partial parses of ungrammatical inputthe computation of probabilities is conceptually simple and follows directly earley parsing framework while drawing heavily on the analogy to finitestate language modelsit does not require rewriting the grammar into normal formthus the present algorithm fills a gap in the existing array of algorithms for scfgs efficiently combining the functionalities and advantages of several previous approachesin section 45 we defined the probabilistic leftcorner and unitproduction matrices rl and ru respectively to collapse recursions in the prediction and completion stepsit was shown how these matrices could be 
obtained as the result of matrix inversionsin this appendix we give a proof that the existence of these inverses is assured if the grammar is welldefined in the following three sensesthe terminology used here is taken from booth and thompson for an scfg g over an alphabet e with start symbol s we say that a g is proper iff for all nonterminals x the rulequot probabilities sum to unity ie where p is induced by the rule probabilities according to definition 1 c g has no useless nonterminals iff all nonterminals x appear in at least one derivation of some string x e e with nonzero probability ie p 0it is useful to translate consistency into quotprocessquot termswe can view an scfg as a stochastic stringrewriting process in which each step consists of simultaneously replacing all nonterminals in a sentential form with the righthand sides of productions randomly drawn according to the rule probabilitiesbooth and thompson show that the grammar is consistent if and only if the probability that stochastic rewriting of the start symbol s leaves nonterminals remaining after n steps goes to 0 as n oomore loosely speaking rewriting s has to terminate after a finite number of steps with probability 1 or else the grammar is inconsistentwe observe that the same property holds not only for s but for all nonterminals if the grammar has no useless terminalsif any nonterminal x admitted infinite derivations with nonzero probability then s itself would have such derivations since by assumption x is reachable from s with nonzero probabilityto prove the existence of rl and ru it is sufficient to show that the corresponding geometric series converge lemma 5 if g is a proper consistent scfg without useless nonterminals then the powers fl of the leftcorner relation and piz of the unit production relation converge to zero as n ooentry in the leftcorner matrix pl is the probability of generating y as the immediately succeeding leftcorner below xsimilarly entry in the nth power pi is the probability of generating y as the leftcorner of x with n 1 intermediate nonterminalscertainly 131quot is bounded above by the probability that the entire derivation starting at x terminates after n steps since a derivation could not terminate without expanding the leftmost symbol to a terminal but that probability tends to 0 as n oo and hence so must each entry in p for the unit production matrix pu a similar argument applies since the length of a derivation is at least as long as it takes to terminate any initial unit production chainif g is a proper consistent scfg without useless nonterminals then the series for rl and ru as defined above converge to finite nonnegative valuespi converging to 0 implies that the magnitude of pl largest eigenvalue is but the leftcorner relation is well defined for all q 1 namely 1 p1in this case the left fringe of the derivation is guaranteed to result in a terminal after finitely many steps but the derivation as a whole may never terminatethis appendix discusses some of the experiences gained from implementing the probabilistic earley parserb1 prediction because of the collapse of transitive predictions this step can be implemented in a very efficient and straightforward manneras explained in section 45 one has to perform a single pass over the current state set identifying all nonterminals z occurring to the right of dots and add states corresponding to all productions y v that are reachable through the leftcorner relation z l yas indicated in equation contributions to the forward probabilities of 
new states have to be summed when several paths lead to the same statehowever the summation in equation can be optimized if the a values for all old states with the same nonterminal z are summed first and then multiplied by rthese quantities are then summed over all nonterminals z and the result is once multiplied by the rule probability p to give the forward probability for the predicted stateb2 completion unlike prediction the completion step still involves iterationeach complete state derived by completion can potentially feed other completionsan important detail here is to ensure that all contributions to a state a and y are summed before proceeding with using that state as input to further completion stepsone approach to this problem is to insert complete states into a prioritized queuethe queue orders states by their start indices highest firstthis is because states corresponding to later expansions always have to be completed first before they can lead to the completion of expansions earlier on in the derivationfor each start index the entries are managed as a firstin firstout queue ensuring that the dependency graph formed by the states is traversed in breadthfirst orderthe completion pass can now be implemented as followsinitially all complete states from the previous scanning step are inserted in the queuestates are then removed from the front of the queue and used to complete other statesamong the new states thus produced complete ones are again added to the queuethe process iterates until no more states remain in the queuebecause the computation of probabilities already includes chains of unit productions states derived from such productions need not be queued which also ensures that the iteration terminatesa similar queuing scheme with the start index order reversed can be used for the reverse completion step needed in the computation of outer probabilities b3 efficient parsing with large sparse grammars during work with a moderatesized applicationspecific natural language grammar taken from the berp speech system we had an opportunity to optimize our implementation of the algorithmbelow we relate some of the lessons learned in the processb31 speeding up matrix inversionsboth prediction and completion steps make use of a matrix r defined as a geometric series derived from a matrix p both p and r are indexed by the nonterminals in the grammarthe matrix p is derived from the scfg rules and probabilities for an application using a fixed grammar the time taken by the precomputation of leftcorner and unit production matrices may not be crucial since it occurs offlinethere are cases however when that cost should be minimized eg when rule probabilities are iteratively reestimatedeven if the matrix p is sparse the matrix inversion can be prohibitive for large numbers of nonterminals n empirically matrices of rank n with a bounded number p of nonzero entries in each row can be inverted in time 0 whereas a full matrix of size n x n would require time 0in many cases the grammar has a relatively small number of nonterminals that have productions involving other nonterminals in a leftcorner only those nonterminals can have nonzero contributions to the higher powers of the matrix p this fact can be used to substantially reduce the cost of the matrix inversion needed to compute r let p be a subset of the entries of p namely only those elements indexed by nonterminals that have a nonempty row in p for example for the leftcorner computation p is obtained from p by deleting all rows and columns 
indexed by nonterminals that do not have productions starting with nonterminalslet i be the identity matrix over the same set of nonterminals as pthen r can be computed as iivp here r is the inverse of i p and denotes a matrix multiplication in which the left operand is first augmented with zero elements to match the dimensions of the right operand p the speedups obtained with this technique can be substantialfor a grammar with 789 nonterminals of which only 132 have nonterminal productions the leftcorner matrix was computed in 12 seconds inversion of the full matrix i p took 4 minutes 28 seconds21 b32 linking and bottomup filteringas discussed in section 48 the worstcase runtime on fully parameterized cnf grammars is dominated by the completion stephowever this is not necessarily true of sparse grammarsour experiments showed that the computation is dominated by the generation of earley states during the prediction stepsit is therefore worthwhile to minimize the total number of predicted states generated by the parsersince predicted states only affect the derivation if they lead to subsequent scanning we can use the next input symbol to constrain the relevant predictionsto this end we compute the extended leftcorner relation rif indicating which terminals can appear as left corners of which nonterminalsru is a boolean 21 these figures are not very meaningful for their absolute valuesall measurements were obtained on a sun sparcstation 2 with a commonlispclos implementation of generic sparse matrices that was not particularly optimized for this task matrix with rows indexed by nonterminals and columns indexed by terminalsit can be computed as the product where pu has a nonzero entry at iff there is a production for nonterminal x that starts with terminal a rl is the old leftcorner relationduring the prediction step we can ignore incoming states whose rhs nonterminal following the dot cannot have the current input as a leftcorner and then eliminate from the remaining predictions all those whose lhs cannot produce the current input as a leftcornerthese filtering steps are very fast as they involve only table lookupthis technique for speeding up earley prediction is the exact converse of the quotlinkingquot method described by pereira and shieber for improving the efficiency of bottomup parsersthere the extended leftcorner relation is used for topdown filtering the bottomup application of grammar rulesin our case we use linking to provide bottomup filtering for topdown application of productionson a test corpus this technique cut the number of generated predictions to almost onefourth and speeded up parsing by a factor of 33the corpus consisted of 1143 sentence with an average length of 465 wordsthe topdown prediction alone generated 991781 states and parsed at a rate of 590 milliseconds per sentencewith bottomup filtered prediction only 262287 states were generated resulting in 180 milliseconds per sentencethanks are due dan jurafsky and steve omohundro for extensive discussions on the topics in this paper and fernando pereira for helpful advice and pointersjerry feldman terry regier jonathan segal kevin thompson and the anonymous reviewers provided valuable comments for improving content and presentation
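To make the matrix computations described in the implementation appendix above concrete, the following is a minimal Python/NumPy sketch (not the paper's Common Lisp implementation) of the left-corner closure R_L = (I - P_L)^{-1} with the submatrix speedup of Section B.3.1, together with the extended left-corner relation R_LT = R_L * P_LT used for bottom-up filtering in Section B.3.2. The toy grammar, the function names, and the dense-matrix representation are illustrative assumptions only; a production implementation would use sparse matrices, as the text notes.

```python
import numpy as np

def left_corner_probs(rules, nonterminals):
    """P_L[X, Y] = total probability of productions X -> Y ... whose first
    right-hand-side symbol is the nonterminal Y."""
    idx = {nt: i for i, nt in enumerate(nonterminals)}
    P = np.zeros((len(nonterminals), len(nonterminals)))
    for (lhs, rhs), prob in rules.items():
        if rhs and rhs[0] in idx:          # left corner is a nonterminal
            P[idx[lhs], idx[rhs[0]]] += prob
    return P

def left_corner_closure(P):
    """R_L = I + P + P^2 + ... = (I - P)^{-1}, using the submatrix trick:
    only nonterminals with a non-empty row in P (i.e. with at least one
    production starting with a nonterminal) can contribute to the higher
    powers of P, so only that block needs to be inverted."""
    n = P.shape[0]
    active = np.flatnonzero(P.any(axis=1))
    R = np.eye(n)
    if active.size:
        sub = P[np.ix_(active, active)]                    # P'
        R_sub = np.linalg.inv(np.eye(active.size) - sub)   # (I - P')^{-1}
        padded = np.zeros((n, n))
        padded[np.ix_(active, active)] = R_sub
        R += padded @ P                                    # R = I + R' * P
    return R

def extended_left_corner(R_L, rules, nonterminals, terminals):
    """Boolean relation R_LT[X, a]: can terminal a occur as a left corner of
    nonterminal X?  Computed as R_L * P_LT and used to filter predictions
    against the next input symbol (bottom-up filtering)."""
    nt_idx = {nt: i for i, nt in enumerate(nonterminals)}
    t_idx = {t: j for j, t in enumerate(terminals)}
    P_LT = np.zeros((len(nonterminals), len(terminals)))
    for (lhs, rhs), _prob in rules.items():
        if rhs and rhs[0] in t_idx:
            P_LT[nt_idx[lhs], t_idx[rhs[0]]] = 1.0
    return (R_L @ P_LT) > 0

if __name__ == "__main__":
    nonterminals = ["S", "NP", "VP"]
    terminals = ["det", "n", "v"]
    rules = {                                  # (lhs, rhs) -> probability
        ("S", ("NP", "VP")): 1.0,
        ("NP", ("det", "n")): 0.7,
        ("NP", ("NP", "NP")): 0.3,             # left recursion: infinite series
        ("VP", ("v", "NP")): 1.0,
    }
    P_L = left_corner_probs(rules, nonterminals)
    R_L = left_corner_closure(P_L)
    print(R_L)                                 # e.g. R_L[NP, NP] = 1 / (1 - 0.3)
    print(extended_left_corner(R_L, rules, nonterminals, terminals))
```

The same closure computation applies unchanged to the unit-production relation: only the rule filter changes (productions X -> Y whose right-hand side is a single nonterminal), giving R_U = (I - P_U)^{-1}.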
J95-2002
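Continuing with the implementation appendix of the paper above, the prioritized queue used in the completion pass (Section B.2) can be sketched as below. The class name and its small API are illustrative choices of this sketch, not an interface from the paper.

```python
import heapq
from collections import deque

class CompletionQueue:
    """Orders complete Earley states as required by the completion pass:
    states with higher start indices come out first, and states sharing a
    start index come out first-in first-out, so that later expansions are
    finished before the earlier expansions that depend on them."""

    def __init__(self):
        self._heap = []   # max-heap over start indices (stored negated)
        self._fifo = {}   # start index -> deque of states

    def push(self, state, start_index):
        if start_index not in self._fifo:
            self._fifo[start_index] = deque()
            heapq.heappush(self._heap, -start_index)
        self._fifo[start_index].append(state)

    def pop(self):
        start_index = -self._heap[0]           # highest remaining start index
        queue = self._fifo[start_index]
        state = queue.popleft()
        if not queue:                           # exhausted this start index
            heapq.heappop(self._heap)
            del self._fifo[start_index]
        return state

    def __bool__(self):
        return bool(self._heap)
```

Completion would then loop while the queue is non-empty, popping a complete state, using it to complete waiting states, and pushing any newly completed states back, except those arising from unit productions, whose contributions are already folded into the precomputed R_U matrix; this exclusion is also what guarantees that the iteration terminates.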
An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. We describe an extension of Earley's parser for stochastic context-free grammars (SCFGs) that computes the following quantities, given an SCFG and an input string: (a) probabilities of successive prefixes being generated by the grammar; (b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; (c) most likely parse of the string; (d) posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. Probabilities (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to some normal form, and combines computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs and for finding partial parses and their likelihoods on ungrammatical inputs. An Earley chart is used for keeping track of all derivations that are consistent with the input.
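As a complement to the summary above, here is a minimal sketch of how incrementally computed prefix probabilities yield conditional next-word probabilities, the quantity needed for tight coupling with a speech decoder. The prefix_probability callable is a hypothetical stand-in for the parser's forward-probability computation, not an interface defined in the paper; a real implementation would read successive prefix probabilities off the Earley state sets in one left-to-right pass rather than reparsing the extended prefix for every candidate word.

```python
from typing import Callable, Iterable, List, Sequence, Tuple

def next_word_probability(prefix_probability: Callable[[List[str]], float],
                          prefix: Sequence[str],
                          word: str) -> float:
    """P(word | prefix) as the ratio of successive prefix probabilities:
    P(prefix + word is a sentence prefix) / P(prefix is a sentence prefix)."""
    denom = prefix_probability(list(prefix))
    if denom == 0.0:
        return 0.0                     # prefix cannot be generated by the grammar
    return prefix_probability(list(prefix) + [word]) / denom

def rank_continuations(prefix_probability: Callable[[List[str]], float],
                       prefix: Sequence[str],
                       vocabulary: Iterable[str]) -> List[Tuple[str, float]]:
    """Score every candidate next word; useful for pruning decoder hypotheses."""
    scored = [(w, next_word_probability(prefix_probability, prefix, w))
              for w in vocabulary]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Since the denominator for position i+1 is simply the numerator from position i, the conditional probabilities come at essentially no extra cost during left-to-right decoding.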
centering a framework for modeling the local coherence of discourse the original motivations for centering the basic definitions underlying the centering framework and the original theoretical claims this paper attempts to meet that need to accomplish this goal we have chosen to remove descriptions of many open research questions posed in grosz joshi and weinstein as well as solutions that were only partially developed we have also greatly shortened the discussion of criteria for and constraints on a possible semantic theory as a foundation for this work this paper concerns relationships among focus of attention choice of referring expression and perceived coherence of utterances within a discourse segmentit presents a framework and initial theory of centering intended to model the local component of attentional statethe paper examines interactions between local coherence and choice of referring expressions it argues that differences in coherence correspond in part to the inference demands made by different types of referring expressions given a particular attentional stateit demonstrates that the attentional state properties modeled by centering can account for these differencespreface our original paper on centering claimed that certain entities mentioned in an utterance were more central than others and that this property imposed constraints on a speaker use of different types of referring expressionscentering was proposed as a model that accounted for this phenomenonwe argued that the coherence of discourse was affected by the compatibility between centering properties of an utterance and choice of referring expressionsubsequently we revised and expanded the ideas presented thereinwe defined various centering constructs and proposed two centering rules in terms of these constructsa draft manuscript describing this elaborated centering framework and presenting some initial theoretical claims has been in wide circulation since 1986this draft has led to a number of papers by others on this topic and has been extensively cited but has never been publishedwe have been urged to publish the more detailed description of the centering framework and theory proposed in grosz joshi and weinstein so that an official version would be archivally availablethe task of completing and revising this draft became more daunting as time passed and more and more papers appeared on centeringmany of these papers proposed extensions to or revisions of the theory and attempted to answer questions posed in grosz joshi and weinstein it has become ever more clear that it would be useful to have a quotdefinitivequot statement of the original motivations for centering the basic definitions underlying the centering framework and the original theoretical claimsthis paper attempts to meet that needto accomplish this goal we have chosen to remove descriptions of many open research questions posed in grosz joshi and weinstein as well as solutions that were only partially developedwe have also greatly shortened the discussion of criteria for and constraints on a possible semantic theory as a foundation for this workthis paper presents an initial attempt to develop a theory that relates focus of attention choice of referring expression and perceived coherence of utterances within a discourse segmentthe research described here is a further development of several strands of previous researchit fits within a larger effort to provide an overall theory of discourse structure and meaningin this section we describe the larger 
research context of this work and then briefly discuss the previous work that led to itcentering fits within the theory of discourse structure developed by grosz and sidner grosz and sidner distinguish among three components of discourse structure a linguistic structure an intentional structure and an attentional stateat the level of linguistic structure discourses divide into constituent discourse segments an embedding relationship may hold between two segmentsthe intentional structure comprises intentions and relations among themthe intentions provide the basic rationale for the discourse and the relations represent the connections among these intentionsattentional state models the discourse participants focus of attention at any given point in the discoursechanges in attentional state depend on the intentional structure and on properties of the utterances in the linguistic structureeach discourse segment exhibits both local coherenceie coherence among the utterances in that segmentand global coherenceie coherence with other segments in the discoursecorresponding to these two levels of coherence are two components of attentional state the local level models changes in attentional state within a discourse segment and the global level models attentional state properties at the intersegmental levelgrosz and sidner argue that global coherence depends on the intentional structurethey propose that each discourse has an overall communicative purpose the discourse purpose and each discourse segment has an associated intention its discourse segment purpose the dp and dsp are speaker intentions they are correlates at the discourse level of the intentions grice argued underlay utterance meaning if a discourse is multiparty then the dsp for a given segment is an intention of the conversational participant who initiates that segmentlochbaum employs collaborative plans to model intentional structure and is thus able to integrate intentions of different participantssatisfaction of the dsps contributes to the satisfaction of the dprelationships between dsps provide the basic structural relationships for the discourse embeddings in the linguistic structure are derived from these relationshipsthe global coherence of a discourse depends on relationships among its dp and dspsgrosz and sidner model the globallevel component of the attentional state with a stack pushes and pops of focus spaces on the stack depend on intentional relationshipsthis paper is concerned with local coherence and its relationship to attentional state at the local levelcentering is proposed as a model of the locallevel component of attentional statewe examine the interactions between local coherence and choices of referring expressions and argue that differences in coherence correspond in part to the different demands for inference made by different types of referring expressions given a particular attentional statewe describe how the attentional state properties modeled by centering can account for these differencesthree pieces of previous research provide the background for this workgrosz defined two levels of focusing in discourse global and immediateparticipants were said to be globally focused on a set of entities relevant to the overall discoursethese entities may either have been explicitly introduced into the discourse or sufficiently closely related to such entities to be considered implicitly in focus in contrast immediate focusing referred to a more local focusing processone that relates to identifying the entity that an 
individual utterance most centrally concernssidner provided a detailed analysis of immediate focusing including a distinction between the current discourse focus and potential focishe gave algorithms for tracking immediate focus and rules that stated how the immediate focus could be used to identify the referents of pronouns and demonstrative noun phrases joshi and kuhn and joshi and weinstein provided initial results on the connection between changes in immediate focus and the complexity of inferences required to integrate a representation of the meaning of an individual utterance into a representation of the meaning of the discourse of which it was a partto avoid confusion with previous uses of the term quotfocusquot in linguistics they introduced the centering terminologytheir notions of quotforwardlookingquot and quotbackwardlookingquot centers correspond approximately to sidner potential foci and discourse focusin all of this work focusing whether global or immediate was seen to function to limit the inferences required for understanding utterances in a discoursegrosz and sidner were concerned with the inferences needed to interpret anaphoric expressions of various sorts they used focusing to order candidates as a result the need for search was greatly reduced and the use of inference could be restricted to determining whether a particular candidate was appropriate given the embedding utterance interpretationjoshi kuhn and weinstein were concerned with reducing the inferences required to integrate utterance meaning into discourse meaningthey used centering to determine an almost monadic predicate representation of an utterance in discourse they then used this representation to reduce the complexity of inferencein this paper we generalize and clarify certain of sidner results but adopt the quotcenteringquot terminologywe also abstract from sidner focusing algorithm to specify constraints on the centering processwe consider the relationship between coherence and inference load and examine how both interact with attentional state and choices in linguistic expressionthe remainder of this paper is organized as follows in section 2 we briefly describe the phenomena motivating the development of centering that this paper aims to explainsection 3 provides the basic definitions of centers and related definitions needed to present the theoretical claims of the paperin section 4 we state the main properties of the centering framework and the major claims of centering theoryin section 5 we discuss several factors that affect centering constraints and govern the centering rules given in section 6in section 7 we discuss applications of the rules and their ability to explain several discourse coherence phenomenain section 8 we briefly outline the properties of an underlying semantic framework that are required by centeringfinally in section 9 we conclude with a brief comparison of centering with the research that preceded it and a summary of research that expands on grosz joshi and weinstein in particular section 9 provides references to subsequent investigations of additional factors that control centering and examinations of its crosslinguistic applicability and empirical validitydiscourses are more than mere sequences of utterancesfor a sequence of utterances to be a discourse it must exhibit coherencein this paper we investigate linguistic and attentional state factors that contribute to coherence among utterances within a discourse segmentthese factors contribute to the difference in coherence 
between the following two discourse segments2 discourse is intuitively more coherent than discourse this difference may be seen to arise from different degrees of continuity in what the discourse is aboutdiscourse centers around a single individual describing various actions he took and his reactions to themin contrast discourse seems to flip back and forth among several different entitiesmore specifically the initial utterance in each segment could begin a segment about an individual named john or one about john favorite music store or one about the fact that john wants to buy a pianowhereas discourse is clearly about john discourse has no single clear center of attentionutterance seems to be about the storeif a reader inferred that utterance was about john then that reader would perceive a change in the entity which the discourse seems to be about in going from to on the other hand if the reader took to be about the store then in going to there is no changein either case in utterance john seems to be central requiring a shift from utterance while the store becomes central again in utterance requiring yet another shiftthis changing of aboutness makes discourse less coherent than discourse discourses and convey the same information but in different waysthey differ not in content or what is said but in expression or how it is saidthe variation in aboutness they exhibit arises from different choices of the way in which they express the same propositional contentthe differences can only be explained however by looking beyond the surface form of the utterances in the discourse different types of referring expressions and different syntactic forms make different inference demands on a hearer or readerthese differences in inference load underlie certain differences 2 this example and the others in this paper are singlespeaker textshowever centering also applies to dialogue and multiparty conversationsissues of the interaction between turntaking and changes in centering status remain to be investigated in coherencethe model of local attentional state described in this paper provides a basis for explaining these differencesthus the focus of our investigation is on interactions among choice of referring expression attentional state the inferences required to determine the interpretation of an utterance in a discourse segment and coherencepronouns and definite descriptions are not equivalent with respect to their effect on coherencewe conjecture that this is so because they engender different inferences on the part of a hearer or readerin the most pronounced cases the wrong choice will mislead a hearer and force backtracking to a correct interpretationthe following variations of a discourse sequence illustrate this problem and provide additional evidence for our conjectureby using a pronoun to refer to tony in utterance the speaker may confuse the hearerthrough utterance terry has been the center of attention and hence is the most likely referent of quothequot in utterance it is only when one gets to the word quotsickquot that it is clear that it must be tony and not terry who is sick and hence that the pronoun in utterance refers to tony not terrya much more natural sequence results if quottonyquot is used as the sequence illustrateswe conjecture that the form of expression in a discourse substantially affects the resource demands made upon a hearer in discourse processing and through this influences the perceived coherence of the discourseit is well known from the study of complexity theory that 
the manner in which a class of problems is represented can significantly affect the time or space resources required by any procedure that solves the problemhere too we conjecture that the manner ie linguistic form in which a discourse represents a particular propositional content can affect the resources required by any procedure that processes that discoursewe use the phrase inference load placed upon the hearer to refer to the resources required to extract information from a discourse because of particular choices of linguistic expression used in the discoursewe conjecture that one psychological reflex of this inference load is a difference in perceived coherence among discourses that express the same propositional content using different linguistic formsone of the tasks a hearer must perform in processing a discourse is to identify the referents of noun phrases in the discourseit is commonly accepted and is a hypothesis under which our work on centering proceeds that a hearer determination of noun phrase reference involves some process of inferencehence a particular claim of centering theory is that the resource demands of this inference process are affected by the form of expression of the noun phrasein section 7 we discuss the effect on perceived coherence of the use of pronouns and definite descriptions by relating different choices to the inferences they require the hearer or reader to makewe use the term centers of an utterance to refer to those entities serving to link that utterance to other utterances in the discourse segment that contains itit is an utterance and not a sentence in isolation that has centersthe same sentence uttered in different discourse situations may have different centerscenters are thus discourse constructsfurthermore centers are semantic objects not words phrases or syntactic formseach utterance you in a discourse segment is assigned a set of forwardlooking centers cf each utterance other than the segment initial utterance is assigned a single backwardlooking center cb to simplify notation when the relevant discourse segment is clear we will drop the associated ds and use cb and cf the backwardlooking center of utterance un1 connects with one of the forwardlooking centers of utterance youthe connection between the backwardlooking center of utterance uni and the forwardlooking centers of utterance un may be of several typesto describe these types we need to introduce two new relations realizes and directly realizes that relate centers to linguistic expressionswe will say that you directly realizes c if you is an utterance of some phrase for which c is the semantic interpretationrealizes is a generalization of directly realizesthis generalization is important for capturing certain regularities in the use of definite descriptions and pronounsthe precise definition of depends on the semantic theory one adoptsone feature that distinguishes centering from other treatments of related discourse phenomena is that the realization relation combines syntactic semantic discourse and intentional factorsthat is the centers of an utterance in general and the backwardlooking center specifically are determined on the basis of a combination of properties of the utterance the discourse segment in which it occurs and various aspects of the cognitive state of the participants of that discoursethus for a semantic theory to support centering it must provide an adequate basis for computing the realization relationfor example np directly realizes c may hold in cases where np is a 
definite description and c is its denotation its valuefree interpretation or an object related to it by quotspeaker referencequot more importantly when np is a pronoun the principles that determine the c for which it is the case that np directly realizes c do not derive exclusively from syntactic semantic or pragmatic factorsthey are principles that must be elicited from the study of discourse itselfan initial formulation of some such principles is given in section 86 the forwardlooking centers of un depend only on the expressions that constitute that utterance they are not constrained by features of any previous utterance in the segmentthe elements of cf are partially ordered to reflect relative prominence in youin section 5 we discuss a number of factors that may affect the ordering on the elements of cfthe more highly ranked an element of cf the more likely it is to be cb the most highly ranked element of cf that is realized in uni is the cb because cf is only partially ordered some elements may from cf information alone be equally likely to be cb in such cases additional criteria are needed for deciding which single entity is the cbsome recent psycholinguistic evidence suggests that the syntactic role in uni may determine this choice in the remainder of the paper we will use a notation such that the elements of cf are ranked in the order in which they are listedin particular for presentational 4 you need not be a full clausewe use you here to stress again that it is the utterance not the string of words5 in the original manuscript we defined realize in terms of situation semantics and said the relation held quotif either c is an element of the situation described by the utterance you or c is directly realized by some subpart of youquot we discuss this further in section 76 in the examples in this paper we will be concerned with the realization relationship that holds between a center and a singular definite noun phrase ie cases where an np directly realizes a center c several extensions to the theory presented here are needed to handle plural quantified noun phrases and indefinitesit is also important to note that not all noun phrases in an utterance contribute centers to cf and not only noun phrases do somore generally events and other entities that are more often directly realized by verb phrases can also be centers whereas negated noun phrases typically do not contribute centers the study of these issues is however beyond the scope of this paper7 to simplify the presentation in the remainder of this paper we will assume in most of the discussion that there is a total order with strict ordering between any two elements at those places where the partial ordering makes a significant difference we will discuss that purposes we will use the following schematic to refer to the centers of utterances in a sequence for you are coo a cf a ek for some k for und_i cb realizes them and for all j j object otherthe effect of factors such as word order clausal subordination and lexical semantics as well as the interaction among these factors are areas of active investigation section 9 again provides references to such workin summary these examples provide support for the claim that there is only a single cb that grammatical role affects an entity being more highly ranked in cf and that lowerranked elements of the cf cannot be pronominalized unless higherranked ones arekameyama was the first to argue that grammatical role rather than thematic role which sidner used affected the cf 
rankingpsycholinguistic research since 1986 supports the claims that there is a single cb and that grammatical role plays a determining role in identifying the cbit furthermore suggests that neither thematic role nor surface position is a determinant of the cbin contrast both grammatical role and surface position were shown to affect the cf orderingalthough there are as yet no psycholinguistic results related to the effect of pronominalization on determining cb crosslinguistic work argues that it plays such a rolesection 9 lists several papers appearing after grosz joshi and weinstein that investigate factors affecting the cf orderingthe basic constraint on center realization is given by rule 1 which is stated in terms of the definitions and schematic in section 3rule 1 if any element of cf is realized by a pronoun in uni then the cb must be realized by a pronoun alsoin particular this constraint stipulates that no element in an utterance can be realized as a pronoun unless the backwardlooking center of the utterance is realized as a pronoun alsorule 1 represents one function of pronominal reference the use of a pronoun to realize the cb signals the hearer that the speaker is continuing to talk about the same thingnote that rule 1 does not preclude using pronouns for other entities so long as the cb is realized with a pronounpsychological research and crosslinguistic research have validated that the cb is preferentially realized by a pronoun in english and by equivalent forms in other languagesthe basic constraint on center movement is given by rule 2sequences of continuation are preferred over sequences of retaining and sequences of retaining are to be preferred over sequences of shiftingin particular a pair continuations across un and across un1 represented as cont and cont respectively is preferred over a pair of retentions ret and retthe case is analogous for pair of retentions and a pair of shiftsrule 2 reflects our intuition that continuation of the center and the use of retentions when possible to produce smooth transitions to a new center provides a basis for local coherencein a locally coherent discourse segment shifts are followed by a sequence of continuations characterizing another stretch of locally coherent discoursefrequent shifting leads to a lack of local coherence as was illustrated by the contrast between discourse and discourse in section 2thus rule 2 provides a constraint on speakers and on naturallanguage generation systemsthey should plan ahead to minimize the number of shiftsthis rule does not have the same direct implementation for interpretation systems rather it predicts that certain sequences produce a higher inference load than othersto empirically test the claim made by rule 2 requires examination of differences in inference load of alternative multiutterance sequences that differentially realize the same contentalthough several crosslinguistic studies have investigated rule 2 there are as yet no psycholinguistic results empirically validating itthe two centering rules along with the partial ordering on the forwardlooking centers described in section 5 constitute the basic framework of center managementthese rules can explain a range of variations in local coherencea violation of rule 1 occurs if a pronoun is not used for the backwardlooking center and some other entity is realized by a pronounsuch a violation occurs in the following sequence presumed to be in a longer segment that is currently centered on john and in section 5 the violation of rule 1 leads to 
the incoherence of the sequencethe only possible interpretation is that the quotjohnquot referred to in is a second person named quotjohnquot not the one referred to in the preceding utterances in however even under this interpretation the sequence is very oddthe next example illustrates that this effect is 16 these rules and constraints have also been used by others as the basis for pronoun resolution algorithms based on centeringthe earliest such attempt used the uniqueness and locality of cb constraints and ranked the cf by grammatical role it employed a variant of rule 2 in which the stated preferences on transitions were restricted to transitions between individual pairs of utterances and used to decide between possible interpretations of pronounssection 9 provides references to other work on centering algorithms independent of the grammatical position of the cb and also demonstrates that rule 1 operates independently of the type of centering transitionwithout utterance this sequence like the sequence in is unacceptable unless it is possible to consider the introduction of a second person named quotjohnquot the intervening utterance here provides for a shift in center from john to mike making the full sequence coherentit is important to notice that rule 1 constrains the realization of the most highly ranked element of the cf that is realized in un1 given that pronominalization is usedobviously any entities realized in un that are not realized in uni including the cb as well as the highest ranked element of cf wo do not affect the applicability of rule 1likewise if no pronouns are used then rule 1 is not applicabletwo particular ways in which such situations may hold have been noticed in previous researcheach leads to a different type of inference load on the hearer both of which we believe relate to rule 1 however neither constitutes a violation of rule 1the resulting discourses are coherent but the determination of local coherence or the detection of a global shift requires additional inferencesthe first case concerns realization of the cb by a nonpronominal expressionrule 1 does not preclude using a proper name or definite description for the cb if there are no pronouns in an utterancehowever it appears that such uses are best when the full definite noun phrases that realize the centers do more than just referthey convey some additional information ie lead the hearer or reader to draw additional inferencesthe hearer or reader not only infers that the cb has not changed even though no pronoun has been used but also recognizes that the description holds of the old cbsequences and are typical casesthe second case concerns the use of a pronoun to realize an entity not in the cf such uses are strongly constrainedthe particular cases that have been identified involve instances where attention is shifted globally back to a previously centered entity in such cases additional inferences are required to determine that the pronoun does not refer to a member of the current forwardlooking centers and to identify the context back to which attention is shiftingfurther investigation is required to determine the linguistic cues and intentional information that are required to enable such shifts while preserving coherence as well as the effect on inference loada third complication arises in the application of rule 1 in sequences in which the cb of an utterance is realized but not directly realized in that utterancethis situation typically holds when an utterance directly realizes an entity implicitly 
focused by an element of the cf of the previous utterancefor instance it arises in utterances containing noun phrases that express functional relations whose arguments have been directly realized in previous utterances as occurs in the sequence athe house appeared to have been burgled bthe door was ajar c the furniture was in disarrayin this segment the house referred to in is an element of the cfthis house is the cb it is realized but not directly realized in because the house is the cb the cf includes it as well as the door that is directly realized in the utterancethe cb is thus again quothousequot we assume here that the door ranks above the house in cf for example if is followed by a sentence with it in the subject position then it is more likely to refer to the doorthis is consistent with the ranking of the door ahead of the house in cf however continuity of the house as a potential cb for is reflected in the discourse segment being interpreted to be quotaboutquot the house and being interpreted in the same way as with respect to the housein grosz joshi and weinstein we did not explore this issue further the general issue of the roles of functional dependence and implicit focus in centering remain openthe use of different types of transitions following the rankings in rule 2 are illustrated by the discourse belowutterance establishes john both as the cb and as the most highly ranked cfin utterance john continues as the cb but in utterance he is only retained mike has become the most highly ranked element of the cffinally in utterance the backwardlooking center shifts to being mikerule 1 is satisfied throughout rule 1 depends only on the ordering of elements of cf and not on the notions of retaining and continuationdifferent semantic theories make different commitments with respect to the completeness or definiteness required of an interpretationbecause the information needed to compute a unique interpretation for an utterance is not always available at the time the utterance occurs in the discourse the ways in which a theory treats partial information affects its computational tractability as the basis for discourse interpretationit is not merely that utterances themselves contain only partial information but that it may only be subsequent to an utterance that sufficient information is available for computing a unique interpretationno matter how rich a model of context one has it will not be possible to fully constrain the interpretation of an utterance when it occursthis is especially true for definite noun phrase interpretationfor example several interpretations are possible for the noun phrase quotthe vicepresident of the united statesquot in the utterance the vicepresident of the united states is also president of the senateone interpretation namely the individual who is currently vicepresident provides the appropriate basis for the interpretation of quothequot in the subsequent utterance given in however a different interpretation one which retains some descriptive content provides the appropriate basis for an interpretation of the pronoun quothequot in the slightly different subsequent utterance historically he is the president key person in negotiations with congressa semantic theory that forces a unique interpretation of utterance will require that a computational theory or system either manage several alternatives simultaneously or provide some mechanism for retracting one choice and trying another lateron the other hand a theory that allows for a partially specified 
interpretation must provide for refining that interpretation on the basis of subsequent utterancesadditional utterances may provide further constraints on an interpretation and sequences of utterances may not be coherent if they do not allow for a consistent choice of interpretationfor example the utterance in is perfectly fine after but yields an incoherent sequence after 2 21 these examples were first written in 1986 when george bush was vicepresidentthey remain useful for illustrating the original points if the time of original writing is taken into accountas we discuss later taken as spoken now they illustrate new points as ambassador to china he handled many tricky negotiations so he does well in this jobto summarize given that one purpose of discourse is to increase the information shared by speaker and hearer it is not surprising that individual utterances convey only partial informationhowever the lack of complete information at the time of processing an utterance means that a unique interpretation cannot be definitely determinedin constructing a computational model we are then left with three choices compute all possible interpretations and filter out possibilities as more information is received choose a most likely interpretation and provide for quotbacktrackingquot and computing others later compute a partial interpretationwe conjecture that this third choice is the appropriate one for noun phrase interpretationcentering theory and the centering framework rely on a certain picture of the ways in which utterances function to convey information about the worldone role of a semantic theory is to give substance to such a pictureat the time grosz joshi and weinstein was written it struck us that situation semantics provided a particularly convenient setting in which to frame our own theory of discourse phenomena though our account relied only on general features of this approach and not on details of the theory as then articulatedthe two most important features of situation semantics from the standpoint of the theory of discourse interpretation we wished to develop were that it allows for the partial interpretation of utterances as they occur in discourse and that it provides a framework in which a rich theory of the dependence of interpretation on abstract features of context may be elaboratedthere is now a large situation semantics literature that contains many extensions and refinements of the theory to which we refer the interested readerthe original book may be consulted for an account of the distinction between valuefree and valueloaded interpretations used belowin the discussion and examples in previous sections the cb and the elements of cf have all been the denotations of various noun phrases in an utterancethe actual situation is more complicated even if we ignore for the moment quantifiers and other syntactic complexities as well as cases in which the center is functionally dependent on or otherwise implicitly focused by an element of the cf of the previous utterance a singular definite noun phrase may contribute a number of different interpretations to cfin particular not only the valuefree interpretation but also various loadings may be contributedfor example in the utterance quotthe vicepresident of the united states is also president of the senatequot the noun phrase quotthe vicepresidentquot contributes both a valueloaded and a valuefree interpretationthe valuefree interpretation is needed in the sequence whereas the valueloaded interpretation is needed in the cb and 
the cb are both directly realized by the anaphoric element quothequot but cb is the valuefree interpretation of the noun phrase quotthe vicepresidentquot whereas cb is the valueloaded interpretation that this is so is demonstrated by the fact that is true in 1994 whereas is notcentering accommodates these differences by allowing the noun phrase quotthe vicepresident of the united statesquot potentially to contribute both its valuefree interpretation and its valueloading at the world type to cfcb is then the valuefree interpretation and cb is the valueloaded one george bush but now 1995 al gorein each sequence the utterance underdetermines what element to add to cfthis underdetermination may continue in a subsequent utterance with the pronounfor example that would be the case if the introductory adverbials were left off the utteranceswe conjecture that the correct approach to take in these cases is to add the valuefree interpretation to cf and then load it for the interpretation of subsequent utterances if this is necessarythis conjecture derives from a belief that this approach will most effectively limit the inferences requiredthese loading situations thus constitute a component of the centering constituent of the discourse situationit remains an open question how long to retain these loading situations although those corresponding to elements of cf that are not carried forward can obviously be droppedit is possible for an utterance to prefer either a valuefree or valueloaded interpretation but not force itfor example the second utterance in the following sequence prefers a vf interpretation but allows for the vl interpretation that is needed in the third utterancein a similar way the second utterance in the following sequencen prefers the vl interpretation but allows for the vfthe third utterance requires the vf interpretationin these examples both valuefree and valueloaded interpretations are shown to stem from the same full definite noun phrasethere appear to be strong constraints on the kinds of transitions that are allowed howeverin particular if a given utterance forces either the vf or the vl interpretation then only this interpretation is possible in the immediately subsequent utterancehowever if some utterance only prefers one interpretation but allows the other then the subsequent utterance may pick up on either onefor example the sequence in which quothequot may be interpreted either vf or vl may be followed by either or however if we change to force the valueloaded interpretation as in then only the valueloaded interpretation is possiblesimilarly if is changed to force the valuefree interpretation as in then only the valuefree interpretation is possiblespeaker intentions may also enter into the determination of which entities are in the cfthe referential uses of descriptions of which donnellan gives examples demonstrate cases in which the quotreferential intentionsquot of the speaker in his use of the description play a role in determining cbfor example consider the following sequence in these examples the speaker uses a description to refer to something other than the semantic denotation of that description ie the unique thing that satisfies the description there are several alternative explanations of such examples involving various accounts of speaker intentions mutual belief and the likea complete discussion of these issues is beyond the scope of this paperthe importance of these cases resides in showing that cf may include more than one entity that is realized by a single 
np in youin this case the noun phrase quother husbandquot contributes two individuals the husband and the lover to cf and cfthis can be seen by observing that both discourses seem equally appropriate and that the backwardlooking centers of and are respectively the husband and the lover which are realized by their anaphoric elementsthese examples introduce a number of research issues concerning the representation and management of the cb and cf discourse entitiesthe account given here depends on a semantic theory that permits minimal commitment in interpretationsthe open question is which constraints on centers are introduced at which points during processingwe must leave this as a topic for future workthis theory can be contrasted with two previous research efforts that spurred this work sidner original work on immediate focusing and pronouns and joshi and weinstein subsequent work on centering and inferencesthe centering theory discussed here is quite close to sidner original theory both in attacking local discourse issues and in the general outline of approachhowever it differs in several detailsin sidner theory each utterance provides an immediate discourse focus an actor focus and a set of potential focithe discourse and actor foci may coincide but need nother potential foci are roughly analogous to our cfthe cb for an utterance sometimes coincides with her actor focus and sometimes with her discourse focusshe distinguishes these two to handle various cases of multiple pronounshowever as we have shown utterances do not have multiple cbsfurthermore utterances can have more than two pronouns so merely adding a second kind of immediate focus is of limited usethe difference between these two theories can be seen from the following example on sidner account carl is the actor focus after and jeff is the discourse focusbecause the actor focus is preferred as the referent of pronominal expressions carl is the leading candidate for the entity referred to by he in it is difficult to rule this case out without invoking fairly special domainspecific ruleson our account jeff is the cb at and there is no problemthe type of example sidner was concerned about would occur if utterance were replaced by quothe thinks he studies too muchquot however the centering rules would still hold in this casethey provide no constraints on additional pronouns so long as the highest ranked cf is realized by a pronounhowever the rules are incomplete in particular as given they do not specify which pronoun in a multipronoun utterance refers to the cbthe center management rules are based solely on the cb and the highest ranked member of the cfas a result while there are cases of multiple pronouns for which the theory makes incomplete predictions having both an actor and a discourse focus will not handle these cases in generaljoshi and kuhn and joshi and weinstein presented a preliminary report on their research regarding the connection between the computational complexity of the inferences required to process a discourse and the coherence of that discourse as assessed by measures that invoke centering phenomenahowever their basic definitions conflate the centers of an utterance with the linguistic expressions that realize those centersin some of their examples it is unclear whether the shift in center or the particular expression used to realize the center is responsible for differences in coherence and inference loadour present work has clarified these differences while maintaining joshi and weinstein basic focus on the 
interaction between inference load and center management. since grosz, joshi, and weinstein was first circulated, a number of researchers have tested and developed aspects of the theory presented here. this follow-on research can be roughly grouped in a few main areas. we want to thank breck baldwin, felicia hurewitz, andy kehler, karen lochbaum, christine nakatani, ellen prince, and lyn walker for their valuable comments, which helped us improve both the content and the presentation of our paper. we are also grateful to carolyn elken for helping us keep track of the various drafts of this paper and for providing valuable editorial help. partial support for the first author was provided by grants nsf iri9009018 and iri9308173; the second author was partially supported by the aro grant daal03890031 and arpa grant n00014901863
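The representation issue raised earlier in this entry — adding a noun phrase's value-free interpretation to Cf and loading it with a value only when a later utterance requires it — can be made concrete with a small sketch. This is not a formalization from the paper; the class, method, and dictionary names below are hypothetical and serve only to illustrate the minimal-commitment idea.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CenterEntity:
    """A forward-looking center contributed by a definite noun phrase.

    The entity starts out value-free: only the description is committed to.
    A later utterance may require the value-loaded reading, at which point the
    description is resolved against the circumstances of evaluation and cached.
    """
    description: str                    # e.g. "the vice-president of the United States"
    value: Optional[str] = None         # None while the reading is still value-free

    def load(self, circumstances: dict) -> str:
        """Commit to the value-loaded reading relative to the given circumstances."""
        if self.value is None:
            self.value = circumstances[self.description]
        return self.value

# Toy circumstances of evaluation: who satisfies the description at a given time.
world_1994 = {"the vice-president of the United States": "Al Gore"}

cf = [CenterEntity("the vice-president of the United States")]  # added to Cf value-free
print(cf[0].load(world_1994))                                   # loaded only when needed -> Al Gore
```

Delaying the call to load mirrors the conjecture above that the value-free interpretation is added to Cf first and loaded only for the interpretation of subsequent utterances if necessary, which is intended to limit the inferences required.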
J95-2003
centering: a framework for modeling the local coherence of discourse. this paper concerns relationships among focus of attention, choice of referring expression, and perceived coherence of utterances within a discourse segment. it presents a framework and initial theory of centering intended to model the local component of attentional state. the paper examines interactions between local coherence and choice of referring expressions; it argues that differences in coherence correspond in part to the inference demands made by different types of referring expressions given a particular attentional state. it demonstrates that the attentional state properties modeled by centering can account for these differences. our centering model uses a ranking of discourse entities realized in particular sentences and computes transitions between adjacent sentences to provide insight into the felicity of texts. our centering theory postulates strong links between the center of attention in comprehension of adjacent sentences and syntactic position and form of reference. our centering theory is an entity-based theory of local coherence which claims that certain entities mentioned in an utterance are more central than others and that this property constrains a speaker's use of certain referring expressions. our centering theory is an influential framework for modelling entity coherence in computational linguistics in the last two decades
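The summary above notes that the model ranks the discourse entities realized in each utterance and computes transitions between adjacent utterances. Below is a minimal sketch of that computation using the three transition types discussed in the paper (continuing, retaining, shifting); it assumes the Cf lists are already ranked, and the function names and the toy example are illustrative rather than taken from the paper.

```python
def backward_looking_center(cf_current, cf_previous):
    """Cb(Un): the highest-ranked element of Cf(Un-1) that is realized in Un."""
    for entity in cf_previous:            # Cf(Un-1) is assumed ranked, highest first
        if entity in cf_current:
            return entity
    return None

def transition(cf_current, cf_previous, cb_previous=None):
    """Classify the move from Un-1 to Un as continuing, retaining, or shifting."""
    cb = backward_looking_center(cf_current, cf_previous)
    cp = cf_current[0] if cf_current else None    # preferred center: highest-ranked Cf(Un)
    if cb is not None and (cb_previous is None or cb == cb_previous):
        return "continuing" if cb == cp else "retaining"
    return "shifting"

# Toy example: "John drove to the store."  /  "He bought a loaf of bread."
cf_u1 = ["John", "the store"]                 # ranked Cf of the first utterance
cf_u2 = ["John", "a loaf of bread"]           # "he" realizes John in the second utterance
print(transition(cf_u2, cf_u1))               # -> continuing
```

In this framing, Cb(Un) is the highest-ranked element of Cf(Un-1) realized in Un, and the transition is a continuation only when that element is also the most highly ranked entity of the current utterance.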
transformationbasederrordriven learning and natural language processing a case study in partofspeech tagging recently there has been a rebirth of empiricism in the field of natural language processing manual encoding of linguistic information is being challenged by automated corpusbased learning as a method of providing a natural language processing system with linguistic knowledge although corpusbased approaches have been successful in many different areas of natural language processing it is often the case that these methods capture the linguistic information they are modelling indirectly in large opaque tables of statistics this can make it difficult to analyze understand and improve the ability of these approaches to model underlying linguistic behavior in this paper we will describe a simple rulebased approach to automated learning of linguistic knowledge this approach has been shown for a number of tasks to capture information in a clearer and more direct fashion without a compromise in performance we present a detailed case study of this learning method applied to partofspeech tagging recently there has been a rebirth of empiricism in the field of natural language processingmanual encoding of linguistic information is being challenged by automated corpusbased learning as a method of providing a natural language processing system with linguistic knowledgealthough corpusbased approaches have been successful in many different areas of natural language processing it is often the case that these methods capture the linguistic information they are modelling indirectly in large opaque tables of statisticsthis can make it difficult to analyze understand and improve the ability of these approaches to model underlying linguistic behaviorin this paper we will describe a simple rulebased approach to automated learning of linguistic knowledgethis approach has been shown for a number of tasks to capture information in a clearer and more direct fashion without a compromise in performancewe present a detailed case study of this learning method applied to partofspeech taggingit has recently become clear that automatically extracting linguistic information from a sample text corpus can be an extremely powerful method of overcoming the linguistic knowledge acquisition bottleneck inhibiting the creation of robust and accurate natural language processing systemsa number of partofspeech taggers are readily available and widely used all trained and retrainable on text corpora endemic structural ambiguity which can lead to such difficulties as trying to cope with the many thousands of possible parses that a grammar can assign to a sentence can be greatly reduced by adding empirically derived probabilities to grammar rules and by computing statistical measures of lexical association wordsense disambiguation a problem that once seemed out of reach for systems without a great deal of handcrafted linguistic and world knowledge can now in some cases be done with high accuracy when all information is derived automatically from corpora an effort has recently been undertaken to create automated machine translation systems in which the linguistic information needed for translation is extracted automatically from aligned corpora these are just a few of the many recent applications of corpusbased techniques in natural language processingalong with great research advances the infrastructure is in place for this line of research to grow even stronger with online corpora the grist of the corpusbased natural language 
processing grindstone getting bigger and better and becoming more readily availablethere are a number of efforts worldwide to manually annotate large corpora with linguistic information including parts of speech phrase structure and predicateargument structure a vast amount of online text is now available and much more will become available in the futureuseful tools such as large aligned corpora and semantic word hierarchies have also recently become availablecorpusbased methods are often able to succeed while ignoring the true complexities of language banking on the fact that complex linguistic phenomena can often be indirectly observed through simple epiphenomenafor example one could accurately assign a partofspeech tag to the word race in without any reference to phrase structure or constituent movement one would only have to realize that usually a word one or two words to the right of a modal is a verb and not a nounan exception to this generalization arises when the word is also one word to the right of a determinerit is an exciting discovery that simple stochastic ngram taggers can obtain very high rates of tagging accuracy simply by observing fixedlength word sequences without recourse to the underlying linguistic structurehowever in order to make progress in corpusbased natural language processing we must become better aware of just what cues to linguistic structure are being captured and where these approximations to the true underlying phenomena failwith many of the current corpusbased approaches to natural language processing this is a nearly impossible taskconsider the partofspeech tagging example abovein a stochastic ngram tagger the information about words that follow modals would be hidden deeply in the thousands or tens of thousands of contextual probabilities and the result of multiplying different combinations of these probabilities togetherbelow we describe a new approach to corpusbased natural language processing called transformationbased errordriven learningthis algorithm has been applied to a number of natural language problems including partofspeech tagging prepositional phrase attachment disambiguation and syntactic parsing we have also recently begun exploring the use of this technique for lettertosound generation and for building pronunciation networks for speech recognitionin this approach the learned linguistic information is represented in a concise and easily understood formthis property should make transformationbased learning a useful tool for further exploring linguistic modeling and attempting to discover ways of more tightly coupling the underlying linguistic systems and our approximating modelsfigure 1 illustrates how transformationbased errordriven learning worksfirst unannotated text is passed through an initialstate annotatorthe initialstate annotator can range in complexity from assigning random structure to assigning the output of a sophisticated manually created annotatorin partofspeech tagging various initialstate annotators have been used including the output of a stochastic ngram tagger labelling all words with their most likely tag as indicated in the training corpus and naively labelling all words as nounsfor syntactic parsing we have explored initialstate annotations ranging from the output of a sophisticated parser to random tree structure with random nonterminal labelsonce text has been passed through the initialstate annotator it is then compared to the trutha manually annotated corpus is used as our reference for truthan ordered list of 
transformations is learned that can be applied to the output of the initialstate annotator to make it better resemble the truththere are two components to a transformation a rewrite rule and a triggering environmentan example of a rewrite rule for partofspeech tagging is and an example of a triggering environment is the preceding word is a determinertaken together the transformation with this rewrite rule and triggering environment when applied to the word can would correctly change the mistagged where a b and c can be either terminals or nonterminalsone possible set of triggering environments is any combination of words partofspeech tags and nonterminal labels within and adjacent to the subtreeusing this rewrite rule and the triggering environment a the the bracketing would become ate in all of the applications we have examined to date the following greedy search is applied for deriving a list of transformations at each iteration of learning the transformation is found whose application results in the best score according to the objective function being used that transformation is then added to the ordered transformation list and the training corpus is updated by applying the learned transformationlearning continues until no transformation can be found whose application results in an improvement to the annotated corpusother more sophisticated search techniques could be used such as simulated annealing or learning with a lookahead window but we have not yet explored these alternativesfigure 2 shows an example of learning transformationsin this example we assume there are only four possible transformations ti through t4 and that the objective function is the total number of errorsthe unannotated training corpus is processed by the initialstate annotator and this results in an annotated corpus with 5100 errors determined by comparing the output of the initialstate annotator with the manually derived annotations for this corpusnext we apply each of the possible transformations in turn and score the resulting annotated corpusin this example applying transformation t2 results in the largest reduction of errors so 12 is learned as the first transformationt2 is then applied to the entire corpus and learning continuesat this stage of learning transformation 13 results in the largest reduction of error so it is learned as the second transformationafter applying the initialstate annotator followed by t2 and then t3 no further reduction in errors can be obtained from applying any of the transformations so learning stopsto annotate fresh text this text is first annotated by the initialstate annotator followed by the application of transformation t2 and then by the application of t3to define a specific application of transformationbased learning one must specify the following in cases where the application of a particular transformation in one environment could affect its application in another environment two additional parameters must be specified the order in which transformations are applied to a corpus and whether a transformation is applied immediately or only after the entire corpus has been examined for triggering environmentsfor example take the sequenceand the transformation if the effect of the application of a transformation is not written out until the entire file has been processed for that one transformation then regardless of the order of processing the output will be abbbbb since the triggering environment of a transformation is always checked before that transformation is applied to 
any surrounding objects in the corpusif the effect of a transformation is recorded immediately then processing the string left to right would result in ababab whereas processing right to left would result in abbbbbthe technique employed by the learner is somewhat similar to that used in decision trees a decision tree is trained on a set of preclassified entities and outputs a set of questions that can be asked about an entity to determine its proper classificationdecision trees are built by finding the question whose resulting partition is the purest2 splitting the training data according to that question and then recursively reapplying this procedure on each resulting subsetwe first show that the set of classifications that can be provided via decision trees is a proper subset of those that can be provided via transformation lists given the same set of primitive questionswe then give some practical differences between the two learning methodswe prove here that for a fixed set of primitive queries any binary decision tree can be converted into a transformation listextending the proof beyond binary trees is straightforwardgiven the following primitive decision tree where the classification is a if the answer to the query x is yes and the classification is b if the answer is no brill transformationbased errordriven learning this tree can be converted into the following transformation list assume that two decision trees t1 and t2 have corresponding transformation lists l1 and l2assume that the arbitrary label names chosen in constructing ll are not used in l2 and that those in l2 are not used in l1given a new decision tree t3 constructed from t1 and t2 as follows x we construct a new transformation list l3assume the first transformation in l1 is label with s and the first transformation in l2 is label with squot the first three transformations in l3 will then be followed by all of the rules in l1 other than the first rule followed by all of the rules in l2 other than the first rulethe resulting transformation list will first label an item as s if x is true or as squot if x is falsenext the tranformations from l1 will be applied if x is true since s is the initialstate label for l1if x is false the transformations from l2 will be applied because squot is the initialstate label for l20 we show here that there exist transformation lists for which no equivalent decision trees exist for a fixed set of primitive queriesthe following classification problem is one examplegiven a sequence of characters classify a character based on whether the position index of a character is divisible by 4 querying only using a context of two characters to the left of the character being classifiedassuming transformations are applied left to right on the sequence the above classification problem can be solved for sequences of arbitrary length if the effect of a transformation is written out immediately or for sequences up to any prespecified length if a transformation is carried out only after all triggering environments in the corpus are checkedwe present the proof for the former casegiven the input sequence the underlined characters should be classified as true because their indices are 0 4 and 8to see why a decision tree could not perform this classification regardless of order of classification note that for the two characters before both a3 and a4 both the characters and their classifications are the same although these two characters should be classified differentlybelow is a transformation list for performing 
this classificationonce again we assume transformations are applied left to right and that the result of a transformation is written out immediately so that the result of applying transformation x to character a will always be known when applying transformation x to aiithe extra power of transformation lists comes from the fact that intermediate results from the classification of one object are reflected in the current label of that object thereby making this intermediate information available for use in classifying other objectsthis is not the case for decision trees where the outcome of questions asked is saved implicitly by the current location within the treethere are a number of practical differences between transformationbased errordriven learning and learning decision treesone difference is that when training a decision tree each time the depth of the tree is increased the average amount of training material available per node at that new depth is halved in transformationbased learning the entire training corpus is used for finding all transformationstherefore this method is not subject to the sparse data problems that arise as the depth of the decision tree being learned increasestransformations are ordered with later transformations being dependent upon the outcome of applying earlier transformationsthis allows intermediate results in 550 brill transformationbased errordriven learning classifying one object to be available in classifying other objectsfor instance whether the previous word is tagged as toinfinitival or topreposition may be a good cue for determining the part of speech of a wordif initially the word to is not reliably tagged everywhere in the corpus with its proper tag then this cue will be unreliablethe transformationbased learner will delay positing a transformation triggered by the tag of the word to until other transformations have resulted in a more reliable tagging of this word in the corpusfor a decision tree to take advantage of this information any word whose outcome is dependent upon the tagging of to would need the entire decision tree structure for the proper classification of each occurrence of to built into its decision tree pathif the classification of to were dependent upon the classification of yet another word this would have to be built into the decision tree as wellunlike decision trees in transformationbased learning intermediate classification results are available and can be used as classification progresseseven if decision trees are applied to a corpus in a lefttoright fashion they are allowed only one pass in which to properly classifysince a transformation list is a processor and not a classifier it can readily be used as a postprocessor to any annotation systemin addition to annotating from scratch rules can be learned to improve the performance of a mature annotation system by using the mature system as the initialstate annotatorthis can have the added advantage that the list of transformations learned using a mature annotation system as the initialstate annotator provides a readable description or classification of the errors the mature system makes thereby aiding in the refinement of that systemthe fact that it is a processor gives a transformationbased learner greater than the classifierbased decision treefor example in applying transformationbased learning to parsing a rule can apply any structural change to a treein tagging a rule such as change the tag of the current word to x and of the previous word to y if z holds can easily be 
handled in the processorbased system whereas it would be difficult to handle in a classification systemin transformationbased learning the objective function used in training is the same as that used for evaluation whenever this is feasiblein a decision tree using system accuracy as an objective function for training typically results in poor performance and some measure of node purity such as entropy reduction is used insteadthe direct correlation between rules and performance improvement in transformationbased learning can make the learned rules more readily interpretable than decision tree rules for increasing population purityin this section we describe the practical application of transformationbased learning to partofspeech taggingpartofspeech tagging is a good application to test the learner for several reasonsthere are a number of large tagged corpora available allowing for a variety of experiments to be runpartofspeech tagging is an active area of research a great deal of work has been done in this area over the past few years partofspeech tagging is also a very practical application with uses in many areas including speech recognition and generation machine translation parsing information retrieval and lexicographyinsofar as tagging can be seen as a prototypical problem in lexical ambiguity advances in partofspeech tagging could readily translate to progress in other areas of lexical and perhaps structural ambiguity such as wordsense disambiguation and prepositional phrase attachment disambiguationalso it is possible to cast a number of other useful problems as partofspeech tagging problems such as lettertosound translation and building pronunciation networks for speech recognitionrecently a method has been proposed for using partofspeech tagging techniques as a method for parsing with lexicalized grammars when automated partofspeech tagging was initially explored people manually engineered rules for tagging sometimes with the aid of a corpusas large corpora became available it became clear that simple markovmodel based stochastic taggers that were automatically trained could achieve high rates of tagging accuracy markovmodel based taggers assign to a sentence the tag sequence that maximizes probprobthese probabilities can be estimated directly from a manually tagged corpusthese stochastic taggers have a number of advantages over the manually built taggers including obviating the need for laborious manual rule construction and possibly capturing useful information that may not have been noticed by the human engineerhowever stochastic taggers have the disadvantage that linguistic information is captured only indirectly in large tables of statisticsalmost all recent work in developing automatically trained partofspeech taggers has been on further exploring markovmodel based tagging transformationbased part of speech tagging works as followsthe initialstate annotator assigns each word its most likely tag as indicated in the training corpusthe method used for initially tagging unknown words will be described in a later sectionan ordered list of transformations is then learned to improve tagging accuracy based on contextual cuesthese transformations alter the tagging of a word from x to y iff in taggers based on markov models the lexicon consists of probabilities of the somewhat counterintuitive but proper form pin the transformationbased tagger the lexicon is simply a list of all tags seen for a word in the training corpus with one tag labeled as the most likelybelow we show a 
lexical entry for the word half in the transformationbased tagger1 half cd dt jj nn pdt rb vb this entry lists the seven tags seen for half in the training corpus with nn marked as the most likelybelow are the lexical entries for half in a markov model tagger extracted from the same corpus it is difficult to make much sense of these entries in isolation they have to be viewed in the context of the many contextual probabilitiesfirst we will describe a nonlexicalized version of the tagger where transformation templates do not make reference to specific wordsin the nonlexicalized tagger the transformation templates we use are change tag a to tag b when where a b z and w are variables over the set of parts of speechto learn a transformation the learner in essence tries out every possible transformationquot and counts the number of tagging errors after each one is appliedafter all possible transformations have been tried the transformation that resulted in the greatest error reduction is chosenlearning stops when no transformations can be found whose application reduces errors beyond some prespecified thresholdin the experiments described below processing was done left to rightfor each transformation application all triggering environments are first found in the corpus and then the transformation triggered by each triggering environment is carried outthe search is datadriven so only a very small percentage of possible transformations really need be examinedin figure 3 we give pseudocode for the learning algorithm in the case where there is only one transformation template in each learning iteration the entire training corpus is examined once for every pair of tags x and y finding the best transformation whose rewrite changes tag x to tag yfor every word in the corpus whose environment matches the triggering environment if the word has tag x and x is the correct tag then making this transformation will result in an additional tagging error so we increment the number of errors caused when making the transformation given the partofspeech tag of the previous word if x is the current tag and y is the correct tag then the transformation will result in one less error so we increment the number of improvements caused when making the transformation given the partofspeech tag of the previous word in certain cases a significant increase in speed for training the transformationbased tagger can be obtained by indexing in the corpus where different transformations can and do applyfor a description of a fast indexbased training algorithm see ramshaw and marcus in figure 4 we list the first twenty transformations learned from training on the penn treebank wall street journal corpus 12 the first transformation states that a noun should be changed to a verb if 12 version 05 of the penn treebank was used in all experiments reported in this paper the previous tag is to as in toto conflictinnvb withthe second transformation fixes a tagging such as mightmd vanishivbpvbthe third fixes mightmd not replynnvbthe tenth transformation is for the token which is a separate token in the penn treebank is most frequently used as a possessive ending but after a personal pronoun it is a verb the transformations changing in to wdt are for tagging the word that to determine in which environments that is being used as a synonym of whichin general no relationships between words have been directly encoded in stochastic ngram taggersin the markov model typically used for stochastic tagging state transition probabilities express the 
likelihood of a tag immediately following n other tags and emit probabilities express the likelihood of a word given a tagmany useful relationships such as that between a word and the previous word or between a tag and the following word are not directly captured by markovmodel based taggersthe same is true of the nonlexicalized transformationbased tagger where transformation templates do not make reference to wordsto remedy this problem we extend the transformationbased tagger by adding contextual transformations that can make reference to words as well as partofspeech tagsthe transformation templates we add are change tag a to tag b when 8the current word is w the preceding word is w2 and the preceding tag is t where w and x are variables over all words in the training corpus and z and t are variables over all parts of speechbelow we list two lexicalized transformations that were learned training once again on the wall street journalchange the tag the penn treebank tagging style manual specifies that in the collocation as as the first as is tagged as an adverb and the second is tagged as a prepositionsince as is most frequently tagged as a preposition in the training corpus the initialstate tagger will mistag the phrase as tall as as the first lexicalized transformation corrects this mistaggingnote that a bigram tagger trained on our training set would not correctly tag the first occurrence of asalthough adverbs are more likely than prepositions to follow some verb form tags the fact that p is much greater than p and p is much greater than p lead to as being incorrectly tagged as a preposition by a stochastic taggera trigram tagger will correctly tag this collocation in some instances due to the fact that p is greater than p but the outcome will be highly dependent upon the context in which this collocation appearsthe second transformation arises from the fact that when a verb appears in a context such as we do nt eat or we did nt usually drink the verb is in base forma stochastic trigram tagger would have to capture this linguistic information indirectly from frequency counts of all trigrams of the form shown in figure 5 and from the fact that p is fairly highin weischedel et al results are given when training and testing a markovmodel based tagger on the penn treebank tagged wall street journal corpusthey cite results making the closed vocabulary assumption that all possible tags for all words in the test set are knownwhen training contextual probabilities on one million words an accuracy of 967 was achievedaccuracy dropped to 963 when contextual probabilities were trained on 64000 wordswe trained the transformationbased tagger on the same corpus making the same closedvocabulary assumptionwhen training contextual rules on 600000 words an accuracy of 972 was achieved on a separate 150000 word test setwhen the training set was reduced to 64000 words accuracy dropped to 967the transformationbased learner achieved better performance despite the fact that contextual information was captured in a small number of simple nonstochastic rules as opposed to 10000 contextual probabilities that were learned by the stochastic taggerthese results are summarized in table 1when training on 600000 words a total of 447 transformations were learnedhowever transformations toward the end of the list contribute very little to accuracy applying only the first 200 learned transformations to the test set achieves an accuracy of 970 applying the first 100 gives an accuracy of 968to match the 967 accuracy 
achieved by the stochastic tagger when it was trained on one million words only the first 82 transformations are neededto see whether lexicalized transformations were contributing to the transformationbased tagger accuracy rate we first trained the tagger using the nonlexical transformation template subset then ran exactly the same testaccuracy of that tagger was 970adding lexicalized transformations resulted in a 67 decrease in the error rate 16 we found it a bit surprising that the addition of lexicalized transformations did not result in a much greater improvement in performancewhen transformations are allowed to make reference to words and word pairs some relevant information is probably missed due to sparse datawe are currently exploring the possibility of incorporating word classes into the rulebased learner in hopes of overcoming this problemthe idea is quite simplegiven any source of word class information such as wordnet the learner is extended such that a rule is allowed to make reference to parts of speech words and word classes allowing for rules such as this approach has already been successfully applied to a system for prepositional phrase attachment disambiguation so far we have not addressed the problem of unknown wordsas stated above the initialstate annotator for tagging assigns all words their most likely tag as indicated in a training corpusbelow we show how a transformationbased approach can be taken for tagging unknown words by automatically learning cues to predict the most likely tag for words not seen in the training corpusif the most likely tag for unknown words can be assigned with high accuracy then the contextual rules can be used to improve accuracy as described abovein the transformationbased unknownword tagger the initialstate annotator naively assumes the most likely tag for an unknown word is quotproper nounquot if the word is capitalized and quotcommon nounquot otherwisebelow we list the set of allowable transformationschange the tag of an unknown word to y if 17 if we change the tagger to tag all unknown words as common nouns then a number of rules are learned of the form change tag to proper noun if the prefix is quotequot quotaquot quotbquot etc since the learner is not provided with the concept of upper case in its set of transformation templatesthe first 20 transformations for unknown wordsan unannotated text can be used to check the conditions in all of the above transformation templatesannotated text is necessary in training to measure the effect of transformations on tagging accuracysince the goal is to label each lexical entry for new words as accurately as possible accuracy is measured on a per type and not a per token basisfigure 6 shows the first 20 transformations learned for tagging unknown words in the wall street journal corpusas an example of how rules can correct errors generated by prior rules note that applying the first transformation will result in the mistagging of the word actressthe 18th learned rule fixes this problemthis rule states suffix sskeep in mind that no specific affixes are prespecifieda transformation can make reference to any string of characters up to a bounded lengthso while the first rule specifies the english suffix quotsquot the rule learner was not constrained from considering such nonsensical rules as also absolutely no englishspecific information need be prespecified in the learnerquot we then ran the following experiment using 11 million words of the penn treebank tagged wall street journal corpusof these 
950000 words were used for training and 150000 words were used for testingannotations of the test corpus were not used in any way to train the systemfrom the 950000 word training corpus 350000 words were used to learn rules for tagging unknown words and 600000 words were used to learn contextual rules 243 rules were learned for tagging unknown words and 447 contextual tagging rules were learnedunknown word accuracy on the test corpus was 822 and overall tagging accuracy on the test corpus was 966to our knowledge this is the highest overall tagging accuracy ever quoted on the penn treebank corpus when making the open vocabulary assumptionusing the tagger without lexicalized rules an overall accuracy of 963 and an unknown word accuracy of 820 is obtaineda graph of accuracy as a function of transformation number on the test set for lexicalized rules is shown in figure 7before applying any transformations test set accuracy is 924 so the transformations reduce the error rate by 50 over the baselinethe high baseline accuracy is somewhat misleading as this includes the tagging of unambiguous wordsbaseline accuracy when the words that are unambiguous in our lexicon are not considered is 864however it is difficult to compare taggers using this figure as the accuracy of the system depends on the particular lexicon usedfor instance in our training set the word the was tagged with a number of different tags and so according to our lexicon the is ambiguousif we instead used a lexicon where the is listed unambiguously as a determiner the baseline accuracy would be 846for tagging unknown words each word is initially assigned a partofspeech tag based on word and worddistribution featuresthen the tag may be changed based on contextual cues via contextual transformations that are applied to the entire corpus both known and unknownwordswhen the contextual rule learner learns transformations it does so in an attempt to maximize overall tagging accuracy and not unknownword tagging accuracyunknown words account for only a small percentage of the corpus in our experiments typically two to three percentsince the distributional behavior of unknown words is quite different from that of known words and transformations are not englishspecific the set of transformation templates would have to be extended to process languages with dramatically different morphology since a transformation that does not increase unknownword tagging accuracy can still be beneficial to overall tagging accuracy the contextual transformations learned are not optimal in the sense of leading to the highest tagging accuracy on unknown wordsbetter unknownword accuracy may be possible by training and using two sets of contextual rules one maximizing knownword accuracy and the other maximizing unknownword accuracy and then applying the appropriate transformations to a word when tagging depending upon whether the word appears in the lexiconwe are currently experimenting with this ideain weischedel et al a statistical approach to tagging unknown words is shownin this approach a number of suffixes and important features are prespecifiedthen for unknown words using this equation for unknown word emit probabilities within the stochastic tagger an accuracy of 85 was obtained on the wall street journal corpusthis portion of the stochastic model has over 1000 parameters with 108 possible unique emit probabilities as opposed to a small number of simple rules that are learned and used in the rulebased approachin addition the transformationbased method learns 
specific cues instead of requiring them to be prespecified allowing for the possibility of uncovering cues not apparent to the human language engineerwe have obtained comparable performance on unknown words while capturing the information in a much more concise and perspicuous manner and without prespecifying any information specific to english or to a specific corpusin table 2 we show tagging results obtained on a number of different corpora in each case training on roughly 95 x 105 words total and testing on a separate test set of 152 x 108 wordsaccuracy is consistent across these corpora and tag setsin addition to obtaining high rates of accuracy and representing relevant linguistic information in a small set of rules the partofspeech tagger can also be made to run extremely fastroche and schabes show a method for converting a list of tagging transformations into a deterministic finite state transducer with one state transition taken per word of input the result is a transformationbased tagger whose tagging speed is about ten times that of the fastest markovmodel taggerthere are certain circumstances where one is willing to relax the onetagperword requirement in order to increase the probability that the correct tag will be assigned to each wordin demarcken and weischedel et al kbest tags are assigned within a stochastic tagger by returning all tags within some threshold of probability of being correct for a particular wordwe can modify the transformationbased tagger to return multiple tags for a word by making a simple modification to the contextual transformations described abovethe initialstate annotator is the tagging output of the previously described onebest transformationbased taggerthe allowable transformation templates are the same as the contextual transformation templates listed above but with the rewrite rule change tag x to tag y modified to add tag x to tag y or add tag x to word w instead of changing the tagging of a word transformations now add alternative taggings to a wordwhen allowing more than one tag per word there is a tradeoff between accuracy and the average number of tags for each wordideally we would like to achieve as large an increase in accuracy with as few extra tags as possibletherefore in training we find transformations that maximize the function number of corrected errors number of additional tags in table 3 we present results from first using the onetagperword transformationbased tagger described in the previous section and then applying the kbest tag transformationsthese transformations were learned from a separate 240000 word corpusas a baseline we did kbest tagging of a test corpuseach known word in the test corpus was tagged with all tags seen with that word in the training corpus and the five most likely unknownword tags were assigned to all words not seen in the training corpusthis resulted in an accuracy of 990 with an average of 228 tags per wordthe transformationbased tagger obtained the same accuracy with 143 tags per word one third the number of additional tags as the baseline taggerin this paper we have described a new transformationbased approach to corpusbased learningwe have given details of how this approach has been applied to partofspeech tagging and have demonstrated that the transformationbased approach obtains competitive performance with stochastic taggers on tagging both unknown and known wordsthe transformationbased tagger captures linguistic information in a small number of simple nonstochastic rules as opposed to large numbers 
of lexical and contextual probabilities. this learning approach has also been applied to a number of other tasks, including prepositional phrase attachment disambiguation, bracketing text, and labeling nonterminal nodes. recently we have begun to explore the possibility of extending these techniques to other problems, including learning pronunciation networks for speech recognition and learning mappings between syntactic and semantic representations. this work was funded in part by nsf grant iri9502312. in addition, this work was done in part while the author was in the spoken language systems group at massachusetts institute of technology under arpa grant n00014891332 and by darpa/afosr grant afosr900066 at the university of pennsylvania. thanks to mitch marcus, mark villain, and the anonymous reviewers for many useful comments on earlier drafts of this paper
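As a concrete illustration of the point made in the conclusion above — that the contextual information ends up in a short, readable list of rules rather than in large tables of probabilities — here is a minimal sketch of the tagging (application) phase. The lexicon, the three rules, and their triggering conditions are illustrative stand-ins for what the learner would actually produce, loosely modeled on the "to conflict", "might vanish", and "might not reply" examples discussed earlier in the paper.

```python
# Illustrative lexicon: each known word's most likely tag from a (hypothetical) training corpus.
LEXICON = {"to": "TO", "conflict": "NN", "with": "IN",
           "might": "MD", "not": "RB", "vanish": "VBP", "reply": "NN"}

# Ordered contextual transformations: (from_tag, to_tag, trigger over the already-assigned tags).
RULES = [
    ("NN",  "VB", lambda tags, i: tags[i - 1] == "TO"),            # cf. "to conflict/NN -> VB"
    ("VBP", "VB", lambda tags, i: "MD" in tags[max(0, i - 2):i]),  # cf. "might vanish/VBP -> VB"
    ("NN",  "VB", lambda tags, i: "MD" in tags[max(0, i - 2):i]),  # cf. "might not reply/NN -> VB"
]

def tag(words):
    # Initial-state annotator: most likely tag per word (unknown words naively tagged NN).
    tags = [LEXICON.get(w.lower(), "NN") for w in words]
    # Apply each learned transformation in its learned order; in this simplified version the
    # effect of a rule is written out immediately while scanning left to right.
    for from_tag, to_tag, trigger in RULES:
        for i in range(1, len(tags)):
            if tags[i] == from_tag and trigger(tags, i):
                tags[i] = to_tag
    return list(zip(words, tags))

print(tag("might not reply".split()))
# [('might', 'MD'), ('not', 'RB'), ('reply', 'VB')]
```

Because rules are applied in their learned order, later rules see the corrections made by earlier ones, which is the property that distinguishes transformation lists from one-pass classifiers such as decision trees.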
J95-4004
transformation-based error-driven learning and natural language processing: a case study in part-of-speech tagging. recently there has been a rebirth of empiricism in the field of natural language processing. manual encoding of linguistic information is being challenged by automated corpus-based learning as a method of providing a natural language processing system with linguistic knowledge. although corpus-based approaches have been successful in many different areas of natural language processing, it is often the case that these methods capture the linguistic information they are modelling indirectly, in large opaque tables of statistics. this can make it difficult to analyze, understand, and improve the ability of these approaches to model underlying linguistic behavior. in this paper we will describe a simple rule-based approach to automated learning of linguistic knowledge. this approach has been shown for a number of tasks to capture information in a clearer and more direct fashion, without a compromise in performance. we present a detailed case study of this learning method applied to part-of-speech tagging. we outline a transformation-based learner which learns guessing rules from a pretagged training corpus. we propose non-sequential transformation-based learning. we introduce a symbolic machine learning method, a class sequence example transformation-based learning
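To make the learning procedure summarized above concrete, here is a minimal sketch of the greedy training loop for a single transformation template of the form "change tag A to tag B when the previous tag is Z". The corpus format, the tagset, the min_gain threshold, and the function names are illustrative assumptions rather than details taken from the paper, and the real learner searches a much richer set of templates over a full training corpus.

```python
from itertools import product

def apply_rule(tags, rule):
    """Apply one transformation (from_tag, to_tag, prev_tag), left to right."""
    from_tag, to_tag, prev_tag = rule
    out = list(tags)
    for i in range(1, len(out)):
        if out[i] == from_tag and out[i - 1] == prev_tag:
            out[i] = to_tag
    return out

def errors(tags, gold):
    return sum(t != g for t, g in zip(tags, gold))

def learn(initial_tags, gold_tags, tagset, min_gain=1):
    """Greedily learn an ordered transformation list that reduces tagging errors."""
    current, learned = list(initial_tags), []
    while True:
        best_rule, best_gain = None, 0
        for from_tag, to_tag, prev_tag in product(sorted(tagset), repeat=3):
            if from_tag == to_tag:
                continue
            candidate = (from_tag, to_tag, prev_tag)
            gain = errors(current, gold_tags) - errors(apply_rule(current, candidate), gold_tags)
            if gain > best_gain:
                best_rule, best_gain = candidate, gain
        if best_rule is None or best_gain < min_gain:
            break                                   # no transformation improves the corpus
        learned.append(best_rule)
        current = apply_rule(current, best_rule)    # update the training corpus
    return learned

# Toy "corpus": output of the initial-state annotator vs. the manually tagged truth.
initial = ["TO", "NN", "MD", "NN"]   # e.g. "to conflict ... might vanish", both mis-tagged NN
gold    = ["TO", "VB", "MD", "VB"]
print(learn(initial, gold, tagset={"TO", "NN", "VB", "MD"}))
# [('NN', 'VB', 'MD'), ('NN', 'VB', 'TO')]
```

Each iteration scores every candidate transformation by the net number of errors it removes from the current state of the corpus, keeps the best one, applies it, and repeats until no candidate clears the threshold — the same greedy, error-driven search described in the paper.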
translating collocations for bilingual lexicons a statistical approach collocations are notoriously difficult for nonnative speakers to translate primarily because they are opaque and cannot be translated on a wordbyword basis we describe a program named given a pair of parallel corpora in two different languages and a list of collocations in one of them automatically produces their translations our goal is to provide a tool for compiling bilingual lexical information above the word level in multiple languages for different domains the algorithm we use is based on statistical methods and produces pword translations of collocations in which not be the same for example decision employment equity market decision equite matiere demploi testing three years worth of the hansards corpus yielded the french translations of 300 collocations for each year evaluated at 73 accuracy on average in this paper we describe the statistical measures used the algorithm the implementation of our results and evaluation collocations are notoriously difficult for nonnative speakers to translate primarily because they are opaque and cannot be translated on a wordbyword basiswe describe a program named champollion which given a pair of parallel corpora in two different languages and a list of collocations in one of them automatically produces their translationsour goal is to provide a tool for compiling bilingual lexical information above the word level in multiple languages for different domainsthe algorithm we use is based on statistical methods and produces pword translations of nword collocations in which n and p need not be the samefor example champollion translates make decision employment equity and stock market into prendre decision equite en matiere demploi and bourse respectivelytesting champollion on three years worth of the hansards corpus yielded the french translations of 300 collocations for each year evaluated at 73 accuracy on averagein this paper we describe the statistical measures used the algorithm and the implementation of champollion presenting our results and evaluationhieroglyphics remained undeciphered for centuries until the discovery of the rosetta stone in the beginning of the 19th century in rosetta egyptthe rosetta stone is a tablet of black basalt containing parallel inscriptions in three different scripts greek and two forms of ancient egyptian writings jeanfrancois champollion a linguist and egyptologist made the assumption that these inscriptions were parallel and managed after several years of research to decipher the hieroglyphic inscriptionshe used his work on the rosetta stone as a basis from which to produce the first comprehensive hieroglyphics dictionary in this paper we describe a modern version of a similar approach given a large corpus in two languages our system produces translations of common word pairs and phrases that can form the basis of a bilingual lexiconour focus is on the use of statistical methods for the translation of multiword expressions such as collocations which are often idiomatic in naturepublished translations of such collocations are not readily available even for languages such as french and english despite the fact that collocations have been recognized as one of the main obstacles to second language acquisition we have developed a program named champollion1 which given a sentencealigned parallel bilingual corpus translates collocations in the source language into collocations in the target languagethe aligned corpus is used as a reference or 
database corpus and represents champollion s knowledge of both languageschampollion uses statistical methods to incrementally construct the collocation translation adding one word at a timeas a correlation measure champollion uses the dice coefficient commonly used in information retrieval for a given source language collocation champollion identifies individual words in the target language that are highly correlated with the source collocation thus producing a set of words in the target languagethese words are then combined in a systematic iterative manner to produce a translation of the source language collocationchampollion considers all pairs of these words and identifies any that are highly correlated with the source collocationnext triplets are produced by adding a highly correlated word to a highly correlated pair and the triplets that are highly correlated with the source language collocation are passed to the next stagethis process is repeated until no more highly correlated combinations of words can be foundchampollion selects the group of words with the highest cardinality and correlation factor as the target collocationfinally it produces the correct word ordering of the target collocation by examining samples in the corpusif word order is variable in the target collocation champollion labels it flexible otherwise the correct word order is reported and the collocation is labeled rigidto evaluate champollion we used a collocation compiler xtract to automatically produce several lists of source collocationsthese source collocations contain both flexible word pairs which can be separated by an arbitrary number of words and fixed constituents such as compound noun phrasesusing xtract on three parts of the english data in the hansards corpus each representing one year worth of data we extracted three sets of collocations each consisting of 300 randomly selected collocations occurring with medium frequencywe then ran champollion on each of these sets using three separate database corpora of varying size also taken from the hansards corpuswe asked several people fluent in both french and english to judge the results and the accuracy of champollion was found to range from 65 to 78in our discussion of results we show how problems for the lower score can be alleviated by increasing the size of the database corpusin the following sections we first present a review of related work in statistical natural language processing dealing with bilingual dataour algorithm depends on using a measure of correlation to find words that are highly correlated across languageswe describe the measure that we use and then provide a detailed description of the algorithm following this with a theoretical analysis of the performance of our algorithmnext we turn to a description of the results and evaluationfinally we show how the results can be used for a variety of applications closing with a discussion of the limitations of our approach and of future workthe recent availability of large amounts of bilingual data has attracted interest in several areas including sentence alignment word alignment alignment of groups of words and statistical translation of these aligning groups of words is most similar to the work reported here although as we shall show we consider a greater variety of groups than is typical in other researchin this section we describe work on sentence and word alignment and statistical translation showing how these goals differ from our own and then describe work on aligning groups of 
wordsnote that there is additional research using statistical approaches to bilingual problems but it is less related to ours addressing for example word sense disambiguation in the source language by statistically examining context in the source language thus allowing appropriate word selection in the target languageour use of bilingual corpora assumes a prealigned corpusthus we draw on work done at att bell laboratories by gale and church and at ibm by brown lai and mercer on bilingual sentence alignmentsentence alignment programs take a paired bilingual corpus as input and determine which sentences in the target language translate which sentences in the source languageboth the att and the ibm groups use purely statistical techniques based on sentence length to identify sentence pairing in corpora such as the hansardsthe att group defines sentence length by the number of characters in the sentences while the ibm group defines sentence length by the number of words in the sentenceboth approaches achieve similar results and have been influential in much of the research on statistical natural language processing including oursit has been noted in more recent work that lengthbased alignment programs such as these are problematic for many cases of real world parallel data such as ocr input in which periods may not be noticeable or languages where insertions or deletions are common these algorithms were adequate for our purposes but could be replaced by algorithms more appropriate for noisy input corpora if necessary sentence alignment techniques are generally used as a preprocessing stage before the main processing component that proposes actual translations whether of words phrases or full text and they are used this way in our work as welltranslation can be approached using statistical techniques alonebrown et al use a stochastic language model based on techniques used in speech recognition combined with translation probabilities compiled on the aligned corpus to do sentence translationtheir system candide uses little linguistic and no semantic information and currently produces good quality translations for short sentences containing high frequency vocabulary as measured by individual human evaluators while they also align groups of words across languages in the process of translation they are careful to point out that such groups may or may not occur at constituent breaks in the sentencein contrast our work aims at identifying syntactically and semantically meaningful units which may be either constituents or flexible word pairs separated by intervening words and provides the translation of these units for use in a variety of bilingual applicationsthus the goals of our research are somewhat differentkupiec describes a technique for finding noun phrase correspondences in bilingual corpora using several stagesfirst as for champollion the bilingual corpus must be aligned by sentencesthen each corpus is separately run through a partofspeech tagger and noun phrase recognizerfinally noun phrases are mapped to each other using an iterative reestimation algorithmevaluation was done on the 100 highestranking correspondences produced by the program yielding 90 accuracyevaluation has not been completed for the remaining correspondences4900 distinct english noun phrasesthe author indicates that the technique has several limitations due in part to the compounded error rates of the taggers and noun phrase recognizersvan der eijk uses a similar approach for translating termshis work is based on the 
assumption that terms are noun phrases and thus like kupiec uses sentence alignment tagging and a noun phrase recognizerhis work differs in the correlation measure he uses he compares local frequency of the term to global frequency decreasing the resulting score by a weight representing the distance between the actual position of the target term and its expected position in the corpus this weight is small if the target term is exactly aligned with the source term and larger as the distance increaseshis evaluation shows 68 precision and 64 recallwe suspect that the lower precision is due in part to the fact that van der eijk evaluated all translations produced by the program while kupiec only evaluated the top 2note that the greatest difference between these two approaches and ours is that van der eijk and kupiec only handle noun phrases whereas collocations have been shown to include parts of noun phrases categories other than noun phrases as well as flexible phrases that involve words separated by an arbitrary number of other words in this work as in earlier work we address the full range of collocations including both flexible and rigid collocations for a variety of syntactic categoriesanother approach begun more recently than our work is taken by dagan and church who use statistical methods to translate technical terminologylike van der eijk and kupiec they preprocess their corpora by tagging and by identifying noun phraseshowever they use a word alignment program as opposed to sentence alignment and they include single words as candidates for technical termsone of the major differences between their work and ours is that like van der eijk and kupiec they only handle translation of uninterrupted sequences of words they do not handle the broader class of flexible collocationstheir system termight first extracts candidate technical terms presenting them to a terminologist for filteringthen termight identifies candidate translations for each occurrence of a source term by using the word alignment to find the first and last target positions aligned with any words of the source termsall candidate translations for a given source term are sorted by frequency and presented to the user along with a concordancebecause termight does not use additional correlation statistics relying instead only on the word alignment it will find translations for infrequent terms none of the other approaches including champollion can make this claimaccuracy however is considerably lower the most frequent translation for a term is correct only 40 of the time since termight is fully integrated within a translator editor and is used as an aid for human translators it gets around the problem of accuracy by presenting the sorted list of translations to the translator for a choicein all cases the correct translation was found in this list and translators were able to speed up both the task of identifying technical terminology and translating termsother recent related work aims at using statistical techniques to produce translations of single words as opposed to collocations or phraseswu and xia employed an estimationmaximization technique to find the optimal word alignment from previously sentencealigned clean parallel corpora2 with additional significance filteringthe work by fung and mckeown and fung is notable for its use of techniques suitable to asianromance language pairs as well as romance language pairsgiven that asian languages differ considerably in structure from romance languages statistical methods that were 
previously proposed for pairs of european languages do not work well for these pairsfung and mckeown work also focuses on word alignment from noisy parallel corpora where there are no clear sentence boundaries or perfect translationswork on the translation of single words into multiword sequences that integrates techniques for machinereadable dictionaries with statistical corpus analysis is also relevantwhile this work focuses on a smaller set of words for translation it provides a sophisticated approach using multiple knowledge sources to address both onetomany word translations and the problem of sense disambiguationgiven only one word in the source their system bicord uses the corpus to extend dictionary definitions and provide translations that are appropriate for a given sense but do not occur in the dictionary producing a bilingual lexicon of movement verbs as outputcollocations commonly occurring word pairs and phrases are a notorious source of difficulty for nonnative speakers of a language this is because they cannot be translated on a wordbyword basisinstead a speaker must be aware of the meaning of the phrase as a whole in the source language and know the common phrase typically used in the target languagewhile collocations are not predictable on the basis of syntactic or semantic rules they can be observed in language and thus must be learned through repeated usagefor example in american english one says set the table while in british english the phrase lay the table is usedthese are expressions that have evolved over timeit is not the meaning of the words lay and set that determines the use of one or the other in the full phrasehere the verb functions as a support verb it derives its meaning in good part from the object in this context and not from its own semantic featuresin addition such collocations are flexiblethe constraint is between the verb and its object and any number of words may occur between these two elements collocations also include rigid groups of words that do not change from one context to another such as compounds as in canadian charter of rights and freedomsto understand the difficulties that collocations pose for translation consider sentences and as because similar problems including to take steps to provi2 these corpora had little noisemost sentences neatly corresponded to translations in the paired corpus with few extraneous sentences quotmr speaker our government has demonstrated its support for these important principles by taking steps to enforce the provisions of the charter more vigorouslyquot the ability to automatically acquire collocation translations is thus a definite advantage for sublanguage translationwhen moving to a new domain and sublanguage translations that are appropriate can be acquired by running champollion on a new corpus from that domainsince in some instances parts of a sentence can be translated on a wordbyword basis a translator must know when a full phrase or pair of words must be considered for translation and when a wordbyword technique will sufficetwo tasks must therefore be considered for both tasks general knowledge of the two languages is not sufficientit is also necessary to know the expressions used in the sublanguage since we have seen that idiomatic phrases often have different translations in a restricted sublanguage than in general usagein order to produce a fluent translation of a full sentence it is necessary to know the specific translation for each of the source collocationswe use xtract a tool we developed 
previously to identify collocations in the source language xtract works in three stagesin the first stage word pairs that cooccur with significant frequency are identifiedthese words can be separated by up to four intervening words and thus constitute flexible collocationsin the second stage xtract identifies combinations of word pairs from stage one with other words and phrases producing compounds and idiomatic templates in the final stage xtract filters any pairs that do not consistently occur in the same syntactic relation using a parsed version of the corpusthis tool has been used in several projects at columbia university and has been distributed to a number of research and commercial sites worldwidextract has been developed and tested on englishonly inputfor optimal performance xtract itself relies on other tools such as a partofspeech tagger and a robust parseralthough such tools are becoming more widely available in many languages they are still hard to findwe have thus assumed in champollion that these tools were only available in one of the two languages namely english termed the source language throughout the paperto rank the proposed translations so that the best one is selected champollion uses a quantitative measure of correlation between the source collocation and its complete or partial translationsthis measure is also used to reduce the search space to a manageable size by filtering out partial translations that are not highly correlated with the source collocationin this section we discuss the properties of similarity measures that are appropriate for our applicationwe explain why the dice coefficient meets these criteria and why this measure is more appropriate than another frequently used measuremutual informationour approach is based on the assumption that each collocation is unambiguous in the source language and has a unique translation in the target language in this way we can ignore the context of the collocations and their translations and base our decisions only on the patterns of cooccurrence of each collocation and its candidate translations across the entire corpusthis approach is quite different from those adopted for the translation of single words since for single words polysemy cannot be ignored indeed the problem of sense disambiguation has been linked to the problem of translating ambiguous words the assumption of a single meaning per collocation was based on our previous experience with english collocations is supported for less opaque collocations by the fact that their constituent words tend to have a single sense when they appear in the collocation and was verified during our evaluation of champollion we construct a mathematical model of the events we want to correlate namely the appearance of any word or group of words in the sentences of our corpus as follows to each group of words g in either the source or the target language we map a binary random variable xg that takes the value quot1quot if g appears in a particular sentence and quot0quot if notthen the corpus of paired sentences comprising our database represents a collection of samples for the various random variables x for the various groups of wordseach new sentence in the corpus provides a new independent sample for every variable xgfor example if g is unemployment rate and the words unemployment rate appear only in the fifth and fiftyfifth sentences of our corpus then in our sample collection xg takes the value quot1quot for the fifth and fiftyfifth sentences and quot0quot for all other 
sentences in the corpus. Furthermore, for the measurement of correlation between a word group g in the source language and another word group h in the target language, we map the paired sentences in our corpus to a collection of paired samples for the random variables X_g and X_h. This modeling process allows us to use correlation metrics between paired samples of random variables to measure the correlation between word groups across languages.

There are several ways to measure the correlation of two such random variables. One measure frequently used in information retrieval is the Dice coefficient, defined as

\mathrm{Dice}(X, Y) = \frac{2\,P(X=1, Y=1)}{P(X=1) + P(Y=1)}

where P(X, Y), P(X), and P(Y) are the joint and marginal probability mass functions of the variables X and Y, respectively. Using maximum likelihood estimates for the probabilities in the above equation, we have

\mathrm{Dice}(X, Y) = \frac{2 f_{XY}}{f_X + f_Y}

where f_X, f_Y, and f_{XY} are the absolute frequencies of appearance of "1"s for the variables X, Y, and both X and Y together, respectively.

On the other hand, in computational linguistics, information-theoretic measures such as mutual information are widely used. In information theory, the mutual information I(X; Y) between two binary random variables X and Y is defined as

I(X; Y) = \sum_{x \in \{0,1\}} \sum_{y \in \{0,1\}} P(X=x, Y=y) \log \frac{P(X=x, Y=y)}{P(X=x)\,P(Y=y)}

However, in computational linguistics the term mutual information has most of the time been used to describe only a part of the above sum, namely the term from the X = 1, Y = 1 case. In other words, this alternative measure of mutual information, which we will refer to as specific mutual information SI(X, Y), is the quantity

\mathrm{SI}(X, Y) = \log \frac{P(X=1, Y=1)}{P(X=1)\,P(Y=1)}

I(X; Y) is the average of the corresponding log-ratio terms taken over the four combinations of values of X and Y according to the joint probability distribution P(X, Y), so sometimes the term average mutual information is used for I(X; Y). Average mutual information expresses the difference between the entropy of one of the variables and the conditional entropy of that variable given the other variable, I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X). Thus average mutual information measures the reduction in the uncertainty about the value of one variable that knowledge of the value of the other variable provides, averaged over all possible values of the two variables; equivalently, average mutual information is "the information about X contained in Y". Specific mutual information represents the log-likelihood ratio of the joint probability of seeing a "1" in both variables over the probability that such an event would have if the two variables were independent, and thus provides a measure of the departure from independence.

The Dice coefficient, on the other hand, combines the conditional probabilities P(Y=1 | X=1) and P(X=1 | Y=1) with equal weights in a single number. This can be shown by dividing the numerator and denominator of the definition of the Dice coefficient above by P(X=1, Y=1):

\mathrm{Dice}(X, Y) = \frac{2}{\frac{1}{P(Y=1 \mid X=1)} + \frac{1}{P(X=1 \mid Y=1)}}

that is, the harmonic mean of the two conditional probabilities. As is evident from the above equation, the Dice coefficient depends only on the conditional probabilities of seeing a "1" for one of the variables after seeing a "1" for the other variable, and not on the marginal probabilities of "1"s for the two variables. In contrast, both the average and the specific mutual information depend on both the conditional and the marginal probabilities. For SI in particular we have

\mathrm{SI}(X, Y) = \log \frac{P(X=1 \mid Y=1)}{P(X=1)} = \log \frac{P(Y=1 \mid X=1)}{P(Y=1)}

To select among the three measures, we first observe that for our application 1-1 matches are significant while 0-0 matches are not. These two types of matches correspond to the cases where either both word groups of interest appear in a pair of aligned sentences or neither word group does. Seeing the two word groups in aligned sentences certainly contributes to their association and increases our belief that one is the translation of the other; similarly, seeing only one of them decreases our belief in their association.
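To make the preceding definitions concrete, the following is a minimal illustrative sketch, in Python, of how the three measures can be estimated from the absolute frequencies f_X, f_Y, f_XY and the corpus size N. The function names and the code itself are our own illustration and are not part of the Champollion implementation; the demo counts are those of the Table 1 example discussed in the following paragraphs.

```python
import math

def dice(f_x: int, f_y: int, f_xy: int) -> float:
    """Dice coefficient estimated from absolute frequencies: 2 * f_XY / (f_X + f_Y)."""
    return 2.0 * f_xy / (f_x + f_y)

def specific_mi(f_x: int, f_y: int, f_xy: int, n: int) -> float:
    """Specific mutual information SI(X, Y) in bits:
    log2( P(X=1, Y=1) / (P(X=1) * P(Y=1)) )."""
    p_x, p_y, p_xy = f_x / n, f_y / n, f_xy / n
    return math.log2(p_xy / (p_x * p_y))

def average_mi(f_x: int, f_y: int, f_xy: int, n: int) -> float:
    """Average mutual information I(X; Y) in bits, summed over the four cells
    of the 2x2 contingency table implied by the counts."""
    cells = {
        (1, 1): f_xy,
        (1, 0): f_x - f_xy,
        (0, 1): f_y - f_xy,
        (0, 0): n - f_x - f_y + f_xy,
    }
    marg_x = {1: f_x / n, 0: 1.0 - f_x / n}
    marg_y = {1: f_y / n, 0: 1.0 - f_y / n}
    mi = 0.0
    for (x, y), count in cells.items():
        p_xy = count / n
        if p_xy > 0:
            mi += p_xy * math.log2(p_xy / (marg_x[x] * marg_y[y]))
    return mi

# Counts of the Table 1 example below: 100 aligned sentence pairs, each word
# group appears 5 times, and the two groups co-occur in 2 pairs.
print(dice(5, 5, 2))              # 0.4
print(specific_mi(5, 5, 2, 100))  # 3.0 bits
print(average_mi(5, 5, 2, 100))   # a small positive value (about 0.046 bits)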
Given the many possible groups of words that can appear in each sentence, the fact that neither of two groups of words appears in a pair of aligned sentences does not offer any information about their similarity. Even when the word groups have been observed relatively few times, seeing additional sentences containing none of the groups of words we are interested in should not affect our estimate of their similarity. In other words, in our case X and Y are highly asymmetric: a "1" value is much more informative than a "0" value. Therefore we should select a similarity measure that is based only on 1-1 matches and mismatches; 0-0 matches should be completely ignored, otherwise they would dominate the similarity measure, given the overall relatively low frequency of any particular word or word group in our corpus.

The Dice coefficient satisfies the above requirement of asymmetry: adding 0-0 matches does not change any of the absolute frequencies f_{XY}, f_X, and f_Y, and so does not affect Dice(X, Y). On the other hand, average mutual information depends only on the distribution of X and Y and not on the actual values of the random variables. In fact, I(X; Y) is a completely symmetric measure: if the variables X and Y are transformed so that every "1" is replaced with a "0" and vice versa, the average mutual information between X and Y remains the same. This is appropriate in the context of communications, for which mutual information was originally developed, where the ones and zeros encode two different states with no special preference for either of them. But in the context of translation, exchanging the "1"s and "0"s is equivalent to considering a word or word group to be present when it was absent and vice versa, thus converting all 1-1 matches to 0-0 matches and all 0-0 matches to 1-1 matches. As explained above, such a change should not be considered similarity preserving, since 1-1 matches are much more significant than 0-0 ones.

As a concrete example, consider a corpus of 100 matched sentences where each of the word groups associated with X and Y appears five times. Furthermore, suppose that the two groups appear twice in a pair of aligned sentences, and each word group also appears three times by itself. This situation is depicted in the column labeled "original variables" in Table 1. Since each word group appears two times with the other group and three times by itself, we would normally consider the source and target groups somewhat similar but not strongly related, and indeed the value of the Dice coefficient, Dice(X, Y) = (2 × 2)/(5 + 5) = 0.4, intuitively corresponds to that assessment of similarity. Now suppose that the "0"s and "1"s in X and Y are exchanged, so that the situation is now described by the last column of Table 1. The transformed variables now indicate that out of 100 sentences the two word groups appear together 92 times, while each appears by itself three times, and there are two sentences that contain none of the groups. We would consider such evidence to strongly indicate very high similarity between the two groups, and indeed the Dice coefficient of the transformed variables is now (2 × 92)/(95 + 95) = 184/190 = 0.9684. However, the average mutual information of the variables would remain the same.

Specific mutual information falls somewhere in between the Dice coefficient and average mutual information: it is not completely symmetric, but neither does it ignore 0-0 matches. This measure is very sensitive to the marginal probabilities of the "1"s in the two variables, tending to give higher values as these probabilities decrease; the short numerical sketch below illustrates this sensitivity.
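The following small Python fragment (our own illustration, not taken from the article) keeps the "1" counts of the Table 1 example fixed and only enlarges the corpus with sentence pairs that contain neither word group, i.e., pure 0-0 matches; the Dice coefficient is unaffected while SI grows without bound.

```python
import math

def dice(f_x, f_y, f_xy):
    return 2.0 * f_xy / (f_x + f_y)

def specific_mi_bits(f_x, f_y, f_xy, n):
    return math.log2((f_xy / n) / ((f_x / n) * (f_y / n)))

# Keep f_X = f_Y = 5 and f_XY = 2 fixed, and grow the corpus by adding
# sentence pairs that contain neither word group (0-0 matches only).
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(dice(5, 5, 2), 4), round(specific_mi_bits(5, 5, 2, n), 4))
# Dice stays at 0.4 for every corpus size, while SI grows without bound
# (3.0, 6.32, 9.64, 12.97 bits), making the rare pair look ever more similar.
```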
Adding 0-0 matches lowers the relative frequencies of "1"s and therefore always increases the estimate of SI. Furthermore, as the marginal probabilities of the two word groups become very small, SI tends to infinity independently of the distribution of matches and mismatches, as long as the joint probability of 1-1 matches is not zero. By taking the limit of SI for P(X=1) → 0 or P(Y=1) → 0 in the conditional-probability form of SI given above, we can easily verify that this happens even if the conditional probabilities P(X=1 | Y=1) and P(Y=1 | X=1) remain constant, a fact that should indicate a constant degree of relatedness between the two variables. Neither of these problems occurs with the Dice coefficient, exactly because that measure combines the conditional probabilities of "1"s in both directions without looking at the marginal distributions of the two variables. In fact, in cases such as the examples of Table 1, where P(X=1 | Y=1) = P(Y=1 | X=1), the Dice coefficient becomes equal to these conditional probabilities.

The dependence of SI on the marginal probabilities of "1"s shows that using it would make rare word groups look more similar than they really are. For our example in Table 1, the specific mutual information is SI = log_2 (0.02 / (0.05 × 0.05)) = log_2 8 = 3 bits for the original variables, but SI = log_2 (0.92 / (0.95 × 0.95)) = log_2 1.019391 = 0.027707 bits for the transformed variables. Note, however, that the change is in the opposite direction from the appropriate one; that is, the new variables are deemed far less similar than the old ones. This can be attributed to the fact that the number of "1"s in the original variables is far smaller. SI also suffers disproportionately from estimation errors when the observed counts of "1"s are very small. While all similarity measures will be inaccurate when the data is sparse, the results produced by specific mutual information can be more misleading than the results of other measures because SI is not bounded. This is not a problem for our application, as Champollion applies absolute frequency thresholds to avoid considering very rare words and word groups, but it indicates another potential problem with the use of SI to measure similarity.

Finally, another criterion for selecting a similarity measure is its suitability for testing for a particular outcome, where the outcome is determined by the application. In our case, we need a clear-cut test to decide when two events are correlated. Both for mutual information and the Dice coefficient this involves comparison with an experimentally determined threshold. Although the two measures are similar in that they compare the joint probability P(X=1, Y=1) with the marginal probabilities, they have different asymptotic behaviors. This was demonstrated in the previous paragraphs for the cases of small and decreasing relative frequencies; here we examine two more cases associated with specific tests. We consider the two extreme cases where the variables are independent, P(X=1, Y=1) = P(X=1) P(Y=1), and where they are completely dependent, X = Y. In the first case, both average and specific mutual information are equal to 0, since log (P(X=x, Y=y) / (P(X=x) P(Y=y))) = log 1 = 0 for all x and y, and are thus easily testable, whereas the Dice coefficient is equal to 2 P(X=1) P(Y=1) / (P(X=1) + P(Y=1)) and is thus a function of the individual frequencies of the two word groups. In this case the test is easier to decide using mutual information. In the second case the results are reversed: specific mutual information is equal to log (P(X=1) / P(X=1)^2) = -log P(X=1), and it can be shown that the average mutual information becomes equal to the entropy H(X) of X. Both of these measures depend on the individual probabilities of the word groups, whereas the Dice coefficient is equal to 2 P(X=1) / (P(X=1) + P(Y=1)) = 1. In this case the test is easier to decide using the Dice coefficient. Since we are looking for a way to identify
positively correlated events we must be able to easily test the second case while testing the first case is not relevantspecific mutual information is a good measure of independence but good measures of independence are not necessarily good measures of similaritythe above arguments all support the use of the dice coefficient over either average or specific mutual informationwe have confirmed the theoretically expected behavior of the similarity measures through testingin our early work on champollion we used specific mutual information as a correlation metricafter carefully studying the errors produced we suspected that the dice measure would produce better results for our task according to the arguments given aboveconsider the example given in table 2in the table the second column represents candidate french word pairs for translating the single word todaythe third column gives the frequency of the word today in a subset of the hansards containing 182584 sentencesthe fourth column gives the frequency of each french word pair in the french counterpart of the same corpus and the fifth column gives the frequency of appearance of today and each french word pair in matched sentencesfinally the sixth and seventh columns give the similarity scores for today and each french word pair computed according to the dice measure or specific mutual information respectivelyof the four candidates aujourd hui is the only correct translationwe see from the table that the specific mutual information scores fail to identify aujourd hui as the best candidateit is only ranked fourthfurthermore the four si scores are very similar thus not clearly differentiating the resultsin contrast the dice coefficient clearly identifies aujourd hui as the group of words most similar to today which is what we wantafter implementing champollion we attempted to generalize these results and confirm our theoretical argumentation by performing an experiment to compare si and the dice coefficient in the context of champollionwe selected a set of 45 collocations with midrange frequency identified by xtract and we ran champollion on them using sample training corpora for each run of champollion and for each input collocation we took the final set of candidate translations of different lengths produced by champollion and compared the results obtained using both the dice coefficient and si at the last stage for selecting the proposed translationthe 45 collocations were randomly selected from a larger set of 300 collocations so that the dice coefficient performance on them is representative and the correct translation is always included in the final set of candidate translationsin this way the number of erroneous decisions made when si is used at the final pass is a lower bound on the number of errors that would have been made if si had also been used in the intermediate stageswe compared the results and found that out of the 45 source collocations table 3 summarizes these results and shows the breakdown across categoriesin the table the numbers of collocations correctly and incorrectly translated when the dice coefficient is used are shown in the second and third rows respectivelyfor both cases the second column indicates the number of collocations that were correctly translated with si and the third column indicates the number of these collocations that were incorrectly translated with sithe last column and the last row show the total number of collocations correctly and incorrectly translated when the dice coefficient or si is used 
respectivelyfrom the table we see that every time si produced good results the dice coefficient also produced good results there were no cases for which si produced a correct result while the dice coefficient produced an incorrect onein addition we see that out of the 17 incorrect results produced by si the dice coefficient corrected 10although based on only a few cases this experiment confirms that the dice coefficient outperforms si in the context of champolliontable 4 gives concrete examples from this experiment in which the dice coefficient outperforms specific mutual informationthe table has a format similar to that of table 2x represents an english collocation and y represents candidate translations in french the correct translations are again shown in boldthe third and fourth columns give the independent frequencies of each word group while the fifth column gives the number of times that both groups appear in matched sentencesthe two subsequent columns give the similarity values computed according to the dice coefficient and specific mutual information the corpus used for these examples contained 54944 sentences in each languagewe see from table 4 that as for the today example in table 2 the si scores are very close to each other and fail to select the correct candidate whereas the dice scores cover a wider range and clearly peak for the correct translationin conclusion both theoretical arguments and experimental results support the choice of the dice coefficient over average or specific mutual information for our champollion translates single words or collocations in one language into collocations in a second language using the aligned corpus as a reference databasebefore running champollion there are two steps that must be carried out source and target language sentences of the database corpus must be aligned and a list of collocations to be translated must be provided in the source languagefor our experiments we used corpora that had been aligned by gale and church sentence alignment program as our input data8 since our intent in this paper is to evaluate champollion we tried not to introduce errors into the training data for this purpose we kept only the 11 alignmentsindeed more complex sentence alignments tend to have a much higher alignment error rate by doing so we lost an estimated 10 of the text which was not problematic since we had enough datain the future we plan to design more flexible techniques that would work from a loosely aligned corpus to compile collocations we used xtract on the english version of the hansardssome of the collocations retrieved are shown in table 5collocations labeled quotfixedquot such as international human rights covenants are rigid compoundscollocations labeled quotflexiblequot are pairs of words that can be separated by intervening words or occur in reverse order possibly with different inflected formsgiven a source english collocation champollion first identifies in the database corpus all the sentences containing the source collocationit then attempts to find all words that can be part of the translation of the collocation producing all words that are highly correlated with the source collocation as a wholeonce this set of words is identified champollion iteratively combines these words in groups so that each group is in turn highly correlated with the source collocationfinally champollion produces as the translation the largest group of words having a high correlation with the source collocationmore precisely for a given source 
collocation champollion initially identifies a set s of k words that are highly correlated with the source collocationthis operation is described in detail in section 51 belowchampollion assumes that the target collocation is a combination of some subset of these wordsits search space at this point thus consists of the powerset p of s containing 2k elementsinstead of computing a correlation factor for each of the 2quot elements with the source collocation champollion searches a part of this space in an iterative mannerchampollion first forms all pairs of words in s evaluates the correlation between each pair and the source collocation using the dice coefficient and keeps only those pairs that score above some thresholdsubsequently it constructs the threeword elements of p containing one of 7 the choice of the dice coefficient is not crucial for example using the jaccard coefficient or any other similarity measure that is monotonically related to the dice coefficient would be equivalentwhat is important is that the selected measure satisfy the conditions of asymmetry insensitivity to marginal word probabilities and convenience in testing for correlationthere are many other possible measures of association and the general points made in this section may apply to them insofar as they also exhibit the properties we discussedfor example the normalized chisquare measure used in gale and church shares some of the important properties of average mutual information 8 we are thankful to ken church and the att bell laboratories for providing us with a prealigned hansards corpus these highly correlated pairs plus a member of s measures their correlation with the source collocation and keeps the triplets that score above the thresholdthis process is repeated until for some value n 2 be the fraction of proposed translations with i words which pass the threshold tdlet 131 be the number of translations with i words that are examined by champollion and si the number of these translations that actually survive the thresholds and will be used to generate the candidate translations with i 1 wordsclearly si p1 q p2 and si ri p for i 2during the generation of the candidate translations of length i 1 each of the si translations of length i can combine with q i single words that are sufficiently correlated with the source language collocation generating q i possible translations of length i 1 since champollion does not consider translations that include repeated wordshowever there are up to i 1 different ways that the same set of i 1 words can be generated in this manner for example can be generated by adding c to adding b to or adding a to when the set of translations of length i has been filtered it is possible that not all of the i 1 ways to generate a given translation of length i 1 are availablein general we have and with a similar derivation for the upper bound pi 2for a particular translation with i 3 words to be generated at least one of its i subsets with i 1 words must have survived the thresholdwith our assumptions we have from this recurrence equation and the boundary conditions given above we can compute the values of and a for all ithen the expected number of candidate translations with i 3 words examined by champollion will be and the sum of these terms for i 3 to m plus the terms q and gives the total complexity of our algorithmin table 6 we show the number of candidate translations examined by the exhaustive algorithm and the corresponding best worst and averagecase behavior of champollion for 
several values of q and m using empirical estimates of the riwe showed above that filtering is necessary to bring the number of proposed translations down to manageable levelsfor any corpus of reasonable size we can find cases where a valid translation is missed because a part of it does not pass the thresholdlet n be the size of the corpus in terms of matched sentencesseparate the n sentences into eight categories depending on whether each of the source collocation and the partial translations appear in itlet the counts of these sentences be 11abx nabc nag where a bar indicates that the corresponding term is absentwe can then find values of the n that because the algorithm to miss a valid translation as long as the corpus contains a modest number of sentencesthis happens when one or more of the parts of the final translation appear frequently in the corpus but not together with the other parts or the source collocationthis phenomenon occurs even if we are allowed to vary the dice thresholds at each stage of the algorithmwith our current constant dice threshold td 01 we may miss a valid translation as long as the corpus contains at least 20 sentenceswhile our algorithm will necessarily miss some valid translations this is a worst case scenarioto study the averagecase behavior of our algorithm we simulated its performance with randomly selected points with integer nonnegative coordinates where no is the number of quotinterestingquot sentences in the corpus for the translation under consideration that is the number of sentences that contain at least one of x a or b13 sampling from this sixdimensional polytope in sevendimensional space is not easywe accomplish it by constructing a mapping from the uniform distribution to each allowed value for the n using combinatorial methodsfor example for no 50 there are 3478761 different points with 11abx 0 but only one with nabx 50using the above method we sampled 20000 points for each of several values for no the results of the simulation were very similar for the different values of no with no apparent pattern emerging as no increasedtherefore in the following we give averages over the values of no triedwe first measured the percentage of missed valid translations when either a or b or both do not pass the threshold but ab should for different values of the threshold parameter we observed that for low values of the threshold less than 1 of the valid translations are missed for example for the threshold value of 010 we currently use the error rate is 074however as the threshold increases the rate of failure can become unacceptablea higher value for the threshold has two advantages first it offers higher selectivity allowing fewer false positives represents the basic algorithm with no threshold changes accurate by the human judgessecond it speeds up the execution of the algorithm as all fractions r decrease and the overall number of candidate translations is reducedhowever as figure 3 shows high values of the threshold parameter because the algorithm to miss a significant percentage of valid translationsintuitively we expect this problem to be alleviated if a higher threshold value is used for the final admittance of a translation but a lower threshold is used internally when the subparts of the translation are consideredour second simulation experiment tested this expectation for various values of the final threshold using a lower initial threshold equal to a constant a 1 times the final thresholdthe results are represented by the remaining curves of 
figure 3surprisingly we found that with moderate values of a this method gives a very low failure rate even for high final threshold values and is preferable to using a constant but lower threshold just to reduce the failure ratefor example running the algorithm at an initial threshold of 03 and a final threshold of 06 gives a failure rate of 045 much less than the failure rate of 659 which corresponds to a constant threshold of 03 for both stagesthe above analyses show that the algorithm fails quite rarely when the threshold is low and its performance can be improved with a sequence of increasing thresholdswe also studied cases where the algorithm does failfor this purpose we stratified 14 the curves in figure 3 become noticeably less smooth for values of the final threshold that are greater than 08this happens for all settings of a in figure 3this apparently different behavior for high threshold values can be traced to sampling issuessince few of the 20000 points in each sample meet the criterion of having dice greater or equal to the threshold for high final threshold values the estimate of the percentage of failures is more susceptible to random variation in such casesfurthermore since the same sample is used for all values of a any such random variation due to small sample size will be replicated in all curves of figure 3 our samples into five groups based on the dice coefficient between the two parts a and b figure 4 shows the failure rate for the groups of low middle and high dice values using the same threshold at both levelswe observe that the algorithm tends to fail much less frequently when the two parts of the final translation are strongly relatedbut this is desirable behavior since a strong correlation between the two subparts of the word group indicates that it is indeed a collocation in the target languagetherefore failures of the algorithm act to some extent as an filter rejecting uninteresting translations that would otherwise have been accepted by the exhaustive methodtable 7 shows sample results from the simulation experiments summarizing figures 3 and 4 for several representative casesthe first column gives the threshold used at the second level while the second through fourth columns show failure rates for various values of afor example the second column shows failure rates when the same threshold is used for both levelswe evaluated champollion in several separate trials varying the collocations provided as input and the database corpora used as referencewe used three different sets of collocations as input each taken from a different year of the english half of the hansards corpuswe tested these collocations on database corpora of varying size 15 we limited the evaluation of champollion to three types of collocations nounnoun verbnoun and adjectivenoun obtained using the first two stages of xtract and year taken from the aligned hansardstable 8 illustrates the range of translations which champollion producesflexible collocations are shown with ellipsis points indicating where additional variable words could appearthese examples show cases where a two word collocation is translated as one word a two word collocation is translated as three words and how words can be inverted in the translation in this section we discuss the design of the separate tests and our evaluation methodology and present the results of our evaluationwe carried out three tests with champollion using two database corpora and three sets of source collocationsthe first database corpus consists of 8 
months of hansards aligned data taken from 1986 and the second database corpus consists of all of the 1986 and 1987 transcripts of the canadian parliament for the first corpus we ran xtract and obtained a set of approximately 3000 collocations from which we randomly selected a subset of 300 for manual evaluation purposesthe 300 collocations were selected from among the collocations of midrange frequencycollocations appearing more than 10 times in the corpuswe call this first set of source collocations clthe second set is a set of 300 collocations similarly selected from the set of approximately 5000 collocations identified by xtract on all data from 1987the third set of collocations consists of 300 collocations selected from the set of approximately 5000 collocations identified by xtract on all data from 1988we used db1 with both cl and c2 and we used db2 with c3 we asked three fluent bilingual speakers to evaluate the results for the different experimentsthe evaluators first examined the source collocation validating that it indeed was a word group exhibiting semantic coherencesource collocations that seemed incorrect were removed from further considerationthe evaluators then classified the translations of the remaining collocations as either correct or incorrectin this way we decoupled the evaluation of champollion from the errors made by xtractit is clear that our classification scheme is not perfect because some cases are difficult to judgethe judges were not especially familiar with the institutionalized differences between canadian french and continental french for example without knowledge of canadian french it is difficult to judge if the translation of affirmative action is action positive since this term is not used in other forms of frenchone of the biggest problems for the evaluators was scoring translations of collocations with prepositions and articleschampollion does not translate closedclass words such as prepositions and articlestheir frequency is so high in comparison to openclass words that including them in the candidate translations causes problems with the correlation metricevaluators generally counted a translation as incorrect if it did not contain a preposition or article when it should havethere is one exception to this general rule when the translation should have included one closedclass word it was obvious what that word should be it occurred in one empty slot in the phrase and champollion produced a rigid collocation with an empty slot at that position and with the correct openclass words they judged the translation correctit is exactly in these cases that the closedclass word could easily be filled in when examining samples in the corpus to determine word orderingsince the collocation is rigid the same preposition or article occurs in most cases so it could be extracted from the samples along with word orderingfor example when judging the translation of assistance program into programme x aide the judges knew that the missing word was d even without looking at the corpusin section 9 we describe a later version of champollion in which we added the capability to identify these types of closedclass words during the last stagethe results of the evaluation experiments are given in table 9the first column describes the experiment the second column gives the percentage of xtract errors and the next two columns give the percentages of incorrect and correct translations of source collocations in comparison to the total number of collocationssince our previous work 
shows that xtract has a rate of accuracy of 80 it is reasonable to expect a certain number of errors in the input to champollion but these should not contribute to the evaluation of champollionconsequently we have included in the last column of the table the percentage of correct translations produced by champollion in comparison to the total number of valid collocations supplied to it namely the percentage of champllion correct translations if xtract errors are filtered from the inputthis quantity is equal to the ratio of the fourth column over the sum of the third and fourth columnsthe accuracy figures shown in table 9 are computed by averaging the scores of the three individual judgeshowever we noted that the scores of the individual evaluators never varied by more than 2 thus showing high agreement between the judgessuch results indicate that in general there is a single correct answer in each case verifying the hypothesis of a unique translation per collocation independently of context which we postulated in section 4they also indicate that it is generally easy for the evaluators to identify this unique correct answerwhen there is not a single correct answer or when it is not easy for the evaluators to identify the correct answer it is prudent to guard against the introduction of bias by asking the evaluators to produce their answers independently of the system output as we have argued elsewhere however for the problem at hand the uniqueness and accessibility of the correct answer greatly alleviates the danger of introducing bias by letting the evaluators grade the translations produced by champollionsince the latter method makes more efficient use of the judges we decided to adopt it for our evaluationamong the three experiments described above our best results are obtained when the database corpus is also used as the corpus from which xtract identifies the source language collocations in this case not counting xtract errors accuracy is rated at 78it should be noted that the thresholds used by champollion were determined by experimenting on a separate data setsince determining the thresholds is the only training required for our statistical method using the same corpus as both the database and the source of input collocations is not a case of testing on the training datathe second experiment yielded the lowest results because many input collocations simply did not appear often enough in the database corpushowever we suspected that this could be corrected by using a larger database corpusthus for our third experiment we used db2 which contained two years of the hansards and drew our input collocations from yet a different year evaluation on this third experiment raised the accuracy to nearly as high as the first experiment yielding 74a bilingual lexicon of collocations has a variety of potential usesthe most obvious are machine translation and machineassisted human translation but other multilingual applications including information retrieval summarization and computational lexicography also require access to bilingual lexiconswhile some researchers are attempting machine translation through purely statistical techniques the more common approach is to use some hybrid of interlingual and transfer techniquesthese symbolic machine translation systems must have access to a bilingual lexicon and the ability to construct one semiautomatically would ease the development of such systemschampollion is particularly promising for this purpose for two reasonsfirst it constructs translations for 
multiword collocationscollocations are known to be opaque that is their meaning often derives from the combination of the words and not from the meaning of the individual words themselvesas a result translation of collocations cannot be done on a wordbyword basis and some representation of collocations in both languages is needed if the system is to translate fluentlysecond collocations are domain dependentparticularly in technical domains the collocations differ from those in general useaccordingly the ability to automatically discover collocations for a given domain by using a new corpus as input to champollion would ease the work required to transfer an mt system to a new domainmultilingual systems are now being developed in addition to pure machine translation systemsthese systems also need access to bilingual phraseswe are currently developing a multilingual summarization system in which we will use the results from champollionan early version of this system produces short summaries of multiple news articles covering the same event using as input the templates produced by information extraction systems developed under the arpa message understanding programsince some information extraction systems such as general electric nltoolset already produce similar representations for japanese and english news articles the addition of an english summary generator will automatically allow for english summarization of japanesein addition we are planning to add a second language for the summarieswhile the output is not a direct translation of input articles collocations that appear frequently in the news articles will also appear in summariesthus a list of bilingual collocations would be useful for the summarization processinformation retrieval is another prospective applicationas shown in maarek and smadja and more recently in broglio et al the precision of information retrieval systems can be improved through the use of collocations in addition to the more traditional single word indexing unitsa collocation gives the context in which a given word was used which will help retrieve documents using the word with the same sense and thus improve precisionthe wellknown new mexico example in information retrieval describes an oftencountered problem when single word searches are employed searching for new and mexico independently will retrieve a multitude of documents that do not relate to new mexicoautomatically identifying and explicitly using collocations such as new mexico at search or indexing time can help solve this problemwe have licensed xtract to several sites that are using it to improve the accuracy of their retrieval or text categorization systemsa bilingual list of collocations could be used for the development of a multilingual information retrieval systemin cases where the database of texts includes documents written in multiple languages the search query need only be expressed in one languagethe bilingual collocations could be used to translate the query from the input language to other languages in the databaseanother potential application as demonstrated by dagan and church is machineaided human translationfor this scenario when a translator begins work on a collocation inside a translation editor the translation produced by champollion could be provided as a prompt giving the translator the opportunity to approve itin such cases it may be useful to provide the top several translations produced by champollion allowing the translator to choose the best as dagan and church dofinally 
champollion could also be used for computerassisted lexicographysince its output includes the translation of 1 to n word phrases champollion could be used to automatically translate lexiconswhile it could not translate sentences that are often used in dictionaries as examples it could be used for translation of both individual words and phrasesin this way a list of translated words could be produced automatically from a monolingual dictionary and filtered by a lexicographerchampollion is one of the first attempts at translating lexical constructions using statistical techniques and our work has several limitations which will be addressed in future workin this section we describe some of them and we give possible directions for future researchtranslating closed class wordsin the experiments described in this paper champollion produced only partial collocations in the target language because we eliminated closedclass words from our indicesthere are two reasons for eliminating such wordsfirst they are very frequent and appear in almost any context so that using them would blur our statisticsthe second reason is one of time and space efficiency since these words appear in many sentences in the corpus database it is economical to remove them from the indiceshowever this causes champollion to produce only partial collocations for example to cause havoc gets translated as semer0 desarrois2the position numbers indicate that a word is missing between the two french wordsthis word is the article le and the full collocation is semer le desarroiswe implemented an extension that checks the positions around the words of a rigid collocationquot note that for flexible collocations the words can occur in any order separated by any number of words and therefore it is difficult to check whether the same close class word is consistently usedour extension checks one word to the left and to the right of the collocation plus any gaps between wordsif the same preposition or article is found to occur in the same position in 90 of the sentences in which the rigid collocation occurs it is added to the outputnote that if champollion were to be used as part of machineassisted human translation another option would be to produce a ranked list of several prepositions or articles that are used in the corpus and let the translator choose the best optionthis extension improves the fluency of the translations tremendouslyfor example employment equity is translated as equite en matiere d emploi with prepositions in place of the empty slots shown in table 8 on page 27table 10 shows a variety of translations produced by this extensionwhile we have not yet completed a full evaluation of these results preliminary work using the evaluation of only one judge suggests that our results improve substantiallytools for the target languagetools in french such as a morphological analyzer a tagger a list of acronyms a robust parser and various lists of tagged words would be most helpful and would allow us to improve our resultsfor example a tagger for french would allow us to run xtract on the french part of the corpus and thus to translate from either french or english as inputin addition running xtract on the french part of the corpus would allow for independent confirmation of the proposed translations which should be french collocationssimilarly a morphological analyzer would allow us to produce richer results since several forms of the same word would be conflated increasing both the expected and the actual frequencies of the 
cooccurrence events this has been found empirically to have a positive effect in overall performance in other problems note that ignoring inflectional distinctions can sometimes have a detrimental effect if only particular forms of a word participate in a given collocationconsequently it might be beneficial to take into account both the distribution of the base form and the differences between the distributions of the various inflected formsin the current implementation of champollion we were restricted to using tools for only one of the two languages since at the time of implementation tools for french were not readily availablehowever from the above discussion it is clear that certain tools would improve the system performanceseparating corpusdependent translations from general oneschampollion identifies translations for the source collocations using the aligned corpora database as its entire knowledge of the two languagesconsequently sometimes the results are specific to the domain and seem peculiar when viewed in a more general contextfor example we have already mentioned that mr speaker was translated as monsieur le président which is obviously only valid for this domaincanadian family is another example it is often translated as famille this is an important feature of the system since in this way the sublanguage of the domain is employed for the translationhowever many of the collocations that champollion identifies are general domainindependent oneschampollion cannot make any distinction between domainspecific and general collocationswhat is clearly needed is a way to determine the generality of each produced translation as many translations found by champollion are of general use and could be directly applied to other domainsthis may be possible by intersecting the output of champollion on corpora from many different domainshandling low frequency collocationsthe statistics we used do not produce good results when the frequencies are lowthis shows up clearly when our evaluation results on the first two experiments are comparedrunning the collocation set c2 over the database db1 produced our worst results and this can be attributed to the low frequency in dbi of many collocations in c2recall that c2 was extracted from a different corpus from dbithis problem is due not only to the frequencies of the source collocations or of the words involved but also to the frequencies of their quotofficialquot translationsindeed while most collocations exhibit unique senses in a given domain sometimes a source collocation appearing multiple times in the corpus is not consistently translated into the same target collocation in the databasethis sampling problem which generally affects all statistical approaches was not addressed in the paperwe reduced the effects of low frequencies by purposefully limiting ourselves to source collocations of frequencies higher than 10 containing individual words with frequencies higher than 15analysis of the effects of our thresholdsvarious thresholds are used in champollion algorithm to reduce the search spacea threshold too low would significantly slow down the search as according to zipf law the number of terms occurring n times in a general english corpus is a decreasing function of n2unfortunately sometimes this filtering step causes champollion to miss a valid translationfor example one of the incorrect translations made by champollion is that important factor was translated into facteur alone instead of the proper translation facteur importantthe error is due 
to the fact that the french word important did not pass the first step of the algorithm as its dice coefficient with important factor was too lowimportant occurs a total of 858 times in the french part of the corpus and only 8 times in the right context whereas a minimum of 10 appearances is required to pass this stepalthough the theoretical analysis and simulation experiments of section 62 show that such cases of missing the correct translation are rare more work needs to be done in quantifying this phenomenonin particular experiments with actual corpus data should supplement the theoretical results furthermore more experimentation with the values of the thresholds needs to be done to locate the optimum tradeoff point between efficiency and accuracyan additional direction for future experiments is to vary the thresholds according to the size of the database corpus and the frequency of the collocation being translatedincorporating the length of the translation into the scorecurrently our scoring method only uses the lengths of candidate translations to break a tie in the similarity measureit seems however that longer translations should get a quotbonusquot for example using our scoring technique the correlation of the collocation official languages with the french word officielles is equal to 094 and the correlation with the french collocation langues officielles is 095our scoring only uses the relative frequencies of the events without taking into account that some of these events are composed of multiple single eventswe plan to refine our scoring method so that the length of the events is taken into accountusing nonparallel corporachampollion requires an aligned bilingual corpus as inputhowever finding bilingual corpora can be problematic in some domainsalthough organizations such as the united nations the european community and governments of countries with several official languages are big producers such corpora are still difficult to obtain for research purposeswhile aligned bilingual corpora will become more available in the future it would be helpful if we could relax the constraint for aligned databilingual corpora in the same domain which are not necessarily translations of each other are more easily availablefor example news agencies such as the associated press and reuters publish in several languagesnews stories often relate similar facts but they are not direct translations of one anothereven though the stories probably use equivalent terminology totally different techniques would be necessary to be able to use such quotnonalignablequot corpora as databasesultimately such techniques would be more useful than those currently used because they would be able to extract knowledge from noisy datawhile this is definitely a large research problem our research team at columbia university has begun work in this area that shows promise for noisy parallel corpora bilingual word correspondences extracted from nonparallel corpora with techniques such as those proposed by fung also look promisingwe have presented a method for translating collocations implemented in champollionthe ability to provide translations for collocations is important for three main reasonsfirst because they are opaque constructions they cannot be translated on a wordbyword basisinstead translations must be provided for the phrase as a wholesecond collocations are domain dependenteach domain includes a variety of phrases that have specific meanings and translations that apply only in the given domainfinally a quick 
look at a bilingual dictionary even for two widely studied languages such as english and french shows that correspondences between collocations in two languages are largely unexploredthus the ability to compile a set of translations for a new domain automatically will ultimately increase the portability of machine translation systemsby applying champollion to a corpus in a new domain translations for the domainspecific collocations can be automatically compiled and inaccurate results filtered by a native speaker of the target languagethe output of our system is a bilingual list of collocations that can be used in a variety of multilingual applicationsit is directly applicable to machine translation systems that use a transfer approach since such systems rely on correspondences between words and phrases of the source and target languagesfor interlingua systems identification of collocations and their translations provide a means of augmenting the interlinguasince such phrases cannot be translated compositionally they indicate where concepts representing such phrases must be added to the interlinguasuch bilingual phrases are also useful for other multilingual tasks including information retrieval of multilingual documents given a phrase in one language summarization in one language of texts in another and multilingual generationfinally we have carried out three evaluations of the system on three separate years of the hansards corpusthese evaluations indicate that champollion has a high rate of accuracy in the best case 78 of the french translations of valid english collocations were judged to be goodthis is a good score in comparison with evaluations carried out on full machine translation systemswe conjecture that by using statistical techniques to translate a particular type of construction known to be easily observable in language we can achieve better results than by applying the same technique to all constructions uniformlyour work is part of a paradigm of research that focuses on the development of tools using statistical analysis of text corporathis line of research aims at producing tools that satisfactorily handle relatively simple tasksthese tools can then be used by other systems to address more complex tasksfor example previous work has addressed lowlevel tasks such as tagging a freestyle corpus with partofspeech information aligning a bilingual corpus and producing a list of collocations while each of these tools is based on simple statistics and tackles elementary tasks we have demonstrated with our work on champollion that by combining them one can reach new levels of complexity in the automatic treatment of natural languagesthis work was supported jointly by the advanced research projects agency and the office of naval research under grant n0001489j1782 by the office of naval research under grant n000149510745 by the national science foundation under grant ger9024069 and by the new york state science and technology foundation under grants nysstfcat053 and nysstfcat013we wish to thank pascale fung and dragomir radev for serving as evaluators thanasis tsantilas for discussions relating to the averagecase complexity of champollion and the anonymous reviewers for providing useful comments on an earlier version of the paperwe also thank ofer wainberg for his excellent work on improving the efficiency of champollion and for adding the preposition extension and ken church and att bell laboratories for providing us with a prealigned hansards corpus
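As a concrete illustration of the Dice-based filtering discussed in the error analysis above, the following sketch shows how such a first-step filter might be expressed. This is not champollion's actual code: the function names and counting interface are invented for illustration, the 0.10 Dice cut-off is an assumed placeholder, and only the frequency thresholds (source collocations above 10, component words above 15, at least 10 co-occurrences) are taken from the text.

def dice(cooc, freq_colloc, freq_word):
    # Dice coefficient between a source collocation and a candidate target word,
    # computed from counts over the aligned corpus database.
    if freq_colloc == 0 or freq_word == 0:
        return 0.0
    return 2.0 * cooc / (freq_colloc + freq_word)

def eligible_source(colloc_freq, word_freqs, min_colloc=10, min_word=15):
    # Thresholds quoted in the text: only source collocations with frequency above
    # 10 whose individual words each occur more than 15 times are translated.
    return colloc_freq > min_colloc and all(f > min_word for f in word_freqs)

def passes_first_step(cooc, freq_colloc, freq_word, min_cooc=10, min_dice=0.10):
    # A candidate target word survives only if it co-occurs with the source
    # collocation at least min_cooc times (the minimum of 10 cited for the
    # "important factor" example) and its Dice score clears a cut-off;
    # the 0.10 value is illustrative, not champollion's.
    return cooc >= min_cooc and dice(cooc, freq_colloc, freq_word) >= min_dice

# The miss discussed above: French "important" occurs 858 times overall but only
# 8 times in the right context, so it is filtered out before scoring.  The
# collocation frequency of 120 is a placeholder, not a figure from the paper.
print(passes_first_step(cooc=8, freq_colloc=120, freq_word=858))   # False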
J96-1001
translating collocations for bilingual lexicons: a statistical approach. collocations are notoriously difficult for non-native speakers to translate, primarily because they are opaque and cannot be translated on a word-by-word basis. we describe a program named champollion which, given a pair of parallel corpora in two different languages and a list of collocations in one of them, automatically produces their translations. our goal is to provide a tool for compiling bilingual lexical information above the word level in multiple languages, for different domains. the algorithm we use is based on statistical methods and produces p-word translations of n-word collocations in which n and p need not be the same. for example, champollion translates make decision, employment equity, and stock market into prendre decision, equite en matiere d'emploi, and bourse, respectively. testing champollion on three years' worth of the hansards corpus yielded the french translations of 300 collocations for each year, evaluated at 73% accuracy on average. in this paper we describe the statistical measures used, the algorithm, and the implementation of champollion, presenting our results and evaluation. the relationship between pointwise mutual information and the dice coefficient is discussed in this work. we propose a corpus-based method to extract bilingual lexicons. we propose a statistical association measure, the dice coefficient, to deal with the problem of collocation translation.
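Since the summary points to the relationship between pointwise mutual information and the Dice coefficient, a minimal side-by-side sketch of the two association measures on the same counts may be useful; the counts in the example call are placeholders and the comparison is illustrative rather than the paper's derivation.

import math

def dice(cooc, freq_x, freq_y):
    # Dice coefficient: depends only on the three counts, not on corpus size.
    return 2.0 * cooc / (freq_x + freq_y)

def pmi(cooc, freq_x, freq_y, n):
    # Pointwise mutual information with maximum-likelihood estimates,
    # log2( p(x, y) / (p(x) * p(y)) ), where n is the number of aligned units.
    return math.log2((cooc / n) / ((freq_x / n) * (freq_y / n)))

# Unlike PMI, Dice is bounded between 0 and 1 and does not blow up for rare
# events, which is one commonly cited reason to prefer it for this task.
print(dice(8, 50, 60), pmi(8, 50, 60, 100000))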
a maximum entropy approach to natural language processing the concept of maximum entropy can be traced back along multiple threads to biblical times only recently however have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognition in this paper we describe a method for statistical modeling based on maximum entropy we present a maximumlikelihood approach for automatically constructing maximum entropy models and describe how to implement this approach efficiently using as examples several problems in natural language processing the concept of maximum entropy can be traced back along multiple threads to biblical timesonly recently however have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognitionin this paper we describe a method for statistical modeling based on maximum entropywe present a maximumlikelihood approach for automatically constructing maximum entropy models and describe how to implement this approach efficiently using as examples several problems in natural language processingstatistical modeling addresses the problem of constructing a stochastic model to predict the behavior of a random processin constructing this model we typically have at our disposal a sample of output from the processgiven this sample which represents an incomplete state of knowledge about the process the modeling problem is to parlay this knowledge into a representation of the processwe can then use this representation to make predictions about the future behavior about the processbaseball managers employ batting averages compiled from a history of atbats to gauge the likelihood that a player will succeed in his next appearance at the platethus informed they manipulate their lineups accordinglywall street speculators build models based on past stock price movements to predict tomorrow fluctuations and alter their portfolios to capitalize on the predicted futureat the other end of the pay scale reside natural language researchers who design language and acoustic models for use in speech recognition systems and related applicationsthe past few decades have witnessed significant progress toward increasing the predictive capacity of statistical models of natural languagein language modeling for instance bahl et al have used decision tree models and della pietra et al have used automatically inferred link grammars to model long range correlations in languagein parsing black et al have described how to extract grammatical rules from annotated text automatically and incorporate these rules into statistical models of grammarin speech recognition lucassen and mercer have introduced a technique for automatically discovering relevant features for the translation of word spelling to word pronunciationthese efforts while varied in specifics all confront two essential tasks of statistical modelingthe first task is to determine a set of statistics that captures the behavior of a random processgiven a set of statistics the second task is to corral these facts into an accurate model of the processa model capable of predicting the future output of the processthe first task is one of feature selection the second is one of model selectionin the following pages we present a unified approach to these two tasks based on the maximum entropy philosophyin section 2 we give an overview of the maximum entropy philosophy and work 
through a motivating examplein section 3 we describe the mathematical structure of maximum entropy models and give an efficient algorithm for estimating the parameters of such modelsin section 4 we discuss feature selection and present an automatic method for discovering facts about a process from a sample of output from the processwe then present a series of refinements to the method to make it practical to implementfinally in section 5 we describe the application of maximum entropy ideas to several tasks in stochastic language processing bilingual sense disambiguation word reordering and sentence segmentationwe introduce the concept of maximum entropy through a simple examplesuppose we wish to model an expert translator decisions concerning the proper french rendering of the english word inour model p of the expert decisions assigns to each french word or phrase f an estimate p of the probability that the expert would choose f as a translation of into guide us in developing p we collect a large sample of instances of the expert decisionsour goal is to extract a set of facts about the decisionmaking process from the sample that will aid us in constructing a model of this process one obvious clue we might glean from the sample is the list of allowed translationsfor example we might discover that the expert translator always chooses among the following five french phrases dans en a au cours de pendantwith this information in hand we can impose our first constraint on our model p this equation represents our first statistic of the process we can now proceed to search for a suitable model that obeys this equationof course there are an infinite number of models p for which this identity holdsone model satisfying the above equation is p 1 in other words the model always predicts dansanother model obeying this constraint predicts pendant with a probability of 12 and a with a probability of 12but both of these models offend our sensibilities knowing only that the expert always chose from among these five french phrases how can we justify either of these probability distributionseach seems to be making rather bold assumptions with no empirical justificationput another way these two models assume more than we actually know about the expert decisionmaking processall we know is that the expert chose exclusively from among these five french phrases given this the most intuitively appealing model is the following this model which allocates the total probability evenly among the five possible phrases is the most uniform model subject to our knowledgeit is not however the most uniform overall that model would grant an equal probability to every possible french phrasewe might hope to glean more clues about the expert decisions from our samplesuppose we notice that the expert chose either dims or en 30 of the timewe could apply this knowledge to update our model of the translation process by requiring that p satisfy two constraints once again there are many probability distributions consistent with these two constraintsin the absence of any other knowledge a reasonable choice for p is again the most uniformthat is the distribution which allocates its probability as evenly as possible subject to the constraints say we inspect the data once more and this time notice another interesting fact in half the cases the expert chose either dans or awe can incorporate this information into our model as a third constraint we can once again look for the most uniform p satisfying these constraints but now the choice is 
not as obviousas we have added complexity we have encountered two difficulties at oncefirst what exactly is meant by quotuniformquot and how can we measure the uniformity of a modelsecond having determined a suitable answer to these questions how do we go about finding the most uniform model subject to a set of constraints like those we have describedthe maximum entropy method answers both of these questions as we will demonstrate in the next few pagesintuitively the principle is simple model all that is known and assume nothing about that which is unknownin other words given a collection of facts choose a model consistent with all the facts but otherwise as uniform as possiblethis is precisely the approach we took in selecting our model p at each step in the above examplethe maximum entropy concept has a long historyadopting the least complex hypothesis possible is embodied in occam razor and even appears earlier in the bible and the writings of herotodus laplace might justly be considered the father of maximum entropy having enunciated the underlying theme 200 years ago in his quotprinciple of insufficient reasonquot when one has no information to distinguish between the probability of two events the best strategy is to consider them equally likely as e t jaynes a more recent pioneer of maximum entropy put it the fact that a certain probability distribution maximizes entropy subject to certain constraints representing our incomplete information is the fundamental property which justifies use of that distribution for inference it agrees with everything that is known but carefully avoids assuming anything that is not knownit is a transcription into mathematics of an ancient principle of wisdom we consider a random process that produces an output value y a member of a finite set yfor the translation example just considered the process generates a translation of the word in and the output y can be any word in the set dans en a au cours de pendantin generating y the process may be influenced by some contextual information x a member of a finite set xin the present example this information could include the words in the english sentence surrounding inour task is to construct a stochastic model that accurately represents the behavior of the random processsuch a model is a method of estimating the conditional probability that given a context x the process will output ywe will denote by p the probability that the model assigns to y in context xwith a slight abuse of notation we will also use p to denote the entire conditional probability distribution provided by the model with the interpretation that y and x are placeholders rather than specific instantiationsthe proper interpretation should be clear from the contextwe will denote by p the set of all conditional probability distributionsthus a model p is by definition just an element of p to study the process we observe the behavior of the random process for some time collecting a large number of samples in the example we have been considering each sample would consist of a phrase x containing the words surrounding in together with the translation y of in that the process producedfor now we can imagine that these training samples have been generated by a human expert who was presented with a number of random phrases containing in and asked to choose a good translation for eachwhen we discuss realworld applications in section 5 we will show how such samples can be automatically extracted from a bilingual corpuswe can summarize the training sample 
in terms of its empirical probability distribution 3 defined by number of times that occurs in the sample typically a particular pair will either not occur at all in the sample or will occur at most a few timesour goal is to construct a statistical model of the process that generated the training sample pthe building blocks of this model will be a set of statistics of the training samplein the current example we have employed several such statistics the frequency with which in translated to either dans or en was 310 the frequency with which it translated to either dans or au cours de was 12 and so onthese particular statistics were independent of the context but we could also consider statistics that depend on the conditioning information xfor instance we might notice that in the training sample if april is the word following in then the translation of in is en with frequency 910to express the fact that in translates as en when april is the following word we can introduce the indicator function the expected value off with respect to the empirical distribution 3 is exactly the statistic we are interested inwe denote this expected value by we can express any statistic of the sample as the expected value of an appropriate binaryvalued indicator function f we call such function a feature function or feature for short to denote both the value of f at a particular pair as well as the entire function f when we discover a statistic that we feel is useful we can acknowledge its importance by requiring that our model accord with itwe do this by constraining the expected value that the model assigns to the corresponding feature function f the expected value off with respect to the model p is where 77 is the empirical distribution of x in the training samplewe constrain this expected value to be the same as the expected value of f in the training samplethat is we require we call the requirement a constraint equation or simply a constraintby restricting attention to those models p for which holds we are eliminating from consideration those models that do not agree with the training sample on how often the output of the process should exhibit the feature f to sum up so far we now have a means of representing statistical phenomena inherent in a sample of data and also a means of requiring that our model of the process exhibit these phenomena 1one final note about features and constraints bears repeating although the words quotfeaturequot and quotconstraintquot are often used interchangeably in discussions of maximum entropy we will be vigilant in distinguishing the two and urge the reader to do likewisea feature is a binaryvalued function of a constraint is an equation between the expected value of the feature function in the model and its expected value in the training datasuppose that we are given n feature functions f which determine statistics we feel are important in modeling the processwe would like our model to accord with these statisticsthat is we would like p to lie in the subset c of p defined by figure 1 provides a geometric interpretation of this setuphere p is the space of all probability distributions on three points sometimes called a simplexif we impose no constraints then all probability models are allowableimposing one linear constraint ci restricts us to those p e p that lie on the region defined by ci as shown in a second linear constraint could determine p exactly if the two constraints are satisfiable this is the case in where the intersection of ci and c2 is nonemptyalternatively 
a second linear constraint could be inconsistent with the firstfor instance the first might require that the probability of the first point is 13 and the second that the probability of the third point is 34this is shown in in the present setting however the linear constraints are extracted from the training sample and cannot by construction be inconsistentfurthermore the linear constraints in our applications will not even come close to determining p e p uniquely as they do in instead the set c c1 n c n n c of allowable models will be infiniteamong the models p e c the maximum entropy philosophy dictates that we select the most uniform distributionbut now we face a question left open in section 2 what does quotuniformquot meana mathematical measure of the uniformity of a conditional distribution p is provided by the conditional entropyl four different scenarios in constrained optimizationp represents the space of all probability distributionsin no constraints are applied and all p e p are allowablein the constraint c1 narrows the set of allowable models to those that lie on the line defined by the linear constraintin two consistent constraints c1 and c2 define a single model p e c1 n c2in the two constraints are inconsistent no p e p can satisfy them boththe entropy is bounded from below by zero the entropy of a model with no uncertainty at all and from above by log y the entropy of the uniform distribution over all possible lyi values of ywith this definition in hand we are ready to present the principle of maximum entropyto select a model from a set c of allowed probability distributions choose the model p e c with maximum entropy h it can be shown that p is always welldefined that is there is always a unique model p with maximum entropy in any constrained set c the maximum entropy principle presents us with a problem in constrained optimization find the p e c that maximizes hin simple cases we can find the solution to this problem analyticallythis was true for the example presented in section 2 when we imposed the first two constraints on p unfortunately the solution to the general problem of maximum entropy cannot be written explicitly and we need a more indirect approachto address the general problem we apply the method of lagrange multipliers from the theory of constrained optimizationthe relevant steps are outlined here the reader is referred to della pietra et al for a more thorough discussion of constrained optimization as applied to maximum entropywe call w the dual functionthe functions pa and may be calculated explicitly using simple calculuswe find at first glance it is not clear what these machinations achievehowever a fundamental principle in the theory of lagrange multipliers called generically the kuhntucker theorem asserts that under suitable assumptions the primal and dual problems are in fact closely relatedthis is the case in the present situationalthough a detailed account of this relationship is beyond the scope of this paper it is easy to state the final result suppose that a is the solution of the dual problemthen px is the solution of the primal problem that is pain other words the maximum entropy model subject to the constraints c has the parametric form pk of where the parameter values a can be determined by maximizing the dual function wthe most important practical consequence of this result is that any algorithm for finding the maximum a of w can be used to find the maximum p of h for p e c the loglikelihood lp of the empirical distribution p as predicted by a 
model p is defined by3 it is easy to check that the dual function lii of the previous section is in fact just the loglikelihood for the exponential model pa that is with this interpretation the result of the previous section can be rephrased as the model p e c with maximum entropy is the model in the parametric family pa that maximizes the likelihood of the training sample 3this result provides an added justification for the maximum entropy principle if the notion of selecting a model p on the basis of maximum entropy is not compelling enough it so happens that this same p is also the model that can best account for the training sample from among all models of the same parametric form table 1 summarizes the primaldual framework we have established2 it might be that the dual function w does not achieve its maximum at any finite ain this case the maximum entropy model will not have the form pa for any ahowever it will be the limit of models of this form as indicated by the following result whose proof we omit the duality of maximum entropy and maximum likelihood is an example of the more general phenomenon of duality in constrained optimization problem argmaxpec h argmaxx w description maximum entropy maximum likelihood type of search constrained optimization unconstrained optimization search domain p e c realvalued vectors al a2 solution a kuhntucker theorem p p for all but the most simple problems the a that maximize xi cannot be found analyticallyinstead we must resort to numerical methodsfrom the perspective of numerical optimization the function 4f is well behaved since it is smooth and convex11 in a consequently a variety of numerical methods can be used to calculate aone simple method is coordinatewise ascent in which a is computed by iteratively maximizing 41 one coordinate at a timewhen applied to the maximum entropy problem this technique yields the popular brown algorithm other general purpose methods that can be used to maximize t include gradient ascent and conjugate gradientan optimization method specifically tailored to the maximum entropy problem is the iterative scaling algorithm of darroch and ratcliff we present here a version of this algorithm specifically designed for the problem at hand a proof of the monotonicity and convergence of the algorithm is given in della pietra et al the algorithm is applicable whenever the feature functions f are nonnegative this is of course true for the binaryvalued feature functions we are considering herethe algorithm generalizes the darrochratcliff procedure which requires in addition to the nonnegativity that the feature functions satisfy ei f 1 for all x yinput feature functions fif2 fn empirical distribution 19 output optimal parameter values a optimal model n where the key step in the algorithm is step the computation of the increments a a that solve if f is constant m for all x y say then aa is given explicitly as if f is not constant then aa must be computed numericallya simple and effective way of doing this is by newton methodthis method computes the solution a of an equation g 0 iteratively by the recurrence with an appropriate choice for ao and suitable attention paid to the domain of gearlier we divided the statistical modeling problem into two steps finding appropriate facts about the data and incorporating these facts into the modelup to this point we have proceeded by assuming that the first task was somehow performed for useven in the simple example of section 2 we did not explicitly state how we selected those particular 
constraintsthat is why is the fact that dans or a was chosen by the expert translator 50 of the time any more important than countless other facts contained in the datain fact the principle of maximum entropy does not directly concern itself with the issue of feature selection it merely provides a recipe for combining constraints into a modelbut the feature selection problem is critical since the universe of possible constraints is typically in the thousands or even millionsin this section we introduce a method for automatically selecting the features to be included in a maximum entropy model and then offer a series of refinements to ease the computational burdenwe begin by specifying a large collection y of candidate featureswe do not require a priori that these features are actually relevant or usefulinstead we let the pool be as large as practically possibleonly a small subset of this collection of features will eventually be employed in our final modelif we had a training sample of infinite size we could determine the quottruequot expected value for a candidate feature f e t simply by computing the fraction of events in the sample for which f 1in reallife applications however we are provided with only a small sample of n events which cannot be trusted to represent the process fully and accuratelyspecifically we cannot expect that for every feature f ef the estimate of 3 we derive from this sample will be close to its value in the limit as n grows largeemploying a larger sample of data from the same process might result in different estimates of p for many candidate featureswe would like to include in the model only a subset s of the full set of candidate features f we will call s the set of active featuresthe choice of s must capture as much information about the random process as possible yet only include features whose expected values can be reliably estimateda nested sequence of subsets c d c d c of p corresponding to increasingly large sets of features si c 52 c 3to find s we adopt an incremental approach to feature selection similar to the strategy used for growing decision trees the idea is to build up s by successively adding featuresthe choice of feature to add at each step is determined by the training datalet us denote the set of models determined by the feature set s as cquotaddingquot a feature f is shorthand for requiring that the set of allowable models all satisfy the equality f9 ponly some members of c will satisfy this equality the ones that do we denote by cthus each time a candidate feature is added to s another linear constraint is imposed on the space c of models allowed by the features in s as a result c shrinks the model 13 in c with the greatest entropy reflects everincreasing knowledge and thus hopefully becomes a more accurate representation of the processthis narrowing of the space of permissible models was represented in figure 1 by a series of intersecting lines in a probability simplexperhaps more intuitively we could represent it by a series of nested subsets of p as in figure 2the basic incremental growth procedure may be outlined as followsevery stage of the process is characterized by a set of active features s these determine a space of models by adding feature f to s we obtain a new set of active features s you f following this set of features determines a set of models ce fp e p i p 3 for all fesu the optimal model in this space of models is feature f e f which maximizes the gain al one issue left unaddressed by this algorithm is the termination 
conditionobviously we would like a condition which applies exactly when all the quotusefulquot features have been selectedone reasonable stopping criterion is to subject each proposed feature to crossvalidation on a sample of data withheld from the initial data setif the feature does not lead to an increase in likelihood of the withheld sample of data the feature is discardedwe will have more to say about the stopping criterion in section 53algorithm 2 is not a practical method for incremental feature selectionfor each candidate feature f e t considered in step 2 we must compute the maximum entropy model p a task that is computationally costly even with the efficient iterative scaling algorithm introduced earlierwe therefore introduce a modification to the algorithm making it greedy but much more feasiblewe replace the computation of the gain al of a feature f with an approximation which we will denote by alrecall that a model ps has a set of parameters a one for each feature in s the model p contains this set of parameters plus a single new parameter a corresponding to i4 given this structure we might hope that the optimal values for a do not change as the feature f is adjoined to s were this the case imposing an additional constraint would require only optimizing the single parameter a to maximize the likelihoodunfortunately when a new constraint is imposed the optimal values of all parameters changehowever to make the featureranking computation tractable we make the approximation that the addition of a feature f affects only a leaving the avalues associated with other features unchangedthat is when determining the gain off over the model ps we pretend that the best model containing features s you f has the form the only parameter distinguishing models of the form is aamong these models we are interested in the one that maximizes the approximate gain we will denote the gain of this model by and the optimal model by suf argmax gs j 19s1 despite the rather unwieldy notation the idea is simplecomputing the approximate gain in likelihood from adding feature f to ps has been reduced to a simple onedimensional optimization problem over the single parameter a which can be solved by any popular linesearch technique such as newton methodthis yields a great savings in computational complexity over computing the exact gain an ndimensional the likelihood l is a convex function of its parametersif we start from a oneconstraint model whose optimal parameter value is a ao and consider the increase in lp from adjoining a second constraint with the parameter a the exact answer requires a search over we can simplify this task by holding a ao constant and performing a line search over the possible values of the new parameter ain the darkened line represents the search space we restrict attention toin we show the reduced problem a line search over a optimization problem requiring more sophisticated methods such as conjugate gradientbut the savings comes at a price for any particular feature f we are probably underestimating its gain and there is a reasonable chance that we will select a feature f whose approximate gain al was highest and pass over the feature f with maximal gain al a graphical representation of this approximation is provided in figure 3here the loglikelihood is represented as an arbitrary convex function over two parameters a corresponds to the quotoldquot parameter and a to the quotnewquot parameterholding a fixed and adjusting a to maximize the loglikelihood involves a search over the 
darkened line rather than a search over the entire space of the actual algorithms along with the appropriate mathematical framework are presented in the appendixin the next few pages we discuss several applications of maximum entropy modeling within candide a fully automatic frenchtoenglish machine translation system under development at ibmover the past few years we have used candide as a test bed for exploring the efficacy of various techniques in modeling problems arising in machine translationwe begin in section 51 with a review of the general theory of statistical translation describing in some detail the models employed in candidein section 52 we describe how we have applied maximum entropy modeling to predict the french translation of an english word in contextin section 53 we describe maximum entropy models that predict differences between french word order and english word orderin section 54 we describe a maximum entropy model that predicts how to divide a french sentence into short segments that can be translated sequentiallyalignment of a frenchenglish sentence pairthe subscripts give the position of each word in its sentencehere al 1 a2 2 a3 a4 3 a5 4 and a6 5when presented with a french sentence f candide task is to find the english sentence e which is most likely given f candide estimates pthe probability that a string e of english words is a wellformed english sentenceusing a parametric model of the english language commonly referred to as a language modelthe system estimates pthe probability that a french sentence f is a translation of eusing a parametric model of the process of englishtofrench translation known as a translation modelthese two models plus a search strategy for finding the e that maximizes for some f comprise the engine of the translation systemwe now briefly describe the translation model for the probability p a more thorough account is provided in brown et al we imagine that an english sentence e generates a french sentence f in two stepsfirst each word in e independently generates zero or more french wordsthese words are then ordered to give a french sentence f we denote the ith word of e by e and the jth word of f by yiwe denote the number of words in the sentence e by 1e1 and the number of words in the sentence f by if 1the generative process yields not only the french sentence f but also an association of the words of f with the words of e we call this association an alignment and denote it by aan alignment a is parametrized by a sequence of ifi numbers al with 1 j ak al in other words the words to the left of the french word yi are generated by words to the left of the english word eal and the words to the right of yi are generated by words to the right of eain the alignment of figure 4 for example there are rifts at positions j 1 2 4 5 in the french sentenceone visual method of determining whether a rift occurs after the french word j is to try to trace a line from the last letter of yj up to the last letter of eai if the line can be drawn without intersecting any alignment lines position f is a riftusing our definition of rifts we can redefine a safe segmentation as one in which the segment boundaries are located only at riftsfigure 7 illustrates an unsafe segmentation in which a segment boundary lies between a and mange where there is no riftfigure 8 on the other hand illustrates a safe segmentationthe reader will notice that a safe segmentation does not necessarily result in semantically coherent segments mes and devoirs are certainly part of one 
logical unit yet are separated in this safe segmentationonce such a safe segmentation has been applied to the french sentence we can make the assumption while searching for the appropriate english translation that no word in the translated english sentence will have to account for french words located in multiple segmentsdisallowing interexample of a safe segmentation segment alignments dramatically reduces the scale of the computation involved in generating a translation particularly for large sentenceswe can consider each segment sequentially while generating the translation working from left to right in the french sentencewe now describe a maximum entropy model that assigns to each location in a french sentence a score that is a measure of the safety in cutting the sentence at that locationwe begin as in the word translation problem with a training sample of englishfrench sentence pairs randomly extracted from the hansard corpusfor each sentence pair we use the basic translation model to compute the viterbi alignment a between e and f we also use a stochastic partofspeech tagger as described in merialdo to label each word in f with its part of speechfor each position j in f we then construct a training eventthe value y is rift if a rift belongs at position j and is norift otherwisethe context information x is reminiscent of that employed in the word translation application described earlierit includes a sixword window of french words three to the left of yi and three to the right of ylit also includes the partofspeech tags for these words and the classes of these words as derived from a mutualinformation clustering scheme described in brown et al the complete pair is illustrated in figure 9in creating p we are modeling the decisions of an expert french segmenterwe have a sample of his work in the training sample j3 and we measure the worth of a model by the loglikelihood li3during the iterative modelgrowing procedure the algorithm selects constraints on the basis of how much they increase this objective functionas the algorithm proceeds more and more constraints are imposed on the model p bringing it into everstricter compliance with the empirical data pquot this is useful to a point insofar as the empirical data embodies the expert knowledge of the french segmenter we would like to incorporate this knowledge into a modelbut the data contains only so much expert knowledge the algorithm should terminate when it has extracted this knowledgeotherwise the model p will begin to fit itself to quirks in the empirical dataa standard approach in statistical modeling to avoid the problem of overfitting the training data is to employ crossvalidation techniquesseparate the training data p into a training portion pr and a withheld portion phuse only pr in the modelgrowing process that is select features based on how much they increase the likelihood l as the algorithm progresses lip thus increases monotonicallyas long as each new constraint imposed allows p to better account for the random process that generated both pr and p h the quantity lph also increasesat the point when overfitting begins however the new constraints no longer help p model the random process but instead require p to model the noise in the sample pr itselfat this point lp continues to rise but li no longer doesit is at this point that the algorithm should terminatefigure 10 illustrates the change in loglikelihood of training data 11 and withheld data lphhad the algorithm terminated when the loglikelihood of the withheld data 
stopped increasing the final model p would contain slightly less than 40 featureswe have employed this segmenting model as a component in a frenchenglish machine translation system in the following manner the model assigns to each position in the french sentence a score p which is a measure of how appropriate a split would be at that locationa dynamic programming algorithm then selects given the quotappropriatenessquot score at each position and the requirement that no segment may contain more than 10 words an optimal splitting of the sentencefigure 11 shows the system segmentation of four sentences selected at random from the hansard datawe remind the reader to keep in mind when evaluating figure 11 that the segmenter task is not to produce logically coherent blocks of words but to divide the sentence into blocks which can be translated sequentially from left to righttranslating a french sentence into english involves not only selecting appropriate english renderings of the words in the french sentence but also selecting an ordering for the english wordsthis order is often very different from the french word orderone way candide captures wordorder differences in the two languages is to allow for alignments with crossing linesin addition candide performs during a preprocessing stage a reordering step that shuffles the words in the input french sentence into an order more closely resembling english word orderone component of this word reordering step deals with french phrases which have the noun de noun formfor some noun de noun phrases the best english translation is nearly word for word conflit dinteret for example is almost always rendered as conflict of interestfor other phrases however the best translation is obtained by interchanging the two nouns and dropping the dethe french phrase taux dinteret for example is best rendered as interest ratetable 7 gives several examples of noun de noun phrases together with their most appropriate english translationsin this section we describe a maximum entropy model that given a french noun de noun phrase estimates the probability that the best english translation involves an interchange of the two nounswe begin with a sample of englishfrench sentence pairs randomly extracted from the hansard corpus such that f contains a noun de noun phrasefor each sentence pair we use the basic translation model to compute the viterbi alignment a between the words in e and f using a we construct an training event as follows we let the context x be the pair of french nouns we let y be nointerchange if the english translation is a wordforword translation of the french phrase and y interchange if the order of the nouns in the english and french phrases are interchangedwe define candidate features based upon the template features shown in table 8in this table the symbol 0 is a placeholder for either interchange or nointerchange and the symbols 01 and 02 are placeholders for possible french wordsif there are iva total french words there are 21vf1 possible features of templates 1 and 2 and 21vi2 features of template 3template 1 features consider only the left nounwe expect these features to be relevant when the decision of whether to interchange the nouns is influenced by the identity of the left nounfor example including the template 1 feature gives the model sensitivity to the fact that the nouns in french noun de noun phrases beginning with systeme are more likely to be interchanged in the english translationsimilarly including the template 1 feature gives the model 
sensitivity to the fact that french noun de noun phrases beginning with mois such as mois de mai are more likely to be translated word for wordtemplate 3 features are useful in dealing with translating noun de noun phrases in which the interchange decision is influenced by both nounsfor example noun de noun phrases ending in interet are sometimes translated word for word as in conflit dinteret and are sometimes interchanged as in taux dinteret we used the featureselection algorithm of section 4 to construct a maximum entropy model from candidate features derived from templates 1 2 and 3the model was grown on 10000 training events randomly selected from the hansard corpusthe final model contained 358 constraintsto test the model we constructed a noun de noun wordreordering module which interchanges the order of the nouns if p 05 and keeps the order the same otherwisetable 9 compares performance on a suite of test data against a baseline noun de noun reordering module that never swaps the word orderpredictions of the noun de noun interchange model on phrases selected from a corpus unseen during the training processtable 12 shows some randomlychosen noun de noun phrases extracted from this test suite along with p the probability assigned by the model to inversionon the right are phrases such as saison dhiver for which the model strongly predicted an inversionon the left are phrases the model strongly prefers not to interchange such as somme dargent abus de privilege and chambre de commerceperhaps most intriguing are those phrases that lie in the middle such as faux dinflation which can translate either to inflation rate or rate of inflationwe began by introducing the building blocks of maximum entropy modelingrealvalued features and constraints built from these featureswe then discussed the maximum entropy principlethis principle instructs us to choose among all the models consistent with the constraints the model with the greatest entropywe observed that this model was a member of an exponential family with one adjustable parameter for each constraintthe optimal values of these parameters are obtained by maximizing the likelihood of the training datathus two different philosophical approaches maximum entropy and maximum likelihoodyield the same result the model with the greatest entropy consistent with the constraints is the same as the exponential model which best predicts the sample of datawe next discussed algorithms for constructing maximum entropy models concentrating our attention on the two main problems facing wouldbe modelers selecting a set of features to include in a model and computing the parameters of a model containing these featuresthe general featureselection process is too slow in practice and we presented several techniques for making the algorithm feasiblein the second part of this paper we described several applications of our algorithms concerning modeling tasks arising in candide an automatic machine translation system under development at ibmthese applications demonstrate the efficacy of maximum entropy techniques for performing contextsensitive modelingthe authors wish to thank harry printz and john lafferty for suggestions and comments on a preliminary draft of this paper and jerome bellegarda for providing expert french knowledge
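As a concrete illustration of the parametric form and the iterative scaling step described above, the sketch below implements a simplified Generalized Iterative Scaling trainer for a conditional model p(y|x) proportional to exp(sum_i lambda_i f_i(x, y)). It assumes the special case mentioned in the text where the feature sum is constant, and it uses the maximum observed feature sum as M without the usual correction feature, so it should be read as an approximation; the data layout and function names are assumptions, not candide's implementation.

import math

def train_gis(samples, features, labels, iterations=100):
    # samples:  list of (x, y) training events, x any hashable context
    # features: list of binary feature functions f(x, y) -> 0 or 1
    # labels:   list of possible outputs y
    n = len(features)
    # Constant feature sum assumed by GIS; taken here as the maximum observed sum,
    # without a correction feature, so the update is only approximate.
    M = max(sum(f(x, y) for f in features) for x, _ in samples for y in labels) or 1
    lambdas = [0.0] * n

    # Empirical expectation of each feature under the training sample.
    emp = [sum(f(x, y) for x, y in samples) / len(samples) for f in features]

    def probs(x):
        # Conditional distribution p_lambda(y | x) in the exponential form.
        scores = [math.exp(sum(l * f(x, y) for l, f in zip(lambdas, features)))
                  for y in labels]
        z = sum(scores)
        return [s / z for s in scores]

    for _ in range(iterations):
        # Model expectation of each feature, averaging over sampled contexts.
        exp_model = [0.0] * n
        for x, _ in samples:
            p = probs(x)
            for i, f in enumerate(features):
                exp_model[i] += sum(p[j] * f(x, y) for j, y in enumerate(labels))
        exp_model = [e / len(samples) for e in exp_model]
        # GIS update: delta_i = (1 / M) * log(empirical / model expectation).
        for i in range(n):
            if emp[i] > 0 and exp_model[i] > 0:
                lambdas[i] += (1.0 / M) * math.log(emp[i] / exp_model[i])
    return lambdas, probs

With binary features instantiated from templates like those in table 8, the same kind of trainer could in principle drive the noun de noun interchange decision, although the models reported in the paper were of course built with the authors' own tools.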
J96-1002
a maximum entropy approach to natural language processing. the concept of maximum entropy can be traced back along multiple threads to biblical times. only recently, however, have computers become powerful enough to permit the wide-scale application of this concept to real-world problems in statistical estimation and pattern recognition. in this paper we describe a method for statistical modeling based on maximum entropy. we present a maximum-likelihood approach for automatically constructing maximum entropy models and describe how to implement this approach efficiently, using as examples several problems in natural language processing. we propose a gain-informed selection method.
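The gain-informed selection method mentioned in the summary ranks each candidate feature by how much the log-likelihood improves when only the single new parameter is adjusted, holding the current model fixed. The sketch below stands in for that procedure: it replaces the Newton line search of the paper with a crude grid search over alpha and assumes the current model is exposed as a function q(x) returning a probability for each label; both choices, and all names, are illustrative.

import math

def approx_gain(samples, labels, q, g, alphas=None):
    # samples: list of (x, y) events; q(x) -> dict mapping each label to the
    # current model probability p(y|x); g: candidate binary feature g(x, y) -> 0/1.
    if alphas is None:
        alphas = [i / 10.0 for i in range(-30, 31)]   # grid over the new parameter

    def loglik(alpha):
        # Average log-likelihood of the tilted model, which multiplies the current
        # model by exp(alpha * g(x, y)) and renormalizes over the labels.
        total = 0.0
        for x, y in samples:
            base = q(x)
            z = sum(base[v] * math.exp(alpha * g(x, v)) for v in labels)
            total += math.log(base[y] * math.exp(alpha * g(x, y)) / z)
        return total / len(samples)

    base_ll = loglik(0.0)                  # likelihood of the unmodified model
    best_alpha = max(alphas, key=loglik)
    return loglik(best_alpha) - base_ll, best_alpha

def select_next_feature(samples, labels, q, candidates):
    # Greedy step: return the candidate feature with the largest approximate gain.
    scored = [(approx_gain(samples, labels, q, g)[0], i)
              for i, g in enumerate(candidates)]
    gain, index = max(scored)
    return candidates[index], gain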
assessing agreement on classification tasks the kappa statistic currently computational linguists and cognitive scientists working in the area of discourse and dialogue argue that their subjective judgments are reliable using several different statistics none of which are easily interpretable or comparable to each other meanwhile researchers in content analysis have already experienced the same difficulties and come up with a solution in the kappa statistic we discuss what is wrong with reliability measures as they are currently used for discourse and dialogue work in computational linguistics and cognitive science and argue that we would be better off as afield adopting techniques from content analysis currently computational linguists and cognitive scientists working in the area of discourse and dialogue argue that their subjective judgments are reliable using several different statistics none of which are easily interpretable or comparable to each othermeanwhile researchers in content analysis have already experienced the same difficulties and come up with a solution in the kappa statisticwe discuss what is wrong with reliability measures as they are currently used for discourse and dialogue work in computational linguistics and cognitive science and argue that we would be better off as afield adopting techniques from content analysiscomputational linguistic and cognitive science work on discourse and dialogue relies on subjective judgmentsfor instance much current research on discourse phenomena distinguishes between behaviors which tend to occur at or around discourse segment boundaries and those which do not although in some cases discourse segments are defined automatically more usually discourse segments are defined subjectively based on the intentional structure of the discourse and then other phenomena are related to themat one time it was considered sufficient when working with such judgments to show examples based on the authors interpretation research was judged according to whether or not the reader found the explanation plausiblenow researchers are beginning to require evidence that people besides the authors themselves can understand and reliably make the judgments underlying the researchthis is a reasonable requirement because if researchers cannot even show that people can agree about the judgments on which their research is based then there is no chance of replicating the research resultsunfortunately as a field we have not yet come to agreement about how to show reliability of judgmentsfor instance consider the following arguments for reliabilitywe have chosen these examples both for the clarity of their arguments and because taken as a set they introduce the full range of issues we wish to discuss possible to mark conversational move boundaries cite separately for each of three naive coders the ratio of the number of times they agreed with an quotexpertquot coder about the existence of a boundary over the number of times either the naive coder or the expert marked a boundarythey do not describe any restrictions on possible boundary sitesalthough and kid use of differ slightly from litman and hirschberg use of and in clearly designating one coder as an quotexpertquot all of these studies have n coders place some kind of units into m exclusive categoriesnote that the cases of testing for the existence of a boundary can be treated as coding quotyesquot and quotnoquot categories for each of the possible boundary sites this treatment is used by measures and but not by 
measure all four approaches seem reasonable when taken at face valuehowever the four measures of reliability bear no relationship to each otherworse yet since none of them take into account the level of agreement one would expect coders to reach by chance none of them are interpretable even on their ownwe first explain what effect chance expected agreement has on each of these measures and then argue that we should adopt the kappa statistic as a uniform measure of reliabilitymeasure seems a natural choice when there are two coders and there are several possible extensions when there are more coders including citing separate agreement figures for each important pairing counting a unit as agreed only if all coders agree on it or measuring one agreement over all possible pairs of coders thrown in togethertaking just the twocoder case the amount of agreement we would expect coders to reach by chance depends on the number and relative proportions of the categories used by the codersfor instance consider what happens when the coders randomly place units into categories instead of using an established coding schemeif there are two categories occurring in equal proportions on average the coders would agree with each other half of the time each time the second coder makes a choice there is a fiftyfifty chance of coming up with the same category as the first coderif instead the two coders were to use four categories in equal proportions we would expect them to agree 25 of the time and if both coders were to use one of two categories but use one of the categories 95 of the time we would expect them to agree 905 of the time this makes it impossible to interpret raw agreement figures using measure this same problem affects all of the possible ways of extending measure to more than two codersnow consider measure which has an advantage over measure when there is a pool of coders none of whom should be distinguished in that it produces one figure that sums reliability over all coder pairsmeasure still falls foul of the same problem with expected chance agreement as measure because it does not take into account the number of categories occurring in the coding schememeasure is a different approach to measuring over multiple undifferentiated codersnote that although passonneau and litman are looking at the presence or absence of discourse segment boundaries measure takes into account agreement that a prosodic phrase boundary is not a discourse segment boundary and therefore treats the problem as a twocategory distinctionmeasure falls foul of the same basic problem with chance agreement as measures and but in addition the statistic itself guarantees at least 50 agreement by only pairing off coders against the majority opinionit also introduces an quotexpertquot coder by the back door in assuming that the majority is always right although this stance is somewhat at odds with passonneau and litman subsequent assessment of a boundary strength from one to seven based on the number of coders who noticed itmeasure looks at almost exactly the same type of problem as measure the presence or absence of some kind of boundaryhowever since one coder is explicitly designated as an quotexpertquot it does not treat the problem as a twocategory distinction but looks only at cases where either coder marked a boundary as presentwithout knowing the density of conversational move boundaries in the corpus this makes it difficult to assess how well the coders agreed on the absence of boundaries or to compare measures and in 
addition note that since false positives and missed negatives are rolled together in the denominator of the figure this measure does not really distinguish expert and naive coder roles as much as it mightnonetheless this style of measure does have some advantages over the other measures since those measures produce artificially high agreement figures when one category of a set predominates as is the case with boundary judgments one would expect their results to be high under any circumstances whereas this style of measure is not affected by the density of boundariesso far we have shown that all four of these measures produce figures that are at best uninterpretable and at worst misleadingkid make no comment about the meaning of their figures other than to say that the amount of agreement they show is reasonable silverman et al simply point out that where figures are calculated over different numbers of categories they are not comparableon the other hand passonneau and litman note that their figures are not properly interpretable and attempt to overcome this failing to some extent by showing that the agreement which they have obtained at least significantly differs from random agreementtheir method for showing this is complex and of no concern to us here since all it tells us is that it is safe to assume that the coders were not coding randomlyreassuring but no guarantee of reliabilityit is more important to ask how different the results are from random and whether or not the data produced by coding is too noisy to use for the purpose for which it was collectedthe concerns of these researchers are largely the same as those in the field of content analysis which has been through the same problems as we are currently facing and in which strong arguments have been made for using the kappa coefficient of agreement as a measure of reliabilitythe kappa coefficient measures pairwise agreement among a set of coders making category judgments correcting for expected chance agreement k = (p(a) - p(e)) / (1 - p(e)) where p(a) is the proportion of times that the coders agree and p(e) is the proportion of times that we would expect them to agree by chance calculated along the lines of the intuitive argument presented abovewhen there is no agreement other than that which would be expected by chance k is zerowhen there is total agreement k is one a short worked sketch of this computation is given at the end of this textit is possible and sometimes useful to test whether or not k is significantly different from chance but more importantly interpretation of the scale of agreement is possiblekrippendorff discusses what constitutes an acceptable level of agreement while giving the caveat that it depends entirely on what one intends to do with the codingfor instance he claims that finding associations between two variables that both rely on coding schemes with k < .7 is often impossible and says that content analysis researchers generally think of k > .8 as good reliability with .67 < k < .8 allowing tentative conclusions to be drawnwe would add two further caveatsfirst although kappa addresses many of the problems we have been struggling with as a field in order to compare k across studies the underlying assumptions governing the calculation of chance expected agreement still require the units over which coding is performed to be chosen sensibly and comparablywhere no sensible choice of unit is available pretheoretically measure may still be preferredsecondly coding discourse and dialogue phenomena and especially coding segment boundaries may be inherently more difficult than many previous types of content analysis krippendorff alpha is more general than siegel and castellan k in that krippendorff extends the argument from category data to interval and ratio scales this extension might be useful for instance for judging the reliability of tobi break index coding since some researchers treat these codes as inherently scalar krippendorff alpha and siegel and castellan k differ slightly when used on category judgments in the assumptions under which expected agreement is calculatedhere we use siegel and castellan k because they explain their statistic more clearly but the value of alpha is so closely related especially under the usual expectations for reliability studies that krippendorff statements about alpha hold and we conflate the two under the more general name "kappa" the advantages and disadvantages of different forms and extensions of kappa have been discussed in many fields but especially in medicine see for example berry goldman kraemer soeken and prescott dividing newspaper articles based on subject matterwhether we have reached a reasonable level of agreement in our work as a field remains to be seen our point here is merely that if as a community we adopt clearer statistics we will be able to compare results in a standard way across different coding schemes and experiments and to evaluate current developmentsand that will illuminate both our individual results and the way forwardin assessing the amount of agreement among coders of category distinctions the kappa statistic normalizes for the amount of expected chance agreement and allows a single measure to be calculated over multiple codersthis makes it applicable to the studies we have described and more besideshowever we have yet to discuss the role of expert coders in such studieskid designate one particular coder as the expertpassonneau and litman have only naive coders but in essence have an expert opinion available on each unit classified in terms of the majority opinionsilverman et al treat all coders indistinguishably although they do build an interesting argument about how agreement levels shift when a number of lessexperienced transcribers are added to a pool of highly experienced oneswe would argue that in subjective codings such as these there are no real expertswe concur with krippendorff that what counts is how totally naive coders manage based on written instructionscomparing naive and expert coding as kid do can be a useful exercise but rather than assessing the naive coders accuracy it in fact measures how well the instructions convey what these researchers think they doin passonneau and litman the reason for comparing to the majority opinion is less cleardespite our argument there are occasions when one opinion should be treated as the expert onefor instance one can imagine determining whether coders using a simplified coding scheme match what can be obtained by some better but more expensive method which might itself be either objective or subjectivein these cases we would argue that it is still appropriate to use the kappa statistic in a variation which looks only at pairings of agreement with the expert opinion rather than at all possible pairs of codersthis variation could be achieved by interpreting p(a) as the proportion of times that the naive coders agree with the expert and p(e) as the proportion of times we would expect the naive coders to agree with the expert by chance a short sketch of this variant is also given belowwe have shown that existing measures of reliability in discourse and dialogue work are difficult to interpret and we have suggested a replacement measure the kappa statistic which has a number of advantages over these measureskappa is widely accepted in the field of content analysisit is
interpretable allows different results to be compared and suggests a set of diagnostics in cases where the reliability results are not good enough for the required purposewe suggest that this measure be adopted more widely within our own research communitythis work was supported by humancomputer interaction grant number g9111013 of the uk joint councils and an interdisciplinary research centre grant
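The text above defines the kappa coefficient as k = (p(a) - p(e)) / (1 - p(e)) for a set of coders making category judgments. The sketch below is a minimal illustration of that computation, using the common multi-coder formulation in which p(a) is the average proportion of agreeing coder pairs per unit and p(e) comes from the pooled label distribution; whether this matches every detail of Siegel and Castellan's statistic is an assumption, and the function name and toy boundary judgments are invented for illustration.

```python
# Minimal sketch of kappa = (p(a) - p(e)) / (1 - p(e)) for several coders
# assigning category labels to the same units. All data below are invented.

from collections import Counter

def kappa(assignments):
    """assignments: one list per unit, giving the label each coder assigned."""
    n_coders = len(assignments[0])
    pairs_per_unit = n_coders * (n_coders - 1)

    # p(a): observed proportion of agreeing coder pairs, averaged over units
    agree = 0.0
    for labels in assignments:
        counts = Counter(labels)
        agree += sum(c * (c - 1) for c in counts.values()) / pairs_per_unit
    p_a = agree / len(assignments)

    # p(e): expected chance agreement, from the pooled distribution of labels
    pooled = Counter(label for labels in assignments for label in labels)
    total = sum(pooled.values())
    p_e = sum((c / total) ** 2 for c in pooled.values())

    return (p_a - p_e) / (1 - p_e)

if __name__ == "__main__":
    # four coders judging six units as boundary ("b") or non-boundary ("n")
    judgments = [
        ["b", "b", "b", "b"],
        ["n", "n", "n", "n"],
        ["b", "b", "n", "b"],
        ["n", "n", "n", "b"],
        ["n", "n", "n", "n"],
        ["b", "b", "b", "n"],
    ]
    print(round(kappa(judgments), 2))  # about 0.50 on this toy data
```

On this toy data p(a) is 0.75 against a chance rate p(e) of about 0.50, giving k of roughly 0.50, which falls below the .67 threshold quoted above.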
J96-2004
assessing agreement on classification tasks the kappa statisticcurrently computational linguists and cognitive scientists working in the area of discourse and dialogue argue that their subjective judgments are reliable using several different statistics none of which are easily interpretable or comparable to each othermeanwhile researchers in content analysis have already experienced the same difficulties and come up with a solution in the kappa statisticwe discuss what is wrong with reliability measures as they are currently used for discourse and dialogue work in computational linguistics and cognitive science and argue that we would be better off as a field adopting techniques from content analysisour method the kappa statistic is used extensively in empirical studies of discourse
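The same row also sketches a variant of kappa in which only pairings of each naive coder with an expert are scored, with p(a) read as the proportion of times the naive coders agree with the expert and p(e) as the chance rate of such agreement. Below is one possible rendering of that variant; pooling the coder-expert pairs, estimating chance agreement from the expert's and the pooled naive coders' label distributions, and the toy data are all assumptions made for illustration.

```python
# Minimal sketch of the variant described above: kappa computed over pairings
# of each naive coder with the expert only. Data and pooling are invented.

from collections import Counter

def kappa_vs_expert(expert, naive_coders):
    """expert: one label per unit; naive_coders: lists of labels parallel to it."""
    n_units = len(expert)
    n_pairs = n_units * len(naive_coders)

    # p(a): proportion of coder-expert pairs that agree
    agree = sum(coder[i] == expert[i]
                for coder in naive_coders for i in range(n_units))
    p_a = agree / n_pairs

    # p(e): chance agreement between the expert's label distribution and the
    # pooled distribution of the naive coders' labels
    expert_dist = Counter(expert)
    naive_dist = Counter(label for coder in naive_coders for label in coder)
    n_naive = sum(naive_dist.values())
    p_e = sum((expert_dist[label] / n_units) * (naive_dist[label] / n_naive)
              for label in expert_dist)

    return (p_a - p_e) / (1 - p_e)

if __name__ == "__main__":
    expert = ["b", "n", "b", "n", "n", "b"]
    naive = [["b", "n", "b", "b", "n", "b"],
             ["b", "n", "n", "n", "n", "b"]]
    print(round(kappa_vs_expert(expert, naive), 2))  # about 0.67 here
```

Here the naive coders match the expert on 10 of 12 unit judgments against a chance rate of 0.5, giving k of about 0.67.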
a stochastic finitestate wordsegmentation algorithm for chinese aposhow do you say octopus in japanese el at xfp tel ev plausible segmentation ri4wen2 zhanglyu2 zen3me0 shuol japanese octopus how ay implausible segmentation ri4 wen2zhangl yu2 zen3me0 shuol japan essay fish how ay figure 1 a chinese sentence in illustrating the lack of word boundaries in is a plausible segmentation for this sentence in is an implausible segmentation orthographic words are thus only a starting point for further analysis and can only be regarded as a useful hint at the desired division of the sentence into words whether a language even has orthographic words is largely dependent on the writing system used to represent the language the quotorthographic is not universal most that use roman armenian or semitic scripts and many use scripts mark orthographic word boundaries however languages written in a chinesederived writing system including chinese and japanese as well as indianderived writing systems languages like thai do not delimit orthographic put another way written chinese simply lacks orthographic words in chinese text individual characters of the script to which we shall refer by their traditional of are written one after another with no intervening spaces a chinese is shown in figure partly as a result of this the notion quotwordquot has never played a role in chinese philological tradition and the idea that chinese lacks anything analogous to words in european languages has been prevalent among western sinologists see defrancis twentiethcentury linguistic work on chinese has revealed the incorrectness of this traditional view all notions of word with the exception of the orthographic word are as relevant in chinese as they are in english and just as is the case in other a word in chinese may correspond to one or more symbols in the orthog 1 for a related approach to the problem of wordsegmention in japanese see nagata inter alia chinese avquotcharacter this is the same word as japanese 3 throughout this paper we shall give chinese examples in traditional orthography followed by a romanization into the scheme numerals following each pinyin syllable represent tones examples will usually be accompanied by a translation plus a morphemebymorpheme gloss given in parentheses whenever the translation does not adequately serve this purpose in the pinyin transliterations a dash separates syllables that may be considered part of the same phonological word spaces are used to separate plausible phonological words and a plus sign is used where relevant to indicate morpheme boundaries of interest 378 sproat shih gale and chang wordsegmentation for chinese is a fairly uncontroversial case of a monographemic word 1131s country china a fairly uncontroversial case of a digraphemic word the relevance of the distinction between say phonological words say dictionary words is shown by an example like 11 131 zhonglren2min2 gong4he2guo2 people republic people republic of china arguably this consists of about three phonological words on the other hand in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into english if one wants to any purposefrom chinese sentences one faces a more difficult task than one does in english since one cannot use spacing as a guide for example suppose one is building a tts system for mandarin chinese for that application at a minimum one would want to know the phonological word boundaries now 
for this application one might be tempted to simply bypass the segmentation problem and pronounce the text characterbycharacter however there are several reasons why this approach will not in general work 1 many hanzi have more than one pronunciation where the correct depends upon word affiliation erg is pronounced is a prenominal modification marker but the word is normally but a person given name 2 some phonological rules depend upon correct word segmentation including third tone sandhi which changes a 3 tone a 2 tone before another 3 tone flao3 shu31 becomes than 1 the first applies within the word blocking its phrasal application 3 in various dialects of mandarin certain phonetic rules apply at the word level for example in northern dialects a full tone is changed to a neutral tone in the final syllable of many melon is often pronounced the high 1 tone of would not normally neutralize in this fashion if it were functioning as a word on its own 4 tts systems in general need to do more than simply compute the pronunciations of individual words they also need to compute intonational phrase boundaries in long utterances and assign relative prominence to words in those utterances it has been shown for english that grammatical part of speech provides useful information for these tasks given that partofspeech labels are properties of words rather than morphemes it follows that one cannot do partofspeech assignment without having access to wordboundary information making the reasonable assumption that similar information is relevant for solving these problems in chinese it follows that a prerequisite for intonationboundary assignment and prominence assignment is word segmentation the points enumerated above are particularly related to tts but analogous arguments can easily be given for other applications see for example wu and tseng discussion of the role of segmentation in information retrieval there are thus some very good reasons why segmentation into words is an important task 379 computational linguistics volume 22 number 3 a minimal requirement for building a chinese word segmenter is obviously a dictionary furthermore as has been argued persuasively by fung and wu one will perform much better at segmenting text by using a dictionary constructed with text of the same genre as the text to be segmented for novel texts no lexicon that consists simply of a list of word entries will ever be entirely satisfactory since the list will inevitably omit many constructions that should be considered words among these are words derived by various productive processes including morphologically derived words such tudents which is derived by the affixation of the affix the noun personal names such as kum enlai of course we can expect famous names like zhou enlai to be in many dictionaries names such as ea name of the second author of this paper will not be found in any dictionary transliterated foreign names such as malaysia again famous place names will most likely be found in the but less wellknown names such as will not generally be found in this paper we present a stochastic finitestate model for segmenting chinese text into words both words found in a lexicon as well as words derived via the abovementioned productive processes the segmenter handles the grouping of hanzi into words and outputs word pronunciations with default pronunciations for hanzi it cannot group we focus here primarily on the system ability to segment text appropriately the model incorporates various recent techniques for 
incorporating and manipulating linguistic knowledge using finitestate transducers it also incorporates the goodturing method in estimating the likelihoods of previously unseen constructions including morphological derivatives and personal names we will evaluate of the segmentation as well as the performance this latter evaluation compares the performance of the system with that of several human judges since as we shall show even people do not agree on a single correct way to segment a text finally this effort is part of a much larger program that we are undertaking to develop stochastic finitestate methods for text analysis with applications to tts and other areas in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader context the initial stage of text analysis for any nlp task usually involves the tokenization of the input into wordsfor languages like english one can assume to a first approximation that word boundaries are given by whit espace or punctuationin various asian languages including chinese on the other hand whites pace is never used to delimit words so one must resort to lexical information to quotreconstructquot the wordboundary informationin this paper we present a stochastic finitestate model wherein the basic workhorse is the weighted finitestate transducerthe model segments chinese text into dictionary entries and words derived by various productive lexical processes andsince the primary intended application of this model is to texttospeech synthesisprovides pronunciations for these wordswe evaluate the system performance by comparing its segmentation quotjudgmentsquot with the judgments of a pool of human segmenters and the system is shown to perform quite wellany nlp application that presumes as input unrestricted text requires an initial phase of text analysis such applications involve problems as diverse as machine translation information retrieval and texttospeech synthesis an initial step of any textanalysis task is the tokenization of the input into wordsfor a language like english this problem is generally regarded as trivial since words are delimited in english text by whitespace or marks of punctuationthus in an english sentence such as i am going to show up at the acl one would reasonably conjecture that there are eight words separated by seven spacesa moment reflection will reveal that things are not quite that simplethere are clearly eight orthographic words in the example given but if one were doing syntactic analysis one would probably want to consider i am to consist of two syntactic words namely i and amif one is interested in translation one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and upand if one is interested in us one would probably consider the single orthographic word acl to consist of three phonological wordse si elcorresponding to the pronunciation of each of the letters in the acronymspace or punctuationdelimited a chinese sentence in illustrating the lack of word boundariesin is a plausible segmentation for this sentence in is an implausible segmentation orthographic words are thus only a starting point for further analysis and can only be regarded as a useful hint at the desired division of the sentence into wordswhether a language even has orthographic words is largely dependent on the writing system used to represent the language the notion 
quotorthographic wordquot is not universalmost languages that use roman greek cyrillic armenian or semitic scripts and many that use indianderived scripts mark orthographic word boundaries however languages written in a chinesederived writing system including chinese and japanese as well as indianderived writing systems of languages like thai do not delimit orthographic words1 put another way written chinese simply lacks orthographic wordsin chinese text individual characters of the script to which we shall refer by their traditional name of hanzi2 are written one after another with no intervening spaces a chinese sentence is shown in figure 13 partly as a result of this the notion quotwordquot has never played a role in chinese philological tradition and the idea that chinese lacks anything analogous to words in european languages has been prevalent among western sinologists see defrancis twentiethcentury linguistic work on chinese has revealed the incorrectness of this traditional viewall notions of word with the exception of the orthographic word are as relevant in chinese as they are in english and just as is the case in other languages a word in chinese may correspond to one or more symbols in the orthography ren2 person is a fairly uncontroversial case of a monographemic word and 1131s zhonglguo2 china a fairly uncontroversial case of a digraphemic wordthe relevance of the distinction between say phonological words and say dictionary words is shown by an example like prtv a e 11 131 zhonglhua2 ren2min2 gong4he2guo2 people republic of chinaarguably this consists of about three phonological wordson the other hand in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into englishthus if one wants to segment wordsfor any purposefrom chinese sentences one faces a more difficult task than one does in english since one cannot use spacing as a guidefor example suppose one is building a tts system for mandarin chinesefor that application at a minimum one would want to know the phonological word boundariesnow for this application one might be tempted to simply bypass the segmentation problem and pronounce the text characterbycharacterhowever there are several reasons why this approach will not in general work levelfor example in northern dialects a full tone is changed to a neutral tone in the final syllable of many words a donglgual winter melon is often pronounced donglgua0the high 1 tone of would not normally neutralize in this fashion if it were functioning as a word on its own4tts systems in general need to do more than simply compute the pronunciations of individual words they also need to compute intonational phrase boundaries in long utterances and assign relative prominence to words in those utterancesit has been shown for english that grammatical part of speech provides useful information for these tasksgiven that partofspeech labels are properties of words rather than morphemes it follows that one cannot do partofspeech assignment without having access to wordboundary informationmaking the reasonable assumption that similar information is relevant for solving these problems in chinese it follows that a prerequisite for intonationboundary assignment and prominence assignment is word segmentationthe points enumerated above are particularly related to tts but analogous arguments can easily be given for other applications see for example wu and tseng discussion of the role of 
segmentation in information retrievalthere are thus some very good reasons why segmentation into words is an important taska minimal requirement for building a chinese word segmenter is obviously a dictionary furthermore as has been argued persuasively by fung and wu one will perform much better at segmenting text by using a dictionary constructed with text of the same genre as the text to be segmentedfor novel texts no lexicon that consists simply of a list of word entries will ever be entirely satisfactory since the list will inevitably omit many constructions that should be considered wordsamong these are words derived by various productive processes including tudents which is derived by the affixation of the plural affix i am men to the noun x44fi xue2shenglpersonal names such as kum zhoulenllai2 zhou enlaiof course we can expect famous names like zhou enlai to be in many dictionaries but names such as ea shi2jillin2 the name of the second author of this paper will not be found in any dictionary3transliterated foreign names such as ligrg2 ma3lai2xilya3 malaysiaagain famous place names will most likely be found in the dictionary but less wellknown names such as lopnfla bu4lang3shi4wei2ke4 brunswick will not generally be foundin this paper we present a stochastic finitestate model for segmenting chinese text into words both words found in a lexicon as well as words derived via the abovementioned productive processesthe segmenter handles the grouping of hanzi into words and outputs word pronunciations with default pronunciations for hanzi it cannot group we focus here primarily on the system ability to segment text appropriately the model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finitestate transducersit also incorporates the goodturing method in estimating the likelihoods of previously unseen constructions including morphological derivatives and personal nameswe will evaluate various specific aspects of the segmentation as well as the overall segmentation performancethis latter evaluation compares the performance of the system with that of several human judges since as we shall show even people do not agree on a single correct way to segment a textfinally this effort is part of a much larger program that we are undertaking to develop stochastic finitestate methods for text analysis with applications to tts and other areas in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader contextmost readers will undoubtedly be at least somewhat familiar with the nature of the chinese writing system but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the chinese script that will be relevant to topics discussed in this paperthe first point we need to address is what type of linguistic object a hanzi representsmuch confusion has been sown about chinese writing by the use of the term ideograph suggesting that hanzi somehow directly represent ideasthe most accurate characterization of chinese writing is that it is morphosyllabic each hanzi represents one morpheme lexically and semantically and one syllable phonologicallythus in a twohanzi word like itrij zhonglguo2 china there are two syllables and at the same time two morphemesof course since the number of attested mandarin syllables is far smaller than the number of morphemes it follows that a given syllable could in principle be written with any of several 
different hanzi depending upon which morpheme is intended the syllable zhongl could be pp middle m clock 4 end or 2 loyala morpheme on the other hand usually corresponds to a unique hanzi though there are a few cases where variant forms are foundfinally quite a few hanzi are homographs meaning that they may be pronounced in several different ways and in extreme cases apparently represent different morphemes the prenominal modification marker e0 de0 is presumably a different morpheme from the second morpheme of erg mu4di4 even though they are written the same waythe second point which will be relevant in the discussion of personal names in section 44 relates to the internal structure of hanzifollowing the system devised under the qing emperor kang xi hanzi have traditionally been classified according to a set of approximately 200 semantic radicals members of a radical class share a particular structural component and often also share a common meaning for example hanzi containing the insect radical tend to denote insects and other crawling animals examples include wal frog fengl wasp and it8 she2 nakesimilarly hanzi sharing the ghost radical tend to denote spirits and demons such as gui3 ghost itself m mo2 demon and ri yan3 nightmarewhile the semantic aspect of radicals is by no means completely predictive the semantic homogeneity of many classes is quite striking for example 254 out of the 263 examples of the insect class listed by wieger denote crawling or invertebrate animals similarly 21 out of the 22 examples of the ghost class denote ghosts or spiritsas we shall argue the semantic class affiliation of a hanzi constitutes useful information in predicting its propertiesthere is a sizable literature on chinese word segmentation recent reviews include wang su and mo and wu and tseng roughly speaking previous work can be divided into three categories namely purely statistical approaches purely lexical rulebased approaches and approaches that combine lexical information with statistical informationthe present proposal falls into the last grouppurely statistical approaches have not been very popular and so far as we are aware earlier work by sproat and shih is the only published instance of such an approachin that work mutual information was used to decide whether to group adjacent hanzi into twohanzi wordsmutual information was shown to be useful in the segmentation task given that one does not have a dictionarya related point is that mutual information is helpful in augmenting existing electronic dictionaries and we have used lists of character pairs ranked by mutual information to expand our own dictionarynonstochastic lexicalknowledgebased approaches have been much more numeroustwo issues distinguish the various proposalsthe first concerns how to deal with ambiguities in segmentationthe second concerns the methods used to extend the lexicon beyond the static list of entries provided by the machinereadable dictionary upon which it is basedthe most popular approach to dealing with segmentation ambiguities is the maximum matching method possibly augmented with further heuristicsthis method one instance of which we term the quotgreedy algorithmquot in our evaluation of our own system in section 5 involves starting at the beginning of the sentence finding the longest word starting at that point and then repeating the process starting at the next hanzi until the end of the sentence is reachedpapers that use this method or minor variants thereof include liang li et al cu and mao and nie jin and 
hannan the simplest version of the maximum matching algorithm effectively deals with ambiguity by ignoring it since the method is guaranteed to produce only one segmentationmethods that allow multiple segmentations must provide criteria for choosing the best segmentationsome approaches depend upon some form of constraint satisfaction based on syntactic or semantic features others depend upon various lexical heuristics for example chen and liu attempt to balance the length of words in a threeword window favoring segmentations that give approximately equal length for each wordmethods for expanding the dictionary include of course morphological rules rules for segmenting personal names as well as numeral sequences expressions for dates and so forth lexicalknowledgebased approaches that include statistical information generally presume that one starts with all possible segmentations of a sentence and picks the best segmentation from the set of possible segmentations using a probabilistic or costbased scoring mechanismapproaches differ in the algorithms used for scoring and selecting the best path as well as in the amount of contextual information used in the scoring processthe simplest approach involves scoring the various analyses by costs based on word frequency and picking the lowest cost path variants of this approach have been described in chang chen and chen and chang and chen more complex approaches such as the relaxation technique have been applied to this problem fan and tsai note that chang chen and chen in addition to wordfrequency information include a constraintsatisfication model so their method is really a hybrid approachseveral papers report the use of partofspeech information to rank segmentations typically the probability of a segmentation is multiplied by the probability of the tagging for that segmentation to yield an estimate of the total probability for the analysisstatistical methods seem particularly applicable to the problem of unknownword identification especially for constructions like names where the linguistic constraints are minimal and where one therefore wants to know not only that a particular sequence of hanzi might be a name but that it is likely to be a name with some probabilityseveral systems propose statistical methods for handling unknown words some of these approaches attempt to identify unknown words but do not actually tag the words as belonging to one or another class of expressionthis is not ideal for some applications howeverfor instance for tts it is necessary to know that a particular sequence of hanzi is of a particular category because that knowledge could affect the pronunciation consider for example the issues surrounding the pronunciation of e gani qian2 discussed in section 1following sproat and shih performance for chinese segmentation systems is generally reported in terms of the dual measures of precision and recallit is fairly standard to report precision and recall scores in the mid to high 90 rangehowever it is almost universally the case that no clear definition of what constitutes a quotcorrectquot segmentation is given so these performance measures are hard to evaluateindeed as we shall show in section 5 even human judges differ when presented with the task of segmenting a text into words so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measuresin a few cases the criteria for correctness are made more explicitfor example chen and liu report precision and 
recall rates of over 99 but this counts only the words that occur in the test corpus that also occur in their dictionarybesides the lack of a clear definition of what constitutes a correct segmentation for a given chinese sentence there is the more general issue that the test corpora used in these evaluations differ from system to system so meaningful comparison between systems is rendered even more difficultthe major problem for all segmentation systems remains the coverage afforded by the dictionary and the lexical rules used to augment the dictionary to deal with unseen wordsthe dictionary sizes reported in the literature range from 17000 to 125000 entries and it seems reasonable to assume that the coverage of the base dictionary constitutes a major factor in the performance of the various approaches possibly more important than the particular set of methods used in the segmentationfurthermore even the size of the dictionary per se is less important than the appropriateness of the lexicon to a particular test corpus as fung and wu have shown one can obtain substantially better segmentation by tailoring the lexicon to the corpus to be segmentedchinese word segmentation can be viewed as a stochastic transduction problemmore formally we start by representing the dictionary d as a weighted finite state transducer let h be the set of hanzi p be the set of pinyin syllables with tone marks and p be the set of grammatical partofspeech labelsthen each arc of d maps either from an element of h to an element of p or from cie the empty stringto an element of p more specifically each word is represented in the dictionary as a sequence of arcs starting from the initial state of d and labeled with an element s of h x p which is terminated with a weighted arc labeled with an element of e x p the weight represents the estimated cost of the wordnext we represent the input sentence as an unweighted finitestate acceptor i over h let us assume the existence of a function id which takes as input an fsa a and produces as output a transducer that maps all and only the strings of symbols accepted by a to themselves we can then define the best segmentation to be the cheapest or best path in id o d composed with the transitive closure of d6 consider the abstract example illustrated in figure 2in this example there are four quotinput charactersquot a b c and d and these map respectively to four quotpronunciationsquot a b c and d furthermore there are four quotwordsquot represented in the dictionarythese are shown with their associated costs as follows ab nc 40 abcjj 60 cdvb 50 dnc 50 the minimal dictionary encoding this information is represented by the wfst in figure 2an input abcd can be represented as an fsa as shown in figure 2this fsa i can be segmented into words by composing id with d to form the wfst shown in figure 2 then selecting the best path through this wfst to produce the wfst in figure 2this wfst represents the segmentation of the text into the words ab and cd word boundaries being marked by arcs mapping between e and partofspeech labelssince the segmentation corresponds to the sequence of words that has the lowest summed unigram cost the segmenter under discussion here is a zerothorder modelit is important to bear in mind though that this is not an inherent limitation of the modelfor example it is wellknown that one can build a finitestate bigram model by simply assigning a state s to each word wi in the vocabulary and having arcs leaving that state weighted such that for each zal and 
corresponding arc al leaving s the cost on al is the bigram cost of wwiin section 6 we discuss other issues relating to how higherorder language models could be incorporated into the modelas we have seen the lexicon of basic words and stems is represented as a wfst most arcs in this wfst represent mappings between hanzi and pronunciations and are costlesseach word is terminated by an arc that represents the transduction between c and the part of speech of that word weighted with an estimated cost for that wordthe cost is computed as follows where n is the corpus size and f is the frequency besides actual words from the base dictionary the lexicon contains all hanzi in the big 5 chinese code with their pronunciation plus entries for other characters that can be found in chinese text such as roman letters numerals and special symbolsnote that hanzi that are not grouped into dictionary words or into one of the other categories of words discussed in this paper are left unattached and tagged as unknown wordsother strategies could readily an abstract example illustrating the segmentation algorithmthe transitive closure of the dictionary in is composed with id to form the wfst the segmentation chosen is the best path through the wfst shown in be implemented though such as a maximalgrouping strategy or a pairwisegrouping strategy whereby long sequences of unattached hanzi are grouped into twohanzi words we have not to date explored these various optionsword frequencies are estimated by a reestimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words using a set of initial estimates of the word frequenciesin this reestimation procedure only the entries in the base dictionary were used in other words derived words not in the base dictionary and personal and foreign names were not usedthe best analysis of the corpus is taken to be the true analysis the frequencies are reestimated and the algorithm is repeated until it convergesclearly this is not the only way to estimate wordfrequencies however and one could consider applying other methods in particular since the problem is similar to the problem of assigning partofspeech tags to an untagged corpus given a lexicon and some initial estimate of the a priori probabilities for the tags one might consider a more sophisticated approach such as that described in kupiec one could also use methods that depend on a small handtagged seed corpus as suggested by one reviewerin any event to date we have not compared different methods for deriving the set of initial frequency estimatesnote also that the costs currently used in the system are actually string costs rather than word coststhis is because our corpus is not annotated and hence does not distinguish between the various words represented by homographs such as 34 which could be 34 lady jiangl be about to or g inc jiang4 generalas in dniti xiao3jiang4 little generalin such cases we assign all of the estimated probability mass to the form with the most likely pronunciation and assign a very small probability to all other variantsin the case of 34 the most common usage is as an adverb with the pronunciation jiangl so that variant is assigned the estimated cost of 598 and a high cost is assigned to nominal usage with the pronunciation jiang4the less favored reading may be selected in certain contexts however in the case of 34 for example the nominal reading jiang4 will be selected if there is morphological information such as a following plural affix 111 
men0 that renders the nominal reading likely as we shall see in section 43figure 3 shows a small fragment of the wfst encoding the dictionary containing both entries for 14 just discussed pp right now zhonglhua2 min2guo2 republic of china and ma nan2gual pumpkinfigure 4 shows two possible paths from the lattice of possible analyses of the input sentence hagf2d1 how do you say octopus in japanese previously shown in figure 1as noted this sentence consists of four words namely iem ri4wen2 japanese wo zhanglyu2 octopus te zen3me0 how and w shuol ayas indicated in figure 1 apart from this correct analysis there is also the analysis taking h ri4 as a word along with v wen2zhangl essay and tiii yu2 fishboth of these analyses are shown in figure 4 fortunately the correct analysis is also the one with the lowest cost so it is this analysis that is chosenthe method just described segments dictionary words but as noted in section 1 there are several classes of words that should be handled that are not found in a standard dictionaryone class comprises words derived by productive morphological processes such as plural noun formation using the suffix ill men0the morphological analysis itself can be handled using wellknown techniques from finitestate morphology we represent the fact that fl attaches to nouns by allowing ctransitions from the final states of all noun entries to the initial state of the subwfst representing flhowever for our purposes it is not sufficient to represent the morphological decomposition of say plural nouns we also need an estimate of the cost of the resulting wordfor derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entryso xue2shenglmen0 tudents occurs and we estimate its cost at 1143 similarly we estimate the cost of mi jiang4men0 generals at 1502but we also need an estimate of the probability for a nonoccurring though possible plural form like ill nan2gualmen0 pumpkinshere we use the goodturing estimate whereby the aggregate probability of previously unseen instances of a construction is estimated as n1n where n is the total number of observed tokens and n1 is the number of types observed only oncelet us notate the set of previously unseen or novel members of a category x as unseen thus novel members of the set of words derived in 11 men will be denoted unseenfor flu the goodturing estimate just discussed gives us an estimate of p i 1the probability of observing a previously unseen instance of a construction in ft given that we know that we have a construction in ft this goodturing estimate of p in can then be used in the normal way to define the probability of finding a novel instance of a construction in flu in a text p p ifl phere p is just the probability of any construction in it as estimated from the frequency of such constructions in the corpusfinally assuming a simple bigram backoff model we can derive the probability estimate for the particular unseen word malt as the product of the probability estimate for mei and the probability estimate just derived for unseen plurals in p p1pthe cost estimate cost71 fl is computed in the obvious way by summing the negative log probabilities of ma and fifigure 5 shows how this model is implemented as part of the dictionary wfstthere is a transition between the nc node and the transition from to a final state transduces c to the grammatical tag pi with cost cost costjjfi cost11 cost as desiredfor the seen word pm generals there is an cnc transduction from gi to 
the node preceding ill this arc has cost cost cost so that the cost of the whole path is the desired costthis representation gives 14f9 an appropriate morphological decomposition preserving information that would be lost by simply listing wfl as an unanalyzed formnote that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its pluralan analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlationr2 020 p 0005 see figure 6this suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form10 chinese speakers may object to this form since the suffix flu men0 is usually restricted to attaching to terms denoting human beingshowever it is possible to personify any noun so in children stories or fables in nan2gualmen0 pumpkins is by no means impossiblean example of affixation the plural affixfull chinese personal names are in one respect simple they are always of the form familygiventhe family name set is restricted there are a few hundred singlehanzi family names and about ten doublehanzi onesgiven names are most commonly two hanzi long occasionally one hanzi long there are thus four possible name types which can be described by a simple set of contextfree rewrite rules such as the following name 1hanzifamily 2hanzigiven 1hanzifamily 1hanzigiven 2hanzifamily 2hanzigiven 2hanzifamily 1hanzigiven hanzi hanzi hanzi hanzi hanzi hanzii the difficulty is that given names can consist in principle of any hanzi or pair of hanzi so the possible given names are limited only by the total number of hanzi though some hanzi are certainly far more likely than othersfor a sequence of hanzi that is a possible name we wish to assign a probability to that sequence qua namewe can model this probability straightforwardly enough with a probabilistic version of the grammar just given which would assign probabilities to the individual rulesfor example given a sequence f1g1g2 where f1 is a legal singlehanzi family name and plot of log frequency of base noun against log frequency of plural nounsg1 and g2 are hanzi we can estimate the probability of the sequence being a name as the product of this model is essentially the one proposed in chang et al the first probability is estimated from a name count in a text database and the rest of the probabilities are estimated from a large list of personal namesquot note that in chang et al model the p is estimated as the product of the probability of finding g1 in the first position of a twohanzi given name and the probability of finding g2 in the second position of a twohanzi given name and we use essentially the same estimate here with some modifications as described later onthis model is easily incorporated into the segmenter by building a wfst restricting the names to the four licit types with costs on the arcs for any particular name summing to an estimate of the cost of that namethis wfst is then summed with the wfst implementing the dictionary and morphological rules and the transitive closure of the resulting transducer is computed see pereira riley and sproat for an explanation of the notion of summing wfsts12 conceptual improvements over chang et al modelthere are two weaknesses in chang et al model which we improve uponfirst the model assumes independence between the first and second hanzi of a double given nameyet some hanzi are far 
more probable in women names than they are in men names and there is a similar list of maleoriented hanzi mixing hanzi from these two lists is generally less likely than would be predicted by the independence modelas a partial solution for pairs of hanzi that cooccur sufficiently often in our namelists we use the estimated bigram cost rather than the independencebased costthe second weakness is purely conceptual and probably does not affect the performance of the modelfor previously unseen hanzi in given names chang et al assign a uniform small cost but we know that some unseen hanzi are merely accidentally missing whereas others are missing for a reasonfor example because they have a bad connotationas we have noted in section 2 the general semantic class to which a hanzi belongs is often predictable from its semantic radicalnot surprisingly some semantic classes are better for names than others in our corpora many names are picked from the grass class but very few from the sickness classother good classes include jade and gold other bad classes are death and ratwe can better predict the probability of an unseen hanzi occurring in a name by computing a withinclass goodturing estimate for each radical classassuming unseen objects within each class are equiprobable their probabilities are given by the goodturing theorem as where kis is the probability of one unseen hanzi in class cls e is the expected number of hanzi in cls seen once n is the total number of hanzi and e is the expected number of unseen hanzi in class clsthe use of the goodturing equation presumes suitable estimates of the unknown expectations it requiresin the denomi11 we have two such lists one containing about 17000 full names and another containing frequencies of hanzi in the various name positions derived from a million names12 one class of full personal names that this characterization does not cover are married women names where the husband family name is optionally prepended to the woman full name thus xu3lin2yan2hai3 would represent the name that ms lin yanhai would take if she married someone named xuthis style of naming is never required and seems to be losing currencyit is formally straightforward to extend the grammar to include these names though it does increase the likelihood of overgeneration and we are unaware of any working systems that incorporate this type of namewe of course also fail to identify by the methods just described given names used without their associated family namethis is in general very difficult given the extremely free manner in which chinese given names are formed and given that in these cases we lack even a family name to give the model confidence that it is identifying a name nator the ng can be measured well by counting and we replace the expectation by the observationin the numerator however the counts of rqs are quite irregular including several zeros however there is a strong relationship between nclis and the number of hanzi in the classfor e then we substitute a smooth s against the number of class elementsthis smooth guarantees that there are no zeroes estimatedthe final estimating equation is then since the total of all these class estimates was about 10 off from the turing estimate nin for the probability of all unseen hanzi we renormalized the estimates so that they would sum to innthis classbased model gives reasonable results for six radical classes table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double given namenote 
that the good classes jade gold and grass have lower costs than the bad classes sickness death and rat as desired so the trend observed for the results of this method is in the right directionforeign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the namesince foreign names can be of any length and since their original pronunciation is effectively unlimited the identification of such names is trickyfortunately there are only a few hundred hanzi that are particularly common in transliterations indeed the commonest ones such as e bal er3 and i am al are often clear indicators that a sequence of hanzi containing them is foreign even a name like a xia4mi3er3 hamir which is a legal chinese personal name retains a foreign flavor because of fflas a first step towards modeling transliterated names we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary and we estimate the probability of occurrence of each hanzi in a transliteration using the maximum likelihood estimateas with personal names we also derive an estimate from text of the probability of finding a transliterated name of any kind finally we model the probability of a new transliterated name as the product of prn and ptn for each hanzi in the putative namethe foreign name model is implemented as an wfst which is then summed with the wfst implementing the dictionary morpho13 the current model is too simplistic in several respectsfor instance the common quotsuffixesquot nia and sia are normally transliterated as itg2 ni2ya3 and ffigi2 xilya3 respectivelythe interdependence between jei or ffi and 0 is not captured by our model but this could easily be remedied logical rules and personal names the transitive closure of the resulting machine is then computedin this section we present a partial evaluation of the current system in three partsthe first is an evaluation of the system ability to mimic humans at the task of segmenting text into wordsized units the second evaluates the propername identification the third measures the performance on morphological analysisto date we have not done a separate evaluation of foreignname recognitionevaluation of the segmentation as a wholeprevious reports on chinese segmentation have invariably cited performance either in terms of a single percentcorrect score or else a single precisionrecall pairthe problem with these styles of evaluation is that as we shall demonstrate even human judges do not agree perfectly on how to segment a given textthus rather than give a single evaluative score we prefer to compare the performance of our method with the judgments of several human subjectsto this end we picked 100 sentences at random containing 4372 total hanzi from a test corpuswe asked six native speakersthree from taiwan and three from the mainland to segment the corpussince we could not bias the subjects towards a particular segmentation and did not presume linguistic sophistication on their part the instructions were simple subjects were to mark all places they might plausibly pause if they were reading the text aloudan examination of the subjects bracketings confirmed that these instructions were satisfactory in yielding plausible wordsized unitsvarious segmentation approaches were then compared with human performance clearly for judges ji and 12 taking ji as standard and computing the precision and recall for 12 yields the same results as taking 12 as the standard and computing for 
respectively the recall and precisionwe therefore used the arithmetic mean of each interjudge precisionrecall pair as a single measure of interjudge similaritytable 2 shows these similarity measuresthe average agreement among the human judges is 76 and the average agreement between st and the humans is 75 or about 99 of the interhuman agreementone can better visualize the precisionrecall similarity matrix by producing from that matrix a distance matrix computing a classical metric multidimensional scaling on that distance matrix and plotting the first two most significant dimensionsthe result of this is shown in figure 7the horizontal axis in this plot represents the most significant dimension which explains 62 of the variationin addition to the automatic methods ag gr and st just discussed we also added to the plot the values for the current algorithm using only dictionary entries this is to allow for fair comparison between the statistical method and gr which is also purely dictionarybasedas can be seen gr and this quotpareddownquot statistical method perform quite similarly though the statistical method is still slightly better16 ag clearly performs much less like humans than these methods whereas the full statistical algorithm including morphological derivatives and names performs most closely to humans among the automatic methodsit can also be seen clearly in this plot that two of the taiwan speakers cluster very closely together and the third taiwan speaker is also close in the most significant dimension two of the mainlanders also cluster close together but interestingly not particularly close to the taiwan speakers the third mainlander is much more similar to the taiwan speakersthe breakdown of the different types of words found by st in the test corpus is given in table 3clearly the percentage of productively formed words is quite small meaning that dictionary entries are covering most of the 15 gr is 73 or 9616 as one reviewer points out one problem with the unigram model chosen here is that there is still a tendency to pick a segmentation containing fewer wordsthat is given a choice between segmenting a sequence abc into abc and ab c the former will always be picked so long as its cost does not exceed the summed costs of ab and c while it is possible for abc to be so costly as to preclude the larger grouping this will certainly not usually be the casein this way the method reported on here will necessarily be similar to a greedy method though of course not identicalas the reviewer also points out this is a problem that is shared by eg probabilistic contextfree parsers which tend to pick trees with fewer nodesthe question is how to normalize the probabilities in such a way that smaller groupings have a better shot at winningthis is an issue that we have not addressed at the current stage of our researchclassical metric multidimensional scaling of distance matrix showing the two most significant dimensionsthe percentage scores on the axis labels represent the amount of variation in the data explained by the dimension in question casesnonetheless the results of the comparison with human judges demonstrates that there is mileage being gained by incorporating models of these types of wordsit may seem surprising to some readers that the interhuman agreement scores reported here are so lowhowever this result is consistent with the results of experiments discussed in wu and fung wu and fung introduce an evaluation method they call nkblindunder this scheme n human judges are asked 
independently to segment a texttheir results are then compared with the results of an automatic segmenterfor a given quotwordquot in the automatic segmentation if at least k of the human judges agree that this is a word then that word is considered to be correctfor eight judges ranging k between 1 and 8 corresponded to a precision score range of 90 to 30 meaning that there were relatively few words on which all judges agreed whereas most of the words found by the segmenter were such that one human judge agreedpropername identificationto evaluate propername identification we randomly selected 186 sentences containing 12000 hanzi from our test corpus and segmented the text automatically tagging personal names note that for names there is always a single unambiguous answer unlike the more general question of which segmentation is correctthe performance was 8099 recall and 6183 precisioninterestingly chang et al report 8067 recall and 9187 precision on an 11000 word corpus seemingly our system finds as many names as their system but with four times as many false hitshowever we have reason to doubt chang et al performance claimswithout using the same test corpus direct comparison is obviously difficult fortunately chang et al include a list of about 60 sentence fragments that exemplify various categories of performance for their systemthe performance of our system on those sentences appeared rather better than theirson a set of 11 sentence fragmentsthe a setwhere they reported 100 recall and precision for name identification we had 73 recall and 80 precisionhowever they list two sets one consisting of 28 fragments and the other of 22 fragments in which they had 0 recall and precisionon the first of thesethe b setour system had 64 recall and 86 precision on the secondthe c setit had 33 recall and 19 precisionnote that it is in precision that our overall performance would appear to be poorer than the reported performance of chang et al yet based on their published examples our system appears to be doing better precisionwisethus we have some confidence that our own performance is at least as good as that of chang et al in a more recent study than chang et al wang li and chang propose a surnamedriven nonstochastic rulebased system for identifying personal nameswang li and chang also compare their performance with chang et al systemfortunately we were able to obtain a copy of the full set of sentences from chang et al on which wang li and chang tested their system along with the output of their systemin what follows we will discuss all cases from this set where our performance on names differs from that of wang li and changexamples are given in table 4in these examples the names identified by the two systems are underlined the sentence with the correct segmentation is boxedthe differences in performance between the two systems relate directly to three issues which can be seen as differences in the tuning of the models rather than representing differences in the capabilities of the model per sethe first issue relates to the completeness of the base lexiconthe wang li and chang system fails on fragment because their system lacks the word youlyoul oberly and misinterpreted the thus isolated first ih youl as being the final hanzi of the preceding name similarly our system failed in fragment since it is missing the abbreviation all tai2du2 taiwan independencethis is a rather important source of errors in name identification and it is not really possible to objectively evaluate a name recognition system 
without considering the main lexicon with which it is used17 they also provide a set of titledriven rules to identify names when they occur before titles such as t xianlshengl mr or1ti1mr tai2bei3 shi4zhang3 taipei mayorobviously the presence of a title after a potential name n increases the probability that n is in fact a nameour system does not currently make use of titles but it would be straightforward to do so within the finitestate framework that we proposethe second issue is that rare family names can be responsible for overgeneration especially if these names are otherwise common as singlehanzi wordsfor example the wang li and chang system fails on the sequence tffplgfe nian2 nei4 sa3 in since nian2 is a possible but rare family name which also happens to be written the same as the very common word meaning yearour system fails in because of shenl a rare family name the system identifies it as a family name whereas it should be analyzed as part of the given namefinally the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively highfrequency singlehanzi wordsan example is in where the system fails to group 144a lin2yang2gang3 as a name because all three hanzi can in principle be separate words in many cases these failures in recall would be fixed by having better estimates of the actual probabilities of singlehanzi words since our estimates are often inflateda totally nonstochastic rulebased system such as wang li and chang will generally succeed in such cases but of course runs the risk of overgeneration wherever the singlehanzi word is really intendedevaluation of morphological analysisin table 5 we present results from small test corpora for the productive affixes handled by the current version of the system as with names the segmentation of morphologically derived words is generally either right or wrongthe first four affixes are socalled resultative affixes they denote some property of the resultant state of a verb as in 1tt wang4bu4liao3 cannot forgetthe last affix in the list is the nominal plural ill men0in the table are the classes of words to which the affix attaches the number found in the test corpus by the method the number correct and the number missed in this paper we have argued that chinese word segmentation can be modeled effectively using weighted finitestate transducersthis architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives and models for personal names and foreign names in transliterationother kinds of productive word classes such as company names abbreviations and place names can easily be 20 note that 7 in ft is normally pronounced as le0 but as part of a resultative it is lino3 handled given appropriate modelswe have argued that the proposed method performs wellhowever some caveats are in order in comparing this method with other approaches to segmentation reported in the literaturefirst of all most previous articles report performance in terms of a single percentcorrect score or else in terms of the paired measures of precision and recallwhat both of these approaches presume is that there is a single correct segmentation for a sentence against which an automatic algorithm can be comparedwe have shown that at least given independent human judgments this is not the case and that therefore such simplistic measures should be mistrustedthis is not to say that a set of 
standards by which a particular segmentation would count as correct and another incorrect could not be devised indeed such standards have been proposed and include the published prcnsc and rocling as well as the unpublished linguistic data consortium standards however until such standards are universally adopted in evaluating chinese segmenters claims about performance in terms of simple measures like percent correct should be taken with a grain of salt see again wu and fung for further arguments supporting this conclusionsecond comparisons of different methods are not meaningful unless one can evaluate them on the same corpusunfortunately there is no standard corpus of chinese texts tagged with either single or multiple human judgments with which one can compare performance of various methodsone hopes that such a corpus will be forthcomingfinally we wish to reiterate an important pointthe major problem for our segmenter as for all segmenters remains the problem of unknown words we have provided methods for handling certain classes of unknown words and models for other classes could be provided as we have notedhowever there will remain a large number of words that are not readily adduced to any productive pattern and that would simply have to be added to the dictionarythis implies therefore that a major factor in the performance of a chinese segmenter is the quality of the base dictionary and this is probably a more important factorfrom the point of view of performance alonethan the particular computational methods usedthe method reported in this paper makes use solely of unigram probabilities and is therefore a zeroethorder model the cost of a particular segmentation is estimated as the sum of the costs of the individual words in the segmentationhowever as we have noted nothing inherent in the approach precludes incorporating higherorder constraints provided they can be effectively modeled within a finitestate frameworkfor example as can has noted one can construct examples where the segmentation is locally ambiguous but can be determined on the basis of sentential or even discourse contexttwo sets of examples from can are given in and in the sequence i5m ma31u4 cannot be resolved locally but depends instead upon broader context similarly in the sequence alt ca12neng2 cannot be resolved locally this cl horse way on sick asp this horse got sick on the way he de talent very high he has great talent while the current algorithm correctly handles the sentences it fails to handle the sentences since it does not have enough information to know not to group the sequences ii6m ma31u4 and 4it cai2neng2 respectivelycan solution depends upon a fairly sophisticated language model that attempts to find valid syntactic semantic and lexical relations between objects of various linguistic types an example of a fairly lowlevel relation is the affix relation which holds between a stem morpheme and an affix morpheme such as men a highlevel relation is agent which relates an animate nominal to a predicateparticular instances of relations are associated with goodness scoresparticular relations are also consistent with particular hypotheses about the segmentation of a given sentence and the scores for particular relations can be incremented or decremented depending upon whether the segmentations with which they are consistent are quotpopularquot or notwhile can system incorporates fairly sophisticated models of various linguistic information it has the drawback that it has only been tested with a very small 
lexicon and on a very small test set there is therefore serious concern as to whether the methods that he discusses are scalableanother question that remains unanswered is to what extent the linguistic information he considers can be handledor at least approximatedby finitestate language models and therefore could be directly interfaced with the segmentation model that we have presented in this paperfor the examples given in and this certainly seems possibleconsider first the examples in the segmenter will give both analyses i cai2 neng2 just be able and lit cai2neng2 talent but the latter analysis is preferred since splitting these two morphemes is generally more costly than grouping themin we want to split the two morphemes since the correct analysis is that we have the adverb 4 cai2 just the modal verb it neng2 be able and the main verb a iii ke4fu2 overcome the competing analysis is of course that we have the noun tit cai2neng2 talent followed by aar ke4fu2 overcomeclearly it is possible to write a rule that states that if an analysis modal verb is available then that is to be preferred over noun verb such a rule could be stated in terms of local grammars in the sense of mohri turning now to we have the similar problem that splitting 5wil into x ma3 horse and 1u4 way is more costly than retaining this as one word 5m ma31u4 roadhowever there is again local grammatical information that should favor the split in the case of both 5 ma3 horse and am ma3for horsesby a similar argument the preference for not splitting egm could be strengthened in by the observation that the classifier ficc tiao2 is consistent with long or winding objects like am ma31u4 road but not with ma3 horsenote that the sets of possible classifiers for a given noun can easily be encoded on that noun by grammatical features which can be referred to by finitestate grammatical rulesthus we feel fairly confident that for the examples we have considered from can study a solution can be incorporated or at least approximated within a finitestate frameworkwith regard to purely morphological phenomena certain processes are not handled elegantly within the current frameworkany process involving reduplication for instance does not lend itself to modeling by finitestate techniques since there is no way that finitestate networks can directly implement the copying operations requiredmandarin exhibits several such processes including anota question formation illustrated in and adverbial reduplication illustrated in in the particular form of anota reduplication illustrated in the first syllable of the verb is copied and the negative marker bu4 not is inserted between the copy and the full verbin the case of adverbial reduplication illustrated in an adjective of the form ab is reduplicated as aabbthe only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand and incorporate the expanded forms into the lexical transducerdespite these limitations a purely finitestate approach to chinese word segmentation enjoys a number of strong advantagesthe model we use provides a simple framework in which to incorporate a wide variety of lexical information in a uniform way the use of weighted transducers in particular has the attractive property that the model as it stands can be straightforwardly interfaced to other modules of a larger speech or natural language system presumably one does not want to segment chinese text for its own sake but instead with a larger purpose in mindas 
described in sproat the chinese segmenter presented here fits directly into the context of a broader finitestate model of text analysis for speech synthesisfurthermore by inverting the transducer so that it maps from phonemic transcriptions to hanzi sequences one can apply the segmenter to other problems such as speech recognition since the transducers are built from humanreadable descriptions using a lexical toolkit the system is easily maintained and extendedwhile size of the resulting transducers may seem dauntingthe segmenter described here as it is used in the bell labs mandarin tts system has about 32000 states and 209000 arcsrecent work on minimization of weighted machines and transducers shows promise for improving this situationthe model described here thus demonstrates great potential for use in widespread applicationsthis flexibility along with the simplicity of implementation and expansion makes this framework an attractive base for continued researchwe thank united informatics for providing us with our corpus of chinese text and bdc for the behavior chineseenglish electronic dictionarywe further thank dr js chang of tsinghua university taiwan roc for kindly providing us with the name corporawe also thank chaohuang chang reviewers for the 1994 acl conference and four anonymous reviewers for computational linguistics for useful comments
J96-3004
A stochastic finite-state word-segmentation algorithm for Chinese. The initial stage of text analysis for any NLP task usually involves the tokenization of the input into words. For languages like English one can assume, to a first approximation, that word boundaries are given by whitespace or punctuation. In various Asian languages, including Chinese, on the other hand, whitespace is never used to delimit words, so one must resort to lexical information to reconstruct the word-boundary information. In this paper we present a stochastic finite-state model wherein the basic workhorse is the weighted finite-state transducer. The model segments Chinese text into dictionary entries and words derived by various productive lexical processes and, since the primary intended application of this model is to text-to-speech synthesis, provides pronunciations for these words. We evaluate the system's performance by comparing its segmentation judgments with the judgments of a pool of human segmenters, and the system is shown to perform quite well. We built a word unigram model using Viterbi re-estimation whose initial estimates were derived from the frequencies in the corpus of the strings of each word in the lexicon. We proposed a method to estimate a set of initial word frequencies without segmenting the corpus.
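The summary above reiterates that the segmenter scores a candidate segmentation as the sum of the costs of its words under a word unigram model. Purely as an illustration of that scoring idea, and not of the paper's weighted finite-state transducer implementation, its dictionary, or its name and morphology models, the following minimal Python sketch finds the cheapest segmentation by dynamic programming over an invented toy lexicon; all words and probabilities are hypothetical.

```python
import math

# Hypothetical toy lexicon: word -> unigram probability (values invented).
# The paper's lexicon is a large dictionary plus productive derivatives and
# name models encoded as a weighted transducer; none of that is modeled here.
LEXICON = {
    "a": 0.08, "ab": 0.02, "abc": 0.001, "b": 0.05, "bc": 0.01, "c": 0.06,
}

def cost(word):
    """Cost of a word = negative log probability (smaller is better)."""
    return -math.log(LEXICON[word]) if word in LEXICON else float("inf")

def segment(text):
    """Return the minimum-cost segmentation of `text` under the unigram model.

    best[i] holds (cost, segmentation) for the prefix text[:i]; the cost of a
    segmentation is simply the sum of the costs of its words.
    """
    best = [(0.0, [])] + [(float("inf"), None)] * len(text)
    for end in range(1, len(text) + 1):
        for start in range(end):
            c = best[start][0] + cost(text[start:end])
            if c < best[end][0]:
                best[end] = (c, best[start][1] + [text[start:end]])
    return best[len(text)]

if __name__ == "__main__":
    # "abc" as a single word is rare enough here that the split "ab" + "c"
    # wins; with a higher P("abc") the single-word grouping would be chosen.
    print(segment("abc"))
```

Because costs simply add across words, segmentations with fewer words tend to win unless a long entry is unusually costly, which is the bias toward larger groupings noted earlier in the discussion of the unigram model.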
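Earlier in the section, inter-judge similarity is summarized by averaging each precision/recall pair, converting the resulting similarity matrix to distances, and plotting the two most significant dimensions of a classical metric multidimensional scaling. The sketch below reproduces only that generic recipe; the judge labels, the similarity values, and the 1 - s distance conversion are illustrative assumptions rather than the paper's actual figures.

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) metric MDS: embed a distance matrix in k dimensions."""
    n = dist.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * centering @ (dist ** 2) @ centering   # double-centered squared distances
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1]                   # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    pos = np.maximum(vals, 0.0)
    coords = vecs[:, :k] * np.sqrt(pos[:k])
    explained = pos[:k] / pos.sum()                  # share of variation per dimension
    return coords, explained

if __name__ == "__main__":
    # Hypothetical inter-judge similarities (values invented, not the paper's data).
    # Each entry is the arithmetic mean of a precision/recall pair, which is
    # symmetric because the precision of judge i against judge j equals the
    # recall of judge j against judge i.
    judges = ["judge1", "judge2", "judge3", "ST"]
    sim = np.array([
        [1.00, 0.78, 0.74, 0.76],
        [0.78, 1.00, 0.77, 0.75],
        [0.74, 0.77, 1.00, 0.73],
        [0.76, 0.75, 0.73, 1.00],
    ])
    dist = 1.0 - sim        # one simple way to turn similarities into distances
    coords, explained = classical_mds(dist, k=2)
    for name, (x, y) in zip(judges, coords):
        print(f"{name}: ({x:+.3f}, {y:+.3f})")
    print("variation explained:", np.round(explained, 2))
```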
the reliability of a dialogue structure coding scheme this paper describes the reliability of a dialogue structure coding scheme based on utterance function game structure and higherlevel transaction structure that has been applied to a corpus of spontaneous taskoriented spoken dialogues this paper describes the reliability of a dialogue structure coding scheme based on utterance function game structure and higherlevel transaction structure that has been applied to a corpus of spontaneous taskoriented spoken dialoguesdialogue work like the rest of linguistics has traditionally used isolated examples either constructed or realnow many researchers are beginning to try to code large dialogue corpora for higherlevel dialogue structure in the hope of giving their findings a firmer basisthe purpose of this paper is to introduce and describe the reliability of a scheme of dialogue coding distinctions that have been developed for use on the map task corpus these dialogue structure distinctions were developed within a larger quotvertical analysisquot of dialogue encompassing a range of phenomena beginning with speech characteristics and therefore are intended to be useful whenever an expression of dialogue structure is requireda number of alternative ways of coding dialogue are mentioned in the recent literaturewalker and whittaker mark utterances as assertions commands questions or prompts in an investigation of mixed initiative in dialoguesutton et al classify the possible responses to a question in terms of whether or not they answer the question and how complete and concise the answer is as part of designing an automated spoken questionnairealexandersson et al devise a set of 17 quotspeech actsquot that occur in dialogues between people setting the date for a business meeting some of these speech acts are taskspecificthey use these speech acts to derive statistical predictions about which speech act will come next within verbmobil a speechtospeech dialogue translation system that operates on demand for limited stretches of dialoguenagata and morimoto use a set of nine more taskindependent illocutionary force distinctions for a similar purposeahrenberg dahlback and jonsson divide moves in wizardofoz informationseeking dialogues into initiations and responses and then further classify them according to the function they serve in the information transfer in order to show how this relates to the focus structure of the dialoguescondon and cech while investigating the difference between facetoface and computermediated communication classify utterances according to the role they take in decision makingthe coding described in this paper differs from all of these coding schemes in three important waysfirst although the move categories are informed by computational models of dialogue the categories themselves are more independent of the task than schemes devised with particular machine dialogue types in mindsecond although other coding schemes may distinguish many categories for utterances segmented according to the discourse goals they serve by showing game and transaction structures this coding scheme attempts to classify dialogue structure at higher levels as wellfinally although the other coding schemes appear to have been devised primarily with one purpose in mind this coding scheme is intended to represent dialogue structure generically so that it can be used in conjunction with codings of many other dialogue phenomenathe coding distinguishes three levels of dialogue structure similar to the three 
middle levels in sinclair and coulthard analysis of classroom discourseat the highest level dialogues are divided into transactions which are subdialogues that accomplish one major step in the participants plan for achieving the taskthe size and shape of transactions is largely dependent on the taskin the map task two participants have slightly different versions of a simple map with approximately fifteen landmarks on itone participant map has a route printed on it the task is for the other participant to duplicate the routea typical transaction is a subdialogue that gets the route follower to draw one route segment on the maptransactions are made up of conversational games which are often also called dialogue games interactions or exchanges and show the same structure as grosz and sidner discourse segments when applied to taskoriented dialogueall forms of conversational games embody the observation that by and large questions are followed by answers statements by acceptance or denial and so ongame analysis makes use of this regularity to differentiate between initiations which set up a discourse expectation about what will follow and responses which fulfill those expectationsin addition games are often differentiated by the kind of discourse purpose they havefor example getting information from the partner or providing informationa conversational game is a set of utterances starting with an initiation and encompassing all utterances up until the purpose of the game has been either fulfilled or abandonedgames can nest within each other if one game is initiated to serve the larger goal of a game that has already been initiated games are themselves made up of conversational moves which are simply different kinds of initiations and responses classified according to their purposesall levels of the dialogue coding are described in detail in carletta et al is the utterance an initiation response or preparationis the person who is transferring information asking a question in an attempt to get evidence that the transfer was successful so they can move onresponse preparation does the response contribute taskdomain ready information or does it only show evidence that communication has been successfulthe information requested or is it amplifiedthe move coding analysis is the most substantial levelit was developed by extending the moves that make up houghton interaction frames to fit the kinds of interactions found in the map task dialoguesin any categorization there is a tradeoff between usefulness and ease or consistency of codingtoo many semantic distinctions make coding difficultthese categories were chosen to be useful for a range of purposes but still be reliablethe distinctions used to classify moves are summarized in the actionthe instruction can be quite indirect as in example 3 below as long as there is a specific action that the instructor intends to elicit in the map task this usually involves the route giver telling the route follower how to navigate part of the routeparticipants can also give other instruct moves such as telling the partner to go through something again but more slowlyin these and later examples g denotes the instruction giver the participant who knows the route and f the instruction follower the one who is being told the routeeditorial comments that help to establish the dialogue context are given in square brackets312 the explain movean explain states information that has not been directly elicited by the partnerthe information can be some fact about either the domain 
or the state of the plan or task including facts that help establish what is mutually knowng where the dead tree is on the other side of the stream there is farmed land the information to be confirmed is something the partner has tried to convey explicitly or something the speaker believes was meant to be inferred from what the partner has saidin principle check moves could cover past dialogue events or any other information that the partner is in a position to confirmhowever check moves are almost always about some information that the speaker has been toldone exception in the map task occurs when a participant is explaining a route for the second time to a different route follower and asks for confirmation that a feature occurs on the partner map even though it has not yet been mentioned in the current dialogueexample 11 g you go up to the top lefthand corner of the stile but you are only say about a centimetre from the edge so that is your linenote that in example 13 the move marked is not a check because it asks for new informationf has only stated that he will have to go below the blacksmithbut the move marked is a check because f has inferred this information from g prior contributions and wishes to have confirmation314 the align movean align move checks the partner attention agreement or readiness for the next moveat most points in taskoriented dialogue there is some piece of information that one of the participants is trying to transfer to the other participantthe purpose of the most common type of align move is for the transferer to know that the information has been successfully transferred so that they can close that part of the dialogue and move onif the transferee has acknowledged the information clearly enough an align move may not be necessaryif the transferer needs more evidence of success then alignment can be achieved in two waysif the transferer is sufficiently confident that the transfer has been successful a question such as quotokquot sufficessome participants ask for this kind of confirmation immediately after issuing an instruction probably to force more explicit responses to what they saylessconfident transferers can ask for confirmation of some fact that the transferee should be able to infer from the transferred information since this provides stronger evidence of successalthough align moves usually occur in the context of an unconfirmed information transfer participants also use them at hiatuses in the dialogue to check that quoteverything is okquot without asking about anything in particularg okafter an instruction and an acknowledgment g you should be skipping the edge of the page by about half an inch okg then move that point up half an inch so you have got a kind of diagonal line againf rightg this is the lefthand edge of the page yeahwhere the query is asked very generally about a large stretch of dialogue quotjust in case 315 the queryyn movea queryyn asks the partner any question that takes a yes or no answer and does not count as a check or an alignin the map task these questions are most often about what the partner has on the mapthey are also quite often questions that serve to focus the attention of the partner on a particular part of the map or that ask for domain or task information where the speaker does not think that information can be inferred from the dialogue context316 the queryw movea queryw is any query not covered by the other categoriesalthough most moves classified as queryw are whquestions otherwise unclassifiable queries also go in 
this categorythis includes questions that ask the partner to choose one alternative from a set as long as the set is not yes and noalthough technically the tree of coding distinctions allows for a check or an align to take the form of a whquestion this is unusual in englishin both align and check moves the speaker tends to have an answer in mind and it is more natural to formulate them as yesno questionstherefore in english all whquestions tend to be categorized as querywit might be possible to subdivide queryw into theoretically interesting categories rather than using it as a quotwastebasketquot but in the map task such queries are rare enough that subdivision is not worthwhileg towards the chapel and then you have f towards whatg right okayjust move round the crashed spaceship so that you have you reach the finish which should be left just left of the the chestnut treef left of the bottom or left of the top of the chestnut treef no i have got a ye got a trout farm over to the right underneath indian country hereg mmhmmi want you to go three inches past that going south in other words just to the level of that i mean not the trout farmf to the level of whatthe following moves are used within games after an initiation and serve to fulfill the expectations set up within the game321 the acknowledge movean acknowledge move is a verbal response that minimally shows that the speaker has heard the move to which it responds and often also demonstrates that the move was understood and acceptedverbal acknowledgments do not have to appear even after substantial explanations and instructions since acknowledgment can be given nonverbally especially in facetoface settings and because the partner may not wait for one to occurclark and schaefer give five kinds of evidence that an utterance has been accepted continued attention initiating a relevant utterance verbally acknowledging the utterance demonstrating an understanding of the utterance by paraphrasing it and repeating part or all of the utterance verbatimof these kinds of evidence only the last three count as acknowledge moves in this coding scheme the first kind leaves no trace in a dialogue transcript to be coded and the second involves making some other more substantial dialogue moveg so you are at a point that is probably two or three inches away from both the top edge and the lefthand side edgeis that correctf no not at the momentone caveat about the meaning of the difference between replyy and replyn rarely queries include negation as for the other replies whether the answer is coded as a replyy or a replyn depends on the surface form of the answer even though in this case quotyesquot and quotnoquot can mean the same thing325 the clarify movea clarify move is a reply to some kind of question in which the speaker tells the partner something over and above what was strictly askedif the information is substantial enough then the utterance is coded as a reply followed by an explain but in many cases the actual change in meaning is so small that coders are reluctant to mark the addition as truly informativeroute givers tend to make clarify moves when the route follower seems unsure of what to do but there is not a specific problem on the agenda example 35 goal or because the responder does not share the same goals as the initiatoroften refusal takes the form of ignoring the initiation and simply initiating some other movehowever it is also possible to make such refusals explicit for instance a participant could rebuff a question with quotno let 
us talk about quot an initiation with quotwhat do you meanthat will not workquot or an explanation about the location of a landmark with quotis itquot said with an appropriately unbelieving intonationone might consider these cases akin to acknowledge moves but with a negative slantthese cases were sufficiently rare in the corpora used to develop the coding scheme that it was impractical to include a category for themhowever it is possible that in other languages or communicative settings this behavior will be more prevalentgrice and savino found that such a category was necessary when coding italian map task dialogues where speakers were very familiar with each otherthey called the category objectin addition to the initiation and response moves the coding scheme identifies ready moves as moves that occur after the close of a dialogue game and prepare the conversation for a new game to be initiatedspeakers often use utterances such as quotokquot and quotrightquot to serve this purposeit is a moot point whether ready moves should form a distinct move class or should be treated as discourse markers attached to the subsequent moves but the distinction is not a critical one since either interpretation can be placed on the codingit is sometimes appropriate to consider ready moves as distinct complete moves in order to emphasize the comparison with acknowledge moves which are often just as short and even contain the same words as ready movesmoves are the building blocks for conversational game structure which reflects the goal structure of the dialoguein the move coding a set of initiating moves are differentiated all of which signal some kind of purpose in the dialoguefor instance instructions signal that the speaker intends the hearer to follow the command queries signal that the speaker intends to acquire the information requested and statements signal that the speaker intends the hearer to acquire the information givena conversational game is a sequence of moves starting with an initiation and encompassing all moves up until that initiation purpose is either fulfilled or abandonedthere are two important components of any game coding schemethe first is an identification of the game purpose in this case the purpose is identified simply by the name of the game initiating movethe second is some explanation of how games are related to each otherthe simplest paradigmatic relationships are implemented in computercomputer dialogue simulations such as those of power and houghton in these simulations once a game has been opened the participants work on the goal of the game until they both believe that it has been achieved or that it should be abandonedthis may involve embedding new games with subservient purposes to the toplevel one being played but the embedding structure is always clear and mutually understoodalthough some natural dialogue is this orderly much of it is not participants are free to initiate new games at any time and these new games can introduce new purposes rather than serving some purpose already present in the dialoguein addition natural dialogue participants often fail to make clear to their partners what their goals arethis makes it very difficult to develop a reliable coding scheme for complete game structurethe game coding scheme simply records those aspects of embedded structure that are of the most interestfirst the beginning of new games is coded naming the game purpose according to the game initiating movealthough all games begin with an initiating move not all initiating 
moves begin games since some of the initiating moves serve to continue existing games or remind the partner of the main purpose of the current game againsecond the place where games end or are abandoned is markedfinally games are marked as either occurring at top level or being embedded in the game structure and thus being subservient to some toplevel purposethe goal of these definitions is to give enough information to study relationships between game structure and other aspects of dialogue while keeping those relationships simple enough to codetransaction coding gives the subdialogue structure of complete taskoriented dialogues with each transaction being built up of several dialogue games and corresponding to one step of the taskin most map task dialogues the participants break the route into manageable segments and deal with them one by onebecause transaction structure for map task dialogues is so closely linked to what the participants do with the maps the maps are included in the analysisthe coding system has two components how route givers divide conveying the route into subtasks and what parts of the dialogue serve each of the subtasks and what actions the route follower takes and whenthe basic route giver coding identifies the start and end of each segment and the subdialogue that conveys that route segmenthowever map task participants do not always proceed along the route in an orderly fashion as confusions arise they often have to return to parts of the route that have already been discussed and that one or both of them thought had been successfully completedin addition participants occasionally overview an upcoming segment in order to provide a basic context for their partners without the expectation that their partners will be able to act upon their descriptions they also sometimes engage in subdialogues not relevant to any segment of the route sometimes about the experimental setup but often nothing at all to do with the taskthis gives four transaction types normal review overview and irrelevantother types of subdialogues are possible but are not included in the coding scheme because of their raritycoding involves marking where in the dialogue transcripts a transaction starts and which of the four types it is and for all but irrelevant transactions indicating the start and end point of the relevant route section using numbered crosses on a copy of the route giver mapthe ends of transactions are not explicitly coded because generally speaking transactions do not appear to nest for instance if a transaction is interrupted to review a previous route segment participants by and large restart the goal of the interrupted transaction afterwardsit is possible that transactions are simply too large for the participants to remember how to pick up where they left offnote that it is possible for several transactions to have the same starting point on the routethe basic route follower coding identifies whether the follower action was drawing a segment of the route or crossing out a previously drawn segment and the start and end points of the relevant segment indexed using numbered crosses on a copy of the route follower mapit is important to show that subjective coding distinctions can be understood and applied by people other than the coding developers both to make the coding credible in its own right and to establish that it is suitable for testing empirical hypotheseskrippendorff working within the field of content analysis describes a way of establishing reliability which applies 
herekrippendorff argues that there are three different tests of reliability with increasing strengththe first is stability also sometimes called testrest reliability or intertest variance a coder judgments should not change over timethe second is reproducibility or intercoder variance which requires different coders to code in the same waythe third is accuracy which requires coders to code in the same way as some known standardstability can be tested by having a single coder code the same data at different timesreproducibility can be tested by training several coders and comparing their resultsaccuracy can be tested by comparing the codings produced by these same coders to the standard if such a standard existswhere the standard is the coding of the scheme quotexpertquot developer the test simply shows how well the coding instructions fit the developer intentionwhichever type of reliability is being assessed most coding schemes involve placing units into one of n mutually exclusive categoriesthis is clearly true for the dialogue structure coding schemes described here once the dialogues have been segmented into appropriately sized unitsless obviously segmentation also often fits this descriptionif there is a natural set of possible segment boundaries that can be treated as units one can recast segmentation as classifying possible segment boundaries as either actual segment boundaries or nonboundariesthus for both classification and segmentation the basic question is what level of agreement coders reach under the reliability testsit has been argued elsewhere that since the amount of agreement one would expect by chance depends on the number and relative frequencies of the categories under test reliability for category classifications should be measured using the kappa coefficienteven with a good yardstick however care is needed to determine from such figures whether or not the exhibited agreement is acceptable as krippendorff explainsreliability in essence measures the amount of noise in the data whether or not that will interfere with results depends on where the noise is and the strength of the relationship being measuredas a result krippendorff warns against taking overall reliability figures too seriously in favor of always calculating reliability with respect to the particular hypothesis under testusing a a generalized version of kappa which also works for ordinal interval and ratioscaled data he remarks that a reasonable rule of thumb for associations between two variables that both rely on subjective distinctions is to require a 8 with 67 a 8 allowing tentative conclusions to be drawnkrippendorff also describes an experiment by brouwer in which englishspeaking coders reached a 44 on the task of assigning television characters to categories with complicated dutch names that did not resemble english wordsit is interesting to note that medical researchers have agreed on much less strict guidelines first drawn up by landis and koch who call k 0 quotpoorquot agreement 0 to 2 quotslightquot 21 to 40 quotfairquot 41 to 60 quotmoderatequot 61 80 quotsubstantialquot and 81 to 1 quotnear perfectquotlandis and koch describe these ratings as quotclearly arbitrary but useful benchmarksquot krippendorff also points out that where one coding distinction relies on the results of another the second distinction cannot be reasonable unless the first also isfor instance it would be odd to consider a classification scheme acceptable if coders were unable to agree on how to identify units in the first 
placein addition when assessing segmentation it is important to choose the class of possible boundaries sensiblyalthough kappa corrects for chance expected agreement it is still susceptible to order of magnitude differences in the number of units being classified when the absolute number of units placed in one of the categories remains the samefor instance one would obtain different values for kappa on agreement for move segment boundaries using transcribed word boundaries and transcribed letter boundaries simply because there are so many extra agreed nonboundaries in the transcribed letter casedespite these warnings kappa has clear advantages over simpler metrics and can be interpreted as long as appropriate care is usedthe main move and game crosscoding study involved four coders all of whom had already coded substantial portions of the map task corpusfor this study they simply segmented and coded four dialogues using their normal working procedures which included access to the speech as well as the transcriptsall of the coders interacted verbally with the coding developers making it harder to say what they agree upon than if they had worked solely from written instructionson the other hand this is a common failing of coding schemes and in some circumstances it can be more important to get the ideas of the coding scheme across than to tightly control how it is done431 reliability of move segmentationfirst the move coders agree on how to segment a dialogue into movestwo different measures of agreement are usefulin the first kappa is used to assess agreement on whether or not transcribed word boundaries are also move segment boundarieson average the coders marked move boundaries roughly every 57 words so that there were roughly 47 times as many word boundaries that were not marked as move boundaries as word boundaries that werethe second measure similar to information retrieval metrics is the actual agreement reached measuring pairwise over all locations where any coder marked a boundarythat is the measure considers each place where any coder marked a boundary and averages the ratio of the number of pairs of coders who agreed about that location over the total number of coder pairsnote that it would not be possible to define quotunitquot in the same way for use in kappa because then it would not be possible for the coders to agree on a nonboundary classificationpairwise percent agreement is the best measure to use in assessing segmentation tasks when there is no reasonable independent definition of units to use as the basis of kappait is provided for readers who are skeptical about our use of transcribed word boundariesthe move coders reached k 92 using word boundaries as units pairwise percent agreement on locations where any coder had marked a move boundary was 89 most of the disagreement fell into one of two categoriesfirst some coders marked a ready move but the others included the same material in the move that followedone coder in particular was more likely to mark ready moves indicating either greater vigilance or a less restrictive definitionsecond some coders marked a reply while others split the reply into a reply plus some sort of move conveying further information not strictly elicited by the opening question this confusion was general suggesting that it might be useful to think more carefully about the difference between answering a question and providing further informationit also suggests possible problems with the clarify category since unlike explain and instruct moves 
most clarify moves follow replies and since clarify moves are intended to contain unelicited informationhowever in general the agreement on segmentation reached was very good and certainly provides a solid enough foundation for more classification432 reliability of move classificationthe argument that move classification is reliable uses the kappa coefficient units in this case are moves for which all move coders agreed on the boundaries surrounding the movenote that it is only possible to measure reliability of move classification over move segments where the boundaries were agreedthe more unreliable the segmentation the more data must be omittedclassification results can only be interpreted if the underlying segmentation is reasonably robustoverall agreement on the entire coding scheme was good with the largest confusions between check and queryyn instruct and clarify and acknowledge ready and replyycombining categories agreement was also very good for whether a move was an initiation type or a response or ready typefor agreed initiations themselves agreement was very high on whether the initiation was a command a statement or one of the question types coders were also able to agree on the subclass of question coders could also reliably classify agreed responses as acknowledge clarify or one of the reply categories however coders had a little more difficulty distinguishing between different types of moves that all contribute new unelicited information sponsored by the university of pennsylvania three nonhcrc computational linguists and one of the original coding developers who had not done much coding move coded a map task dialogue from written instructions only using just the transcript and not the speech sourceagreement on move classification was k 69 leaving the coding developer out of the coder pool did not change the results suggesting that the instructions conveyed his intentions fairly wellthe coding developer matched the official map task coding almost entirelyone coder never used the check move when that coder was removed from the pool k 73 when check and queryyn were conflated agreement was k 77 agreement on whether a move was an initiation response or ready type was good surprisingly nonhcrc coders appeared to be able to distinguish the clarify move better than inhouse codersthis amount of agreement seems acceptable given that this was a first coding attempt for most of these coders and was probably done quicklycoders generally become more consistent with experience level of coding most useful for work in other domainsto test how well the scheme would transfer it was applied by two of the coders from the main move reliability study to a transcribed conversation between a hifi sales assistant and a married couple intending to purchase an amplifierdialogue openings and closings were omitted since they are well understood but do not correspond to categories in the classification schemethe coders reached k 95 on the move segmentation task using word boundaries as possible move boundaries and k 81 for move classificationthese results are in line with those from the main trialthe coders recommended adding a new move category specifically for when one conversant completes or echoes an utterance begun by another conversantneither of the coders used instruct ready or check moves for this dialoguethe game coding results come from the same study as the results for the expert move crosscoding resultssince games nest it is not possible to analyze game segmentation in the same way as was 
done for movesmoreover it is possible for a set of coders to agree on where the game begins and not where it ends but still believe that the game has the same goal since the game goal is largely defined by its initiating utterancetherefore the best analysis considers how well coders agree on where games start and for agreed starts where they endsince game beginnings are rare compared to word boundaries pairwise percent agreement is usedcalculating as described coders reached promising but not entirely reassuring agreement on where games began although one coder tended to have longer games than the others there was no striking pattern of disagreementwhere the coders managed to agree on the beginning of a game they also tended to agree on what type of game it was although this is not the same as agreeing on the category of an initiating move because not all initiating moves begin games disagreement stems from the same move naming confusions there was also confusion about whether a game with an agreed beginning was embedded or not the question of where a game ends is related to the embedding subcode since games end after other games that are embedded within themusing just the games for which all four coders agreed on the beginning the coders reached 65 pairwise percent agreement on where the game endedthe abandoned game subcode turned out to be so scarce in the crosscoding study that it was not possible to calculate agreement for it but agreement is probably poorsome coders have commented that the coding practice was unstructured enough that it was easy to forget to use the subcodeto determine stability the most experienced coder completed the same dialogue twice two months and many dialogues apartshe reached better agreement on where games began suggesting that one way to improve the coding would be to formalize more clearly the distinctions that she believes herself to usewhen she agreed with herself on where a game began she also agreed well with herself about what game it was whether or not games were embedded and where the games ended there were not enough instances of abandoned games marked to test formally but she did not appear to use the coding consistentlyin general the results of the game crosscoding show that the coders usually agree especially on what game category to use but when the dialogue participants begin to overlap their utterances or fail to address each other concerns clearly the game coders have some difficulty agreeing on where to place game boundarieshowever individual coders can develop a stable sense of game structure and therefore if necessary it should be possible to improve the coding schemeunlike the other coding schemes transaction coding was designed from the beginning to be done solely from written instructionssince it is possible to tell uncontroversially from the video what the route follower drew and when they drew it reliability has only been tested for the other parts of the transaction coding schemethe replication involved four naive coders and the quotexpertquot developer of the coding instructionsall four coders were postgraduate students at the university of edinburgh none of them had prior experience of the map task or of dialogue or discourse analysisall four dialogues used different maps and differently shaped routesto simplify the task coders worked from maps and transcriptssince intonational cues can be necessary for disambiguating whether some phrases such as quotokquot and quotrightquot close a transaction or open a new one coders were 
instructed to place boundaries only at particular sites in the transcripts which were marked with blank linesthese sites were all conversational move boundaries except those between ready moves and the moves following themnote that such move boundaries form a set of independently derived units which can be used to calculate agreement on transaction segmentationthe transcripts did not name the moves or indicate why the potential transaction boundaries were placed where they wereeach subject was given the coding instructions and a sample dialogue extract and pair of maps to take away and examine at leisurethe coders were asked to return with the dialogue extract codedwhen they returned they were given a chance to ask questionsthey were then given the four complete dialogues and maps to take away and code in their own timethe four coders did not speak to each other about the exercisethree of the four coders asked for clarification of the overview distinction which turned out to be a major source of unreliability there were no other queries451 measuresoverall each coder marked roughly a tenth of move boundaries as transaction boundarieswhen all coders were taken together as a group the agreement reached on whether or not conversational move boundaries are transaction boundaries was k 59 the same level of agreement was reached when the expert was left out of the poolthis suggests the disagreement is general rather than arising from problems with the written instructionskappa for different pairings of naive coders with the expert were 68 65 53 and 43 showing considerable variation from subject to subjectnote that the expert interacted minimally with the coders and therefore differences were not due to trainingagreement on the placement of map reference points was good where the coders agreed that a boundary existed they almost invariably placed the begin and end points of their segments within the same four centimeter segment of the route and often much closer as measured on the original a3 mapsin contrast the closest points that did not refer to the same boundary were usually five centimeters apart and often much furtherthe study was too small for formal results about transaction categoryfor 64 out of 78 boundaries marked by at least two coders the category was agreed452 diagnosticsbecause this study was relatively small problems were diagnosed by looking at coding mismatches directly rather than by using statistical techniquescoders disagreed on where to place boundaries with respect to introductory questions about a route segment and attempts by the route follower to move on both of these confusions can be corrected by clarifying the instructionsin addition there were a few cases where coders were allowed to place a boundary on either side of a discourse marker but the coders did not agreeusing the speech would probably help but most uses of transaction coding would not require boundary placement this preciseoverview transactions were too rare to be reliable or useful and should be dropped from future coding systemsfinally coders had a problem with quotgrain sizequot one coder had many fewer transactions than the other coders with each transaction covering a segment of the route which other coders split into two or more transactions indicating that he thought the route givers were planning ahead much further than the other coders didthis is a general problem for discourse and dialogue segmentationgreene and cappella show very good reliability for a monologue segmentation task based on the 
quotideaquot structure of the monologue but they explicitly tell the coders that most segments are made up of two or three clausesdescribing a typical size may improve agreement but might also weaken the influence of the real segmentation criteriain addition higherlevel segments such as transactions vary in size considerablymore discussion between the expert and the novices might also improve agreement on segmentation but would make it more difficult for others to apply the coding systemssubjective coding has been described for three different levels of taskoriented dialogue structure called conversational moves games and transactions and the reliability of all three kinds of coding discussedthe codings were devised for use with the hcrc map task corpusthe move coding divides the dialogue up into segments corresponding to the different discourse goals of the participants and classifies the segments into 1 of 12 different categories some of which initiate a discourse expectation and some of which respond to an existing expectationthe coders were able to reproduce the most important aspects of the coding reliably such as move segmentation classifying moves as initiations or responses and subclassifying initiation and response typesthe game coding shows how moves are related to each other by placing into one game all moves that contribute to the same discourse goal including the possibility of embedded games such as those corresponding to clarification questionsthe game coding was somewhat less reproducible but still reasonableindividual coders can come to internally stable views of game structurefinally the transaction coding divides the entire dialogue into subdialogues corresponding to major steps in the participants plan for completing the taskalthough transaction coding has some problems the coding can be improved by correcting a few common confusionsgame and move coding have been completed on the entire 128 dialogue map task corpus transaction coding is still experimentalgame and move coding are currently being used to study intonation both in oneword english utterances and in longer utterances across languages the differences between audioonly facetoface textbased and videomediated communication and the characteristics of dialogue where one of the participants is a nonfluent brocatype aphasic in addition the move coded corpus has been used to train a program to spot the dialogue move category based on typical word patterns in aid of speech recognition the move categories themselves have been incorporated into a computational model of move goals within a spoken dialogue system in order to help the system predict what move the user is making this work was completed within the dialogue group of the human communication research centreit was funded by an interdisciplinary research centre grant from the economic and social research council to the universities of edinburgh and glasgow and grant number g9111013 of the joint councils initiativeauthors jc and al are responsible for developing the transaction coding scheme and for carrying out the reliability studies all authors contributed to the development of the move and game coding schemeswe would like to thank our anonymous reviewers for their comments on the draft manuscript
J97-1002
The reliability of a dialogue structure coding scheme. This paper describes the reliability of a dialogue structure coding scheme based on utterance function, game structure, and higher-level transaction structure that has been applied to a corpus of spontaneous task-oriented spoken dialogues. We computed agreement on a coarse segmentation level that was constructed on top of the finer segments by determining how well coders agreed on where the coarse segments started and, for agreed starts, by computing how well coders agreed on where the coarse segments ended.
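The boundary-agreement figures reported above use pairwise percent agreement: every location where at least one coder marked a boundary is considered, and the proportion of coder pairs that agree at each such location is averaged. The sketch below implements one natural reading of that description; the coders' boundary positions are invented, and details such as how candidate locations are indexed may differ from the original study.

```python
from itertools import combinations

def pairwise_percent_agreement(boundary_sets):
    """Average pairwise agreement over locations where any coder marked a boundary.

    `boundary_sets` is one set of marked boundary positions per coder.  For each
    position marked by at least one coder, a pair of coders agrees if both marked
    it or both left it unmarked; the score averages agreeing pairs / total pairs.
    """
    coders = list(boundary_sets)
    candidate_locations = set().union(*coders)
    pairs = list(combinations(range(len(coders)), 2))
    total = 0.0
    for loc in candidate_locations:
        agreeing = sum((loc in coders[a]) == (loc in coders[b]) for a, b in pairs)
        total += agreeing / len(pairs)
    return total / len(candidate_locations)

if __name__ == "__main__":
    # Hypothetical boundary positions (e.g., indices of candidate move
    # boundaries) for four coders; not data from the study.
    coders = [{3, 8, 15, 22}, {3, 8, 15}, {3, 9, 15, 22}, {3, 8, 15, 22}]
    print(round(pairwise_percent_agreement(coders), 3))
```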
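The classification results are reported with the kappa coefficient, which corrects raw agreement for the agreement expected by chance. As a rough illustration, this sketch computes a Fleiss-style multi-rater kappa from an items-by-coders table of category labels; this is one common instantiation and may differ in detail from the exact statistic used in the study. The move labels in the example are invented.

```python
from collections import Counter

def multi_rater_kappa(labels):
    """Fleiss-style kappa for an items x coders table of category labels.

    kappa = (P_obs - P_exp) / (1 - P_exp), where P_obs is the mean proportion
    of agreeing coder pairs per item and P_exp is the chance agreement derived
    from the pooled category proportions.
    """
    n_items = len(labels)
    n_coders = len(labels[0])

    # Observed agreement: proportion of agreeing coder pairs, averaged over items.
    p_obs = 0.0
    for row in labels:
        counts = Counter(row)
        agreeing_pairs = sum(c * (c - 1) for c in counts.values())
        p_obs += agreeing_pairs / (n_coders * (n_coders - 1))
    p_obs /= n_items

    # Chance agreement from the overall category distribution.
    pooled = Counter(c for row in labels for c in row)
    total = n_items * n_coders
    p_exp = sum((v / total) ** 2 for v in pooled.values())

    return (p_obs - p_exp) / (1 - p_exp)

if __name__ == "__main__":
    # Hypothetical move labels assigned by four coders to six moves.
    table = [
        ["instruct", "instruct", "instruct", "instruct"],
        ["check", "queryyn", "check", "check"],
        ["acknowledge", "acknowledge", "ready", "acknowledge"],
        ["explain", "explain", "explain", "clarify"],
        ["replyy", "replyy", "replyy", "replyy"],
        ["align", "align", "align", "align"],
    ]
    print(round(multi_rater_kappa(table), 3))
```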
texttiling segmenting text into multiparagraph subtopic passages text tiling is a technique for subdividing texts into multiparagraph units that represent passages or subtopics the discourse cues for identifying major subtopic shifts are patterns of lexical cooccurrence and distribution the algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts multiparagraph subtopic segmentation should be useful for many text analysis tasks including information retrieval and summarization xerox parc text tiling is a technique for subdividing texts into multiparagraph units that represent passages or subtopicsthe discourse cues for identifying major subtopic shifts are patterns of lexical cooccurrence and distributionthe algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 textsmultiparagraph subtopic segmentation should be useful for many text analysis tasks including information retrieval and summarizationmost work in discourse processing both theoretical and computational has focused on analysis of interclausal or intersentential phenomenathis level of analysis is important for many discourseprocessing tasks such as anaphor resolution and dialogue generationhowever important and interesting discourse phenomena also occur at the level of the paragraphthis article describes a paragraphlevel model of discourse structure based on the notion of subtopic shift and an algorithm for subdividing expository texts into multiparagraph quotpassagesquot or subtopic segmentsin this work the structure of an expository text is characterized as a sequence of subtopical discussions that occur in the context of one or more main topic discussionsconsider a 21paragraph science news article called stargazers whose main topic is the existence of life on earth and other planetsits contents can be described as consisting of the following subtopic discussions subtopic structure is sometimes marked in technical texts by headings and subheadingsbrown and yule state that this kind of division is one of the most basic in discoursehowever many expository texts consist of long sequences of paragraphs with very little structural demarcation and for these a subtopical segmentation can be usefulthis article describes fully implemented techniques for the automatic detection of multiparagraph subtopical structurebecause the goal is to partition texts into contiguous nonoverlapping subtopic segments i call the general approach texttiling 1 subtopic discussions are assumed to occur within the scope of one or more overarching main topics which span the length of the textthis twolevel structure is chosen for reasons of computational feasibility and for the purposes of the application types described belowtexttiling makes use of patterns of lexical cooccurrence and distributionthe algorithm has three parts tokenization into terms and sentencesized units determination of a score for each sentencesized unit and detection of the subtopic boundaries which are assumed to occur at the largest valleys in the graph that results from plotting sentenceunits against scoresthree methods for score assignment have been explored blocks vocabulary introductions and chains although only the first two are evaluated in this article all three scoring methods make use only of patterns of lexical cooccurrence and distribution within texts eschewing other kinds of discourse cuesthe 
ultimate goal of passagelevel structuring is not just to identify the subtopic units but also to identify and label their subject matterthis article focuses only on the discovery of the segment boundaries but there is extensive ongoing research on automated topic classification most classification work focuses on identifying main topic as opposed to texttiling method of finding both globally distributed main topics and locally occurring subtopics nevertheless variations on some existing algorithms should be applicable to subtopic classificationthe next section argues for the need for algorithms that can detect multiparagraph subtopic structure and discusses application areas that should benefit from such structuresection 3 describes in more detail what is meant in this article by quotsubtopicquot and presents a description of the discourse model that underlies this worksection 4 introduces the general framework of using lexical cooccurrence information for detecting subtopic shift and describes other related work in empirical discourse analysisthe texttiling algorithms are described in more detail in section 5 and their performance is assessed in section 6finally section 7 summarizes the work and describes future directionsin school we are taught that paragraphs are to be written as coherent selfcontained units complete with topic sentence and summary sentencein realworld text these expectations are often not metparagraph markings are not always used to indicate a change in discussion but instead can sometimes be invoked just to break up the physical appearance of the text in order to aid reading a conspicuous example of this practice can be found in the layout of the columns of text in many newspapers brown and yule note that text genre has a strong influence on the role of paragraph markings and that markings differ for different languageshinds also suggests that different discourse types have different organizing principlesalthough most discourse segmentation work is done at a finer granularity than that suggested here multiparagraph segmentation has many potential applicationstexttiling is geared towards expository text that is text that explicitly explains or teaches as opposed to say literary texts since expository text is better suited to the main target applications of information retrieval and summarizationmore specifically texttiling is meant to apply to expository text that is not heavily stylized or structured and for simplicity does not make use of headings or other kinds of orthographic informationa typical example is a 5page science magazine article or a 20page environmental impact reportthis section concentrates on two application areas for which the need for multiparagraph units has been recognized hypertext display and information retrievalthere are also potential applications in some other areas such as text summarizationsome summarization algorithms extract sentences directly from the textthese methods make use of information about the relative positions of the sentences in the text however these methods do not use subtopic structure to guide their choices focusing more on the beginning and ending of the document and on position within paragraphspaice recognizes the need for taking topical structure into account but does not suggest a method for determining such structureanother area that models the multiparagraph unit is automated text generationmooney carberry and mccoy present a method centered around the notion of basic blocks multiparagraph units of text each of 
which consists of an organizational focus such as a person or a location and a set of concepts related to that focustheir scheme emphasizes the importance of organizing the highlevel structure of a text according to its topical content and afterwards incorporating the necessary related information as reflected in discourse cues in a finergrained passresearch in hypertext and text display has produced hypotheses about how textual information should be displayed to usersone study of an online documentation system compares display of finegrained portions of text full texts and intermediatesized unitsgirill finds that divisions at the finegrained level are less efficient to manage and less effective in delivering useful answers than intermediatesized units of textgirill does not make a commitment about exactly how large the desired text unit should be but talks about quotpassagesquot and describes passages in terms of the communicative goals they accomplish the implication is that the proper unit is the one that groups together the information that performs some communicative function in most cases this unit will range from one to several paragraphstombaugh lickorish and wright explore issues relating to ease of readability of long texts on crt screenstheir study explores the usefulness of multiple windows for organizing the contents of long texts hypothesizing that providing readers with spatial cues about the location of portions of previously read texts will aid in their recall of the information and their ability to quickly locate information that has already been read oncein the experiment the text is divided using premarked sectional information and one section is placed in each windowthey conclude that segmenting the text by means of multiple windows can be very helpful if readers are familiar with the mechanisms supplied for manipulating the displayconverting text to hypertext in what is called post hoc authoring requires division of the original text into meaningful units as well as meaningful interconnection of the unitsautomated multiparagraph segmentation should help with the first step of this process and is more important than ever now that preexisting documents are being put up for display on the world wide websalton et al have recognized the need for multiparagraph units in the automatic creation of hypertext links as well as theme generation in the field of information retrieval there has recently been a surge of interest in the role of passages in full textuntil very recently most information retrieval experiments made use only of titles and abstracts bibliographic entries or very short newswire articles as opposed to full textwhen long texts are available there arises the question can retrieval results be improved if the query is compared against only a passage or subpart of the text as opposed to the text as a wholeand if so what size unit should be usedin this context quotpassagequot refers to any segment of text isolated from the full textthis includes authordetermined segments marked orthographically andor automatically derived units of text including fixedlength blocks segments motivated by subtopic structure or segments motivated by properties of the query hearst and plaunt in some early passagebased retrieval experiments report improved results using passages over fulltext documents but do not find a significant difference between using motivated subtopic segments and arbitrarily chosen block lengths that approximated the average subtopic segment lengthsalton allan 
and buckley working with encyclopedia text find that comparing a query against orthographically marked sections and then paragraphs is more successful than comparing against full documents alonemoffat et al find somewhat surprisingly that manually supplied sectioning information may lead to poorer retrieval results than techniques that automatically subdivide the textthey compare two methods of subdividing long textsthe first consists of using authorsupplied sectioning informationthe second uses a heuristic in which small numbers of paragraphs are grouped together until they exceed a size thresholdthe results are that the small artificial multiparagraph groupings seemed to perform better than the authorsupplied sectioning information more experiments in this vein are necessary to firmly establish this result but it does lend support to the conjecture that multiparagraph subtopicsized segments such as those produced by texttiling are useful for similaritybased comparisons in information retrievalit will not be surprising if motivated subtopic segments are not found to perform significantly better than appropriately sized but arbitrarily segmented units in a coarsegrained information retrieval evaluationat trec the most prominent information retrieval evaluation platform the top 1000 documents are evaluated for each query and the bestperforming systems tend to use very simple statistical methods for ranking documentsin this kind of evaluation methodology subtle distinctions in analysis techniques tend to be lost whether those distinctions be how accurately words are reduced to their roots or exactly how passages are subdividedthe results of hearst and plaunt salton allan and buckley and moffat et al suggest that it is the nature of the intermediate size of the passages that mattersperhaps a more appropriate use of motivated segment information is in the display of information to the userone obvious way to use segmentation information is to have the system display the passages with the closest similarity to the query and to display a passagebased summary of the documents contentsas a more elaborate example of using segmentation in fulltext information access i have used the results of texttiling in a new paradigm for display of retrieval results this approach called tilebars allows the user to make informed decisions about which documents and which passages of those documents to view based on the distributional behavior of the query terms in the documentstilebars allows users to specify different sets of query terms as discussed laterthe goal is to simultaneously and compactly indicate texttiling is used to partition each document in advance into a set of multiparagraph subtopical segmentsfigure 1 shows an example query about automated systems for medical diagnosis run over the ziff portion of the tipster collection each large rectangle next to a title indicates a document and each square within the rectangle represents a texttile in the documentthe darker the tile the more frequent the term the top row of each rectangle corresponds to the hits for term set 1 the middle row to hits for term set 2 and the bottom row to hits for term set 3the first column of each rectangle corresponds to the first texttile of the document the second column to the second texttile and so onthe patterns of graylevel are meant to provide a compact summary of which passages of the document matched which topics of the queryusers queries are written as lists of words where each list or term set is meant to correspond 
to a different component of the query2 this list of words is then translated into conjunctive normal formfor example the query in the figure is translated by the system as and and this formulation allows the interface to reflect each conceptual part of the query the medical terms the diagnosis terms and the software termsthe document whose title begins quotva automation means faster admissionsquot is quite likely to be relevant to the query and has hits on all three term sets throughout the documentby contrast the document whose title begins quotit is hard to ghostbust a network quot is about computeraided diagnosis but has only a passing reference to medical diagnosis as can be seen by the graphical representationthis version of the tilebars interface allows the user to filter the retrieved documents according to which aspects of the query are most importantfor example if the user decides that medical terms should be better represented the mm hits or min the tilebars display on a query about automated systems for medical diagnosis acmdistribution constraint on this term set can be adjusted accordinglymin hits indicates the minimum number of times words from a term set must appear in the document in order for it to be displayedsimilarly min distribution indicates the minimum percentage of tiles that must have a representative from the term setthe setting min overlap span refers to the minimum number of tiles that must have at least one hit from each of the three term setsin figure 1 the user has indicated that the diagnosis aspect of the query must be strongly present in the retrieved documents by setting the min distribution to 30 for the second term setwhen the user mouseclicks on a square in a tilebar the corresponding document is displayed beginning at the selected texttilethus the user can also view the subtopic structure within the document itselfthis section has discussed why multiparagraph segmentation is important and how it might be usedthe next section elaborates on what is meant by multiparagraph subtopic structure casting the problem in terms of detection of topic or subtopic shiftin order to describe the detection of subtopic structure it is important to define the phenomenon of interestthe use of the term subtopic here is meant to signify pieces of text quotaboutquot something and is not to be confused with the topiccomment distinction also known as the givennew contrast found within individual sentencesthe difficulty of defining the notion of topic is discussed at length in brown and yule they note the notion of topic is clearly an intuitively satisfactory way of describing the unifying principle which makes one stretch of discourse about something and the next stretch about something else for it is appealed to very frequently in the discourse analysis literatureyet the basis for the identification of topic is rarely made explicit after many pages of attempting to pin the concept down they suggest as one alternative investigating topicshift markers instead it has been suggested that instead of undertaking the difficult task of attempting to define what a topic is we should concentrate on describing what we recognize as topic shiftthat is between two contiguous pieces of discourse which are intuitively considered to have two different topics there should be a point at which the shift from one topic to the next is markedif we can characterize this marking of topicshift then we shall have found a structural basis for dividing up stretches of discourse into a series of smaller 
units each on a separate topicthe burden of analysis is consequently transferred to identifying the formal markers of topicshift in discourse this notion of looking for a shift in content bears a close resemblance to chafe notion of the flow model of discourse in narrative texts in description of which he writes our data suggest that as a speaker moves from focus to focus there are certain points at which there may be a more or less radical change in space time character configuration event structure or even world at points where all of these change in a maximal way an episode boundary is strongly presentbut often one or another will change considerably while others will change less radically and all kinds of varied interactions between these several factors are possible thus rather than identifying topics per se several theoretical discourse analysts have suggested that changes or shifts in topic can be more readily identified and discussedtexttiling adopts this stancethe problem remains then of how to detect subtopic shiftbrown and yule consider in detail two markers adverbial clauses and certain kinds of prosodic markersby contrast the next subsection will show that lexical cooccurrence patterns can be used to identify subtopic shiftmuch of the current work in empirical discourse processing makes use of hierarchical discourse models and several prominent theories of discourse assume a hierarchical segmentation modelforemost among these are the attentionalintentional structure of grosz and sidner and the rhetorical structure theory of mann and thompson the building blocks for these theories are phrasal or clausal units and the targets of the analyses are usually very short texts typically one to three paragraphs in lengthmany problems in discourse analysis such as dialogue generation and turntaking require finegrained hierarchical models that are concerned with utterancelevel segmentationprogress is being made in the automatic detection of boundaries at this level of granularity using machine learning techniques combined with a variety of wellchosen discourse cues in contrast texttiling has the goal of identifying major subtopic boundaries attempting only a linear segmentationwe should expect to see in grouping together paragraphsized units instead of utterances a decrease in the complexity of the feature set and algorithm neededthe work described here makes use only of lexical distribution information in lieu of prosodic cues such as intonational pitch pause and duration discourse markers such as oh well ok however pronoun reference resolution and tense and aspect from a computational viewpoint deducing textual topic structure from lexical occurrence information alone is appealing both because it is easy to compute and because discourse cues are sometimes misleading with respect to the topic structure texttiling assumes that a set of lexical items is in use during the course of a given subtopic discussion and when that subtopic changes a significant proportion of the vocabulary changes as wellthe algorithm is designed to recognize episode boundaries by determining where thematic components like those listed by chafe change in a maximal wayhowever unlike other researchers who have studied setting time characters and the other thematic factors that chafe mentions i attempt to determine where a relatively large set of active themes changes simultaneously regardless of the type of thematic factorthis is especially important in expository text in which the subject matter tends to structure 
the discourse more so than characters setting and so onfor example in the stargazers text introduced in section 1 a discussion of continental movement shoreline acreage and habitability gives way to a discussion of binary and unary star systemsthis is not so much a change in setting or character as a change in subject matterthe flow of subtopic structure as determined by lexical cooccurrence is illustrated graphically in figure 2this figure shows the distribution by sentence number of selected terms from the stargazers textthe number of times a given word occurs in a given sentence is shown with blank spaces indicating zero occurrenceswords that occur frequently throughout the text are often indicative of the main topic of the textwords that are less frequent but more uniform in distribution such as form and scientist tend to be neutral and do not provide much information about the divisions within the discussionsthe remaining words are what are of interest herethey are quotclumpedquot together and it is these clumps or groups that texttiling assumes are indicative of the subtopic structurethe problem of segmentation therefore becomes the problem of detecting where these clumps begin and endfor example words binary through planet have considerable overlap in sentences 58 to 78 and correspond to the subtopic discussion binarytrinary star systems make life unlikely shown in the outline in section 1there is also a welldemarcated cluster of terms between sentences 35 and 50 corresponding to the grouping together of paragraphs 10 11 and 12 by human judges who have read the text and to the subtopic discussion in section 1 of how the moon helped life evolve on earththese observations suggest that a very simple take on lexical cohesion relations can be used to determine subtopic boundarieshowever from the diagram it is evident that simply looking for chains of repeated terms is not sufficient for determining subtopic breakseven combining terms that are closely related semantically into single chains is insufficient since often several different themes are active within the same segmentfor example sentences 37 to 51 contain dense interactions among the terms move continent shoreline time species and life and all but the latter occur only in this regionbecause groups of words that are not necessarily closely related conceptually seem to work together to indicate subtopic structure i adopt a technique that can take into account the occurrences of multiple simultaneous themes rather than use chains of lexical cohesion relations alonethis viewpoint is also advocated by skorochodko who suggests discovering a text structure by dividing it up into sentences and seeing how much wordoverlap appears among the sentencesthe overlap forms a kind of infrastructure fully connected graphs might indicate dense discussions of a topic while long spindly chains of connectivity might indicate a sequential accountthe central idea is that of defining the structure of a text as a function of the connectivity patterns of the terms that comprise it in contrast with segmentation guided primarily by finegrained discourse cues such as register change and cue wordsmany researchers have noted that term repetition is a strong cohesion indicatorphillips suggests performing quotan analysis of the distribution of the selected text elements relative to each other in some suitable text interval for whatever patterns of association they may contract with each other as a function of repeated cooccurrencequot perhaps surprisingly however 
the results in Section 6 show that term repetition alone, independent of other discourse cues, can be a very useful indicator of subtopic structure. This may be less true in the case of narrative texts, which tend to use more variation in the way concepts are expressed and so may require that thesaural relations be used as well. It should be noted that other researchers have experimented with the display of patterns of cohesion cues other than lexical cohesion as tools for analyzing discourse structure. Grimes introduces span charts to show the interaction of various thematic devices, such as character identification, setting, and tense. Stoddard creates cohesion maps by assigning to each word a location on a two-dimensional grid corresponding to the word's position in the text.

To summarize: many discourse analysis tasks require a fine-grained hierarchical model and consequently require many kinds of discourse cues for segmentation in practice. TextTiling attempts a coarser-grained analysis and so gets away with using a simpler feature set. Additionally, if we think of subtopic segmentation in terms of detecting a shift from one discussion to the next, we can simplify the task to one of detecting where the use of one set of terms ends and another set begins. Figure 2 illustrates that lexical distribution information can be used to discover such subtopic shifts. The next subsections describe three different strategies for detecting subtopic shift. The first is based on the observations of this subsection: subtopics can be viewed as "clumps" of vocabulary, and the problem of segmentation is one of detecting these clumps. The following two subsections describe alternative techniques, derived by recasting other researchers' algorithms into a more appropriate framework for the TextTiling task.

In the block comparison algorithm, adjacent pairs of text blocks are compared for overall lexical similarity. The TextTiling algorithm requires that a score, called the lexical score, be computed for every sentence, or, more precisely, for the gap between every pair of sentences. The sketch in Figure 3 illustrates the scores computed for the block comparison algorithm. The figure shows a sequence of eight hypothetical sentences, their contents represented as columns of letters, where each letter represents a term, or word. The sentences are grouped into blocks of size k, where in this illustration k = 2. The more words the blocks have in common, the higher the lexical score at the gap between them. If a low lexical score is preceded by and followed by high lexical scores, this is assumed to indicate a shift in vocabulary corresponding to a subtopic change. The blocks act as moving windows over the text: several sentences can be contained within a block, but the blocks shift by only one sentence at a time. Thus, if there are k sentences within a block, each sentence occurs in k × 2 score computations. The current version of the block algorithm computes scores in a very simple manner, as the inner product of two vectors, where a vector contains the number of times each lexical item occurs in its corresponding block. The inner product is normalized to make the score fall between 0 and 1, inclusive. Figure 3 shows the computation of the scores at the gaps between sentences 2 and 3, between 4 and 5, and between 6 and 7.
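To make the moving-window scoring concrete, here is a minimal Python sketch of the unnormalized inner-product computation illustrated in Figure 3, with toy "sentences" represented as lists of single-letter terms and a block size of k = 2. The data and function names are invented for illustration; they are not the letters used in the actual figure.

```python
from collections import Counter

def block_score(left_sents, right_sents):
    """Unnormalized inner product of term counts in two blocks of sentences."""
    left = Counter(t for s in left_sents for t in s)
    right = Counter(t for s in right_sents for t in s)
    return sum(left[t] * right[t] for t in left if t in right)

def gap_scores(sentences, k=2):
    """Score each sentence gap that has k full sentences on either side.

    Gap g falls between sentences[g - 1] and sentences[g]; the two blocks are
    the k sentences ending at g - 1 and the k sentences starting at g.
    """
    return {g: block_score(sentences[g - k:g], sentences[g:g + k])
            for g in range(k, len(sentences) - k + 1)}

# Toy data in the spirit of Figure 3: eight "sentences", terms as letters.
sents = [list(s) for s in
         ["aabc", "abcd", "acde", "bcde", "cdef", "defg", "efgh", "fghh"]]
print(gap_scores(sents, k=2))
```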
The scores shown in the figure are simple, unnormalized inner products of the term frequencies in the blocks; for example, the gap between sentences 2 and 3 is assigned a score of 8, computed as 2·1 + 1·1 + 2·1 + 1·1 + 1·2. Results for this approach are reported in Section 6. After these scores are computed, the blocks are shifted by one sentence, so, for example, in addition to comparing sentences 3 and 4 against sentences 5 and 6, the algorithm compares sentences 4 and 5 against sentences 6 and 7.

[Figure 3 caption: Illustration of three ways to compute the lexical score at gaps between sentences. Numbers indicate a numbered sequence of sentences; columns of letters signify the terms in the given sentence. Blocks: dot product of vectors of word counts in the block on the left and the block on the right. Vocabulary introduction: the number of words that occur for the first time within the interval centered at the sentence gap. Chains: the number of active chains (terms that repeat within a threshold number of sentences) spanning the sentence gap.]

An earlier version of the algorithm weighted terms according to tf.idf weights from information retrieval. This weighting function computes, for each word, the number of times it occurs in the document (tf) times the inverse of the number of documents in a large collection that the term occurs in (idf), in this case with some normalizing constants. The idea is that terms that commonly occur throughout a collection are not necessarily good indicators of relevance to a query, precisely because they are so common, and so their importance is downweighted. Hearst posited that this argument should also apply to determining which words best distinguish one subtopic from another. However, the estimates of importance that tf.idf makes seem not to be accurate enough, within the scope of comparing adjacent pieces of text, to justify using this measure, and the results seem more robust when the words are weighted according to their frequency within the block alone.

Another recent analytic technique that makes use of lexical information is described in Youmans, which introduces a variant on type/token curves called the vocabulary-management profile. Type/token curves are simply plots of the number of unique words against the number of words in a text, starting with the first word and proceeding through the last. Youmans modifies this to keep track of how many first-time uses of words occur at the midpoint of every 35-word window in a text. Youmans's goal is to study the distribution of vocabulary in discourse rather than to segment it along topical lines, but, upon examining many English narratives, essays, and transcripts, he notices that sharp upturns after deep valleys in the curve "correlate closely to constituent boundaries and information flow." Youmans's analysis of the graphs is descriptive in nature, mainly attempting to identify the cause of each peak or valley in terms of a principle of narrative structure, and is done at a very fine-grained level. He discusses one text in detail, describing changes at the single-word level and focusing on within-paragraph and within-sentence events; examples of events are changes in characters, occurrences of dialogue, and descriptions of places, each of which ranges in length from one clause to a few sentences. He also finds that paragraph boundaries are not always predicted: sometimes the onset of a new paragraph is signaled by the occurrence of a valley in the graph, but often paragraph onset is not signaled until one or two sentences beyond the onset. One of Youmans's main foci is an attempt to cast the resulting peaks in terms of coordination and subordination relations; however, in the discussion he notes that this does not seem like an appropriate use of the graphs. No systematic evaluation
of the algorithm is presented nor is there any discussion of how one might automatically determine the significance of the peaks and valleysnomoto and nitta attempt to use youmans algorithm for distinguishing entire articles from one another when they are concatenated into a single filethey find that it quotfails to detect any significant pattern in the corpusquot i recast youmans algorithm into the texttiling framework renaming it the vocabulary introduction methodfigure 3 illustratesthe text is analyzed and the positions at which terms are first introduced are recorded a moving window is used again as in the blocks algorithm and this window corresponds to youmans intervalthe number of new terms that occur on either side of the midpoint or the sentence gap of interest are added together and plotted against sentence gap numberthis approach differs from that of youmans and nomoto and nitta in two main waysfirst nomoto and nitta use too large an interval300 words because this is approximately the average size needed for their implementation of the blocks version of texttilinglarge paragraphsized intervals for measuring introduction of new words seem unlikely to be useful since every paragraph of a given length should have approximately the same number of new words although those at the beginning of a subtopic segment will probably have slightly moreinstead i use interval lengths of size 40 closer to youmans suggestion of 35second the granularity at which youmans takes measurements is too fine since he plots the score at every wordsampling this frequently yields a very spiky plot from which it is quite difficult to draw conclusions at a paragraphsized granularityi 6 this might be explained in part by stark who shows that readers disagree measurably about where to place paragraph boundaries when presented with texts with those boundaries removed plot the score at every sentence gap thus eliminating the wide variation that is seen when measuring after each wordresults for this approach are reported in section 6morris and hirst pioneering work on computing discourse structure from lexical relations is a precursor to the work reported on hereinfluenced by halliday and hasan theory of lexical coherence morris developed an algorithm that finds chains of related terms via a comprehensive thesaurus for example the words residential and apartment both index the same thesaural category and can thus be considered to be in a coherence relation with one anotherthe chains are used to structure texts according to the attentionalintentional theory of discourse structure discussed abovethe extent of the lexical chains is assumed to correspond to the extent of a segmentthe algorithm also incorporates the notion of chain returnsrepetition of terms after a long hiatusto complete an intention that spans over a digressionthe boundaries of the segments correspond to the sentences that contain the first and last words of the chainsince the morris and hirst algorithm attempts to discover attentionalintentional structure its goals are different than those of texttilingspecifically the discourse structure it attempts to discover is hierarchical and more finegrained than that discussed heremorris provides five short example texts for which she has determined the intentional structure and states that the lexical chains generated by her algorithm provide a good indication of the segment boundaries that grosz and sidner theory assumesin morris and morris and hirst tables are presented showing the sentences spanned by the 
lexical chains and by the corresponding segments of the attentionalintentional structure but no formal evaluation is performedthis algorithm is not directly applicable for texttiling for several reasonsfirst many words are ambiguous and fall into more than one thesaurus classthis is not stated as a concern in morris work perhaps because the texts were short and presumably if a word were ambiguous the correct thesaurus class would nevertheless be chosen because the chainedto words would share only the correct thesaurus classhowever my experimentation with an implemented version of morris algorithm that made use of roget 1911 thesaurus when run on longer texts found ambiguous links to be a common oca44tence and detrimental to the algorithma thesaurusbased disambiguation algorithm may help alleviate this problem but another solution is to move away from thesaurus classes and use simple word cooccurrence instead since within a given text a word is usually used with only one sense the potential downside of this approach is that many useful links may be missedanother limitation of the morris algorithm is that it does not take advantage of or discuss how to account for the tendency for multiple simultaneous chains to occur over the same intention related to this is the fact that chains tend to overlap one another in long texts as can be seen in figure 2these two types of difficulties can be circumvented by recasting the morris algorithm to take advantage of the observations at the beginning of this sectionthree changes are made to the algorithm first no thesaurus classes are used second multiple chains are allowed to span an intention and third chains at all levels of intentions are analyzed simultaneouslyinstead of deciding which chain is the applicable one for a given intention it measures how many chains at all levels are active at each sentence gapthis approach is illustrated in figure 3a lexical chain for term t is considered active across a sentence gap if instances of t occur within some distance threshold of one anotherin the figure all three instances of the word a occur within the distance thresholdthe third b however follows too far after the second b to continue the chainthe score for the gap between 2 and 3 is simply the number of active chains that span this gapboundaries are determined as specified in section 5this variation of the texttiling algorithm is explored and evaluated in hearst as mentioned in section 2 salton and allan report work in the automatic detection of hypertext links and theme generation from large documents focusing primarily on encyclopedia textthey describe the application of similarity comparisons between articles sections and paragraphs within an encyclopedia both for creating links among related passages and for better facilitating retrieval of articles in response to user queriestheir approach finds similarities among the paragraphs of large documents using normalized tfidf term weighting scoring text segments according to a normalized inner product of vectors of these weights salton and allan do not try to determine the extents of passages within articles or sectionsinstead all paragraphs sections and articles are assigned pairwise similarity scores and links are drawn between those with the highest scores independent of their position within the textthis distinction is important because the difficulty in subtopic segmentation lies in detecting the subtle differences between adjacent text blocksa method that finds blocks with the topmost similarity to 
one another can succeed at finding the equivalent of the center of a subtopic extent but does not distinguish where one subtopic ends and the next beginsif the algorithm of salton and allan were transformed so that adjacent text units were compared and a method for determining where the similarity scores are low were used then it would resemble the blocks algorithm with tfidf weighting but without the use of overlapping text windowshowever a consequence of the fact that the vector space method is better at distinguishing similarities than differences is that similarity scores alone are probably less effective at finding the transition points between subtopic discussions than sequences of similarity scores using moving windows of text in the manner described abovesalton et al attempt to address a version of the subtopic segmentation problem by extending the algorithm to finding quottext pieces exhibiting internal consistency that can be distinguished from the remainder of the surrounding textquot as one part of this goal they seek what is called the text segment which is defined as quota contiguous piece of text that is linked internally but largely disconnected from the adjacent texttypically a segment might consist of introductory material or cover the exposition and development of the text or contain conclusions and resultsquot thus they do not address the subtopic detection task because they attempt only to find those segments of text that are strongly different than the surrounding textthey do this by comparing similarity between a paragraph and its four closest paragraph neighbors to the left and the rightif a similarity score between a pair of paragraphs does not exceed a threshold then the link between that pair is removedif a disconnected sequence of paragraphs is found that sequence is considered a text segmentthis algorithm is not evaluatedkozima describes an algorithm for the detection of text segments which are defined as quota sequence of clauses or sentences that display local coherencequot in narrative textkozima presents a very elaborate algorithm for computing the lexical cohesiveness of a window of words using spreading activation in a semantic network created from an english dictionarythe cohesion score is plotted against words and smoothed and boundaries are considered to fall at the lowestscoring wordsthis complex computation as opposed to simple term repetition may be necessary when working with narrative texts but no comparison of methods is donethe algorithm results are shown on one text but are not evaluated formallyreynar describes an algorithm similar to that of hearst and hearst and plaunt with a difference in the way in which the size of the blocks of adjacent regions are chosena greedy algorithm is used the algorithm begins with no boundaries then a boundary b is chosen which maximizes the lexical score resulting from comparing the block on the left whose extent ranges from b to the closest existing boundary on the left and similarly for the rightthis process is repeated until a prespecified number of boundaries have been chosenthis seems problematic since the initial comparisons are between very large text segments the first boundary is chosen by comparing the entire text to the right and left of the initial positionthe algorithm is evaluated only in terms of how well it distinguishes entire articles from one another when concatenated into one filethe precisionrecall tradeoffs varied widely on 660 wall street journal articles if the algorithm is allowed to be 
off by up to three sentences, it achieves precision of 80% with recall of 30%, and precision of 30% with recall of 92%.

The TextTiling algorithm for discovering subtopic structure using term repetition has three main parts; each is discussed in turn below. The methods for lexical score determination were outlined in Section 4, but more detail is presented here.

Tokenization refers to the division of the input text into individual lexical units and is sensitive to the format of the input text. For example, if the document has markup information, the header and other auxiliary information is skipped until the body of the text is located. Tokens that appear in the body of the text are converted to all lowercase characters and checked against a stop list of closed-class and other high-frequency words (in this case a list of 898 words, developed in a somewhat ad hoc manner). If the token is a stop word, it is not passed on to the next step; otherwise, the token is reduced to its root by a morphological analysis function based on that of Kartunnen, Koskenniemi, and Kaplan, converting regularly and irregularly inflected nouns and verbs to their roots. The text is subdivided into pseudosentences of a predefined size w, rather than using "real," syntactically determined sentences. This is done to allow for comparison between equal-sized units, since the number of shared terms between two long sentences and between a long and a short sentence would probably yield incomparable scores. For the purposes of the rest of the discussion, these groupings of tokens will be referred to as token-sequences. The morphologically analyzed token is stored in a table along with a record of the token-sequence number it occurred in and the number of times it appeared in that token-sequence. A record is also kept of the locations of the paragraph breaks within the text. Stop words contribute to the computation of the size of a token-sequence, but not to the computation of the similarity between blocks of text.

As mentioned above, two methods for determining the score to be assigned at each token-sequence gap are explored here. The first, block comparison, compares adjacent blocks of text to see how similar they are, according to how many words the adjacent blocks have in common. The second, the vocabulary introduction method, assigns a score to a token-sequence gap based on how many new words are seen in the interval in which it is the midpoint.

In the block comparison algorithm, adjacent pairs of blocks of token-sequences are compared for overall lexical similarity. The block size, labeled k, is the number of token-sequences that are grouped together into a block to be compared against an adjacent group of token-sequences. This value is meant to approximate the average paragraph length. Actual paragraphs are not used because their lengths can be highly irregular, leading to unbalanced comparisons, although perhaps with a clever normalizing scheme "real" paragraphs could be used. Similarity values are computed for every token-sequence gap number; that is, a score is assigned to token-sequence gap i corresponding to how similar the token-sequences from token-sequence i - k through i are to the token-sequences from i + 1 through i + k + 1. Note that this moving-window approach means that each token-sequence appears in k × 2 similarity computations. The lexical score for the similarity between blocks is calculated by a normalized inner product: given two text blocks b1 and b2, each with k token-sequences,

score(i) = Σ_t w(t, b1) w(t, b2) / sqrt( Σ_t w(t, b1)² · Σ_t w(t, b2)² ),

where t ranges over all the terms that have been registered during the tokenization step, and w(t, b) is the weight assigned to term t in block b. As mentioned in Section 4, in this version of the algorithm the weights on the terms are simply their frequencies within the block. This formula yields a score between 0 and 1, inclusive. These scores can be plotted, token-sequence number against similarity score; however, since similarity is measured between blocks b1 and b2, the score's x-axis coordinate falls between token-sequences i and i + 1. Rather than plotting a token-sequence number on the x-axis, the token-sequence gap number i is plotted instead.

The vocabulary introduction version of scoring is the ratio of new words in an interval divided by the length of that interval. Tokenization is as described above, eliminating stop words and performing morphological analysis. A score is then assigned to a token-sequence gap as follows: the number of never-yet-seen words in the token-sequence to the left of the gap is added to the number of never-yet-seen words in the token-sequence to the right, and this number is divided by the total number of tokens in the two token-sequences, or w × 2. Since in these experiments w is set to 20, this yields an interval length of 40, which is close to the value of 35 suggested as most useful by Youmans. As in the block version of the algorithm, the score is plotted at the token-sequence gap, and scores can range from 0 to 1, inclusive. The lexical score is computed as follows: for each token-sequence gap i, create a text interval b of length w × 2 centered around i, and let b be subdivided into two equal-length parts b1 and b2; then

score(i) = ( NumNewTerms(b1) + NumNewTerms(b2) ) / (w × 2),

where NumNewTerms(b) returns the number of terms in interval b seen for the first time in the text.
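The two scoring methods just defined can be sketched directly from the formulas above. The following Python functions are a simplified reconstruction, not the original implementation: they assume the token-sequences have already been produced by the tokenization step (lowercasing, stop-word removal, morphological reduction), and they gloss over details such as how partial blocks near the ends of the text are handled.

```python
import math
from collections import Counter

def block_similarity_scores(token_sequences, k=10):
    """Normalized inner product between the (up to) k token-sequences on
    either side of each token-sequence gap i (the gap after sequence i)."""
    scores = {}
    for i in range(len(token_sequences) - 1):
        b1 = Counter(t for seq in token_sequences[max(0, i - k + 1): i + 1]
                     for t in seq)
        b2 = Counter(t for seq in token_sequences[i + 1: i + 1 + k]
                     for t in seq)
        num = sum(b1[t] * b2[t] for t in b1 if t in b2)
        den = math.sqrt(sum(v * v for v in b1.values()) *
                        sum(v * v for v in b2.values()))
        scores[i] = num / den if den else 0.0
    return scores

def vocabulary_introduction_scores(token_sequences, w=20):
    """(new terms just left of the gap + new terms just right of it) / (w * 2),
    where "new" means seen for the first time anywhere earlier in the text."""
    seen, new_counts = set(), []
    for seq in token_sequences:
        new_counts.append(sum(1 for t in set(seq) if t not in seen))
        seen.update(seq)
    return {i: (new_counts[i] + new_counts[i + 1]) / (w * 2)
            for i in range(len(token_sequences) - 1)}
```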
Boundary identification is done identically for all lexical scoring methods and assigns a depth score (the depth of the valley) to each token-sequence gap. The depth score corresponds to how strongly the cues for a subtopic change on both sides of a given token-sequence gap, and it is based on the distance from the peaks on both sides of the valley to that valley; Figure 4 illustrates. In Figure 4, the depth score at gap a2 is relatively large: "deeper" valleys receive higher scores than shallower ones. More formally, for a given token-sequence gap i, the program records the lexical score of each token-sequence gap l to the left of i until the score for l + 1 is smaller than the score for l; similarly, for token-sequence gaps to the right of i, the program monitors the score of gap r until the score for r + 1 is less than that of r. Finally, score(l) - score(i) is added to score(r) - score(i), and the result is the depth score at i.

A potential problem with this scoring method is also illustrated in Figure 4: a small valley (at gap b4 in the sketch) can be said to "interrupt" the score for b2. As one safeguard, the algorithm uses smoothing, described below, to help eliminate small perturbations of this kind. Additionally, because the distance between y(b3) and y(b4) is small in such cases, this gap is less likely to be marked as a boundary than gaps like b2, which have large peak distances both to the left and to the right. This example illustrates the need to take into account the length of both sides of the valley, since a valley that has high peaks on both sides indicates not only that the vocabulary on the left has decreasing scores, but also that the vocabulary on the right has increasing scores, thus signaling a strong subtopic change.

Figure 4 shows another potentially problematic case, in which two strong peaks flank a long, flat valley; the question becomes which of gaps c2, c3, or both should be assigned a boundary. Such "plateaus" occur when vocabulary changes very gradually, and they reflect a poor fit of the corresponding portion of the document to the model assumed by TextTiling.

[Figure 4: a sketch illustrating the computation of depth scores in three different situations; the x-axis indicates token-sequence gap number and the y-axis indicates lexical score.]

When the plateau occurs over a longer stretch, it is usually reasonable to choose both bordering gaps as boundaries; however, when such a plateau occurs over a very short stretch of text, the algorithm is forced to make a somewhat arbitrary choice. Choices like these are cases in which the algorithm should probably make use of additional information, such as more localized lexical distribution information or perhaps more conventional discourse cues.

Note that the depth scores are based only on relative score information, ignoring absolute values. The justification for this is twofold. First, it helps make decisions in cases in which a gap's lexical score falls into the middle of the lexical score range but is flanked by tall peaks on either side, and this situation happens commonly enough to be important. Second, using relative rather than absolute scores helps avoid problems associated with situations like that of Figure 4, in which all gaps between c2 and c3 would be considered boundaries if only absolute scores were taken into account.

The depth scores are sorted and used to determine segment boundaries: the larger the score, the more likely it is that a boundary occurs at that location (modulo adjustments, as necessary, to place the boundaries at orthographically marked paragraphs). A proviso check is made to prevent the assignment of very close adjacent segment boundaries; currently, at least three intervening token-sequences are required between boundaries. This helps control for the fact that many texts have spurious header information and single-sentence paragraphs.

An alternative to this method of computing depth scores is to use the slope of the valley sides, or the "sharpness" of the vocabulary change. However, deeper valleys with smaller slopes indicate larger (although more gradual) shifts in vocabulary usage than shallower valleys with larger slopes, and so are preferable for detecting subtopic boundaries; furthermore, steep slopes can sometimes indicate a spurious change associated with a very short digression. The depth score is thus more robust for the purposes of subtopic boundary detection.

As mentioned above, the plot is smoothed to remove small dips, using average smoothing with a width of size s, as follows: for each token-sequence gap g and a small even number s, find the scores of the s/2 gaps to the left of g, the scores of the s/2 gaps to the right of g, and the score at g; take the average of these scores and assign it to g; repeat this procedure n times. The choice of smoothing function is somewhat arbitrary; other low-pass filters could be used instead.

The algorithm must also determine how many segments to assign to a document, since every paragraph is a potential segment boundary. Any attempt to make an absolute cutoff, even one normalized for the length of the document, is problematic, since there should be some relationship between the structure and style of the text and the number of segments assigned to it. As discussed above, a cutoff based on a particular valley depth is similarly problematic. Instead, I suggest making the cutoff a function of the characteristics of the depth scores for a given document, using their average and standard deviation: one version of this function entails drawing a boundary only if the depth score exceeds the average minus one standard deviation. This function can be varied to achieve correspondingly varying precision-recall tradeoffs; a higher precision but lower recall can be obtained by requiring depth scores to exceed the average minus one-half of the standard deviation instead.
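A compact sketch of the boundary-identification step just described, again with invented function names and simplified edge handling: average smoothing of width s repeated n times, depth scores obtained by climbing from each gap to the nearest peak on the left and on the right, and boundary selection that keeps the deepest valleys whose depth exceeds the mean minus one standard deviation (or minus half a standard deviation for the higher-precision setting), while enforcing a minimum separation between chosen gaps. Snapping boundaries to the nearest paragraph break is omitted here.

```python
import statistics

def smooth(scores, s=2, n=1):
    """Average smoothing: each gap score becomes the mean of itself and the
    s/2 scores on each side; the pass is repeated n times."""
    half = s // 2
    for _ in range(n):
        scores = [statistics.mean(scores[max(0, i - half): i + half + 1])
                  for i in range(len(scores))]
    return scores

def depth_scores(scores):
    """depth(i) = (score(l) - score(i)) + (score(r) - score(i)), where l and r
    are reached by walking left / right from i while the scores keep rising."""
    depths = []
    for i, y in enumerate(scores):
        l = i
        while l > 0 and scores[l - 1] >= scores[l]:
            l -= 1
        r = i
        while r < len(scores) - 1 and scores[r + 1] >= scores[r]:
            r += 1
        depths.append((scores[l] - y) + (scores[r] - y))
    return depths

def select_boundaries(depths, min_separation=3, strict=False):
    """Keep gaps whose depth exceeds mean - stdev (mean - stdev/2 if strict),
    deepest first, skipping gaps too close to an already chosen boundary."""
    mean, sd = statistics.mean(depths), statistics.pstdev(depths)
    cutoff = mean - (sd / 2 if strict else sd)
    chosen = []
    for i in sorted(range(len(depths)), key=depths.__getitem__, reverse=True):
        if depths[i] <= cutoff:
            break
        if all(abs(i - j) > min_separation for j in chosen):
            chosen.append(i)
    return sorted(chosen)
```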
There are several ways to evaluate a segmentation algorithm, including comparing its segmentation against that of human judges, comparing it against author-specified orthographic information, and comparing it against other automated segmentation strategies in terms of how they affect the outcome of some computational task. This section presents comparisons of the results of the algorithm against human judgments and against article boundaries. It is possible to compare against author-specified markup, but unfortunately, as discussed above, authors usually do not specify the kind of subtopic information desired. As mentioned above, Hearst and Hearst and Plaunt show how to use TextTiles in information retrieval tasks, although this work does not show whether or not the results of these algorithms produce better performance than the results of some other segmentation strategy would.

There is growing concern surrounding issues of intercoder reliability when using human judgments to evaluate discourse-processing algorithms. Proposals have recently been made for protocols for the collection of human discourse segmentation data and for how to evaluate the validity of judgments so obtained; recently, Hirschberg and Nakatani have reported promising results for obtaining higher interjudge agreement using their collection protocols. For the evaluation of the TextTiling algorithms, judgments were obtained from seven readers for each of 12 magazine articles that satisfied the length criteria and that contained little structural demarcation. The judges were asked simply to mark the paragraph boundaries at which the topic changed; they were not given more explicit instructions about the granularity of the segmentation.

Figure 5 shows the boundaries marked by the seven judges on the Stargazers text. [Figure 5: judgments of seven readers on the Stargazer text. Internal numbers indicate the locations of gaps between paragraphs; the x-axis indicates token-sequence gap number and the y-axis indicates judge number; a break in a horizontal line indicates a judge-specified segment break.] This format helps illustrate the general trends in the judges' assessments and also helps show where and how often they disagree. For instance, all but one judge marked a boundary between paragraphs 2 and 3; the dissenting judge did mark a boundary after 3, as did two of the concurring judges. The next major boundaries occur after paragraphs 5, 9, 12, and 13. There is some contention in the later paragraphs: three readers marked both 16 and 18, two marked 18 alone, and two marked 17 alone. The outline in the introduction gives an idea of what each segment is about.

Passonneau and Litman discuss at length considerations about evaluating segmentation algorithms according to reader judgment information. As Figure 5 shows, agreement among judges is imperfect, but trends can be discerned. In the data of Passonneau and Litman, if four or more out of seven judges mark a boundary, the segmentation is found to be significant using a variation of the Q-test; however, in later work, three out of seven judges marking a boundary was considered sufficient to classify that point as a "major" boundary. Carletta and Rosé point out the importance of taking into account the expected chance agreement among judges when computing whether or not judges agree significantly, and they suggest using the kappa coefficient for this purpose.
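Before computing agreement statistics, the seven judges' paragraph marks have to be reduced to a single reference segmentation. A small sketch of that step, using the three-of-seven majority convention mentioned above; the threshold is a parameter, and the example data are invented:

```python
def majority_boundaries(judge_marks, num_gaps, min_votes=3):
    """judge_marks: one set per judge of the paragraph-gap indices that judge
    marked as a topic shift. Returns the gaps marked by >= min_votes judges."""
    votes = [0] * num_gaps
    for marks in judge_marks:
        for g in marks:
            votes[g] += 1
    return {g for g, v in enumerate(votes) if v >= min_votes}

# Invented example: seven judges, 20 paragraph gaps.
judges = [
    {2, 5, 9, 12, 16}, {2, 5, 9, 13, 18}, {3, 5, 9, 12, 17},
    {2, 5, 9, 12, 18}, {2, 5, 8, 13, 16}, {2, 6, 9, 12, 17},
    {2, 5, 9, 12, 16, 18},
]
print(sorted(majority_boundaries(judges, num_gaps=20)))
```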
According to Carletta, the kappa coefficient measures pairwise agreement among a set of coders making category judgments, correcting for expected chance agreement:

k = ( P(A) - P(E) ) / ( 1 - P(E) ),

where P(A) is the proportion of times that the coders agree and P(E) is the proportion of times that they would be expected to agree by chance. The coefficient can be computed by making pairwise comparisons against an expert or by comparing to a group decision. Carletta also states that in the behavioral sciences k above .8 signals good replicability, while k between .67 and .8 allows tentative conclusions to be drawn. The kappa coefficients found in Isard and Carletta ranged from .43 to .68 for four coders placing transaction boundaries, and those found in another study ranged from .65 to .90 for four coders segmenting sentences. Carletta cautions, however, that "coding discourse and dialogue phenomena, and especially coding segment boundaries, may be inherently more difficult than many previous types of content analysis," and so implies that the levels of agreement needed to indicate good reliability for TextTiling may be justified in being lower.

For my test texts, the judges placed boundaries on average 39.1% of the time and non-boundaries 60.9% of the time; thus the expected chance agreement P(E) is (.391)² + (.609)² = .524. To compute kappa, each judge's decisions were compared to the group decision, where a paragraph gap was considered a "true" boundary if at least three out of seven judges placed a boundary mark there, as in Litman and Passonneau; the remaining gaps are considered non-boundaries. The average kappa for these texts was .647. This score is at the low end of the stated acceptability range, but it is comparable with other interreliability results found in discourse segmentation experiments.
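The kappa computation itself is straightforward once each judge's marks and the group (majority) decision are expressed as aligned boundary / non-boundary sequences. In the sketch below, P(E) is estimated from the pooled proportion of boundary decisions, in the spirit of the .391 and .609 figures quoted above; the article does not spell out the exact estimator, so treat that choice, and the toy data, as assumptions.

```python
def kappa(coder, reference):
    """Kappa for two aligned sequences of binary boundary decisions
    (1 = boundary, 0 = no boundary).

    P(A) is the observed proportion of agreements; P(E) is estimated from the
    pooled proportion of boundary decisions: P(E) = p_yes**2 + p_no**2.
    """
    assert len(coder) == len(reference)
    n = len(coder)
    p_agree = sum(c == r for c, r in zip(coder, reference)) / n
    p_yes = (sum(coder) + sum(reference)) / (2 * n)
    p_no = 1.0 - p_yes
    p_chance = p_yes ** 2 + p_no ** 2
    return (p_agree - p_chance) / (1.0 - p_chance)

# Invented example: one judge versus the majority decision over 20 gaps.
majority = [1 if g in {2, 5, 9, 12, 16, 18} else 0 for g in range(20)]
judge_1 = [1 if g in {2, 5, 9, 13, 18} else 0 for g in range(20)]
print(round(kappa(judge_1, majority), 3))
```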
An unfortunate aspect of the algorithm in its current form is that it requires the setting of several interdependent parameters, the most important of which are the size of the text unit that is compared (the block size) and the number of words in a token-sequence. The width and number of rounds of smoothing must also be chosen; usually only modest amounts of smoothing can be allowed, since more dramatic smoothing tends to obscure the point at which the subtopic transition takes place. Finally, the method for determining how many boundaries to assign must be specified. The three are interrelated; for example, using a larger text window requires less smoothing, and fewer boundaries will be found, yielding a coarser-grained segmentation. Initial testing was done on the evaluated texts with several different sets of parameter settings, and a default configuration that seems to cover many different text types was chosen. The defaults set w = 20, k = 10, n = 1, and s = 2 for token-sequence size, block size, number of rounds of smoothing, and smoothing width, respectively. The evaluation presented here shows the results for different setting types, to give a feeling for the space of results; because the evaluation collection is very small, these results can be seen only as suggestive, and different settings may work better in different situations.

Figure 6 shows a plot of the results of applying the block comparison algorithm to the Stargazer text with k set to 10. [Figure 6: results of the block similarity algorithm on the Stargazer text with k set to 10 and the loose boundary cutoff; both the smoothed and unsmoothed plots are shown. Internal numbers indicate paragraph numbers; the x-axis indicates token-sequence gap number and the y-axis indicates the similarity between blocks centered at the corresponding token-sequence gap; vertical lines indicate boundaries chosen by the algorithm (for example, the leftmost vertical line represents a boundary after paragraph 3); note how these align with the boundary gaps of Figure 5.] When the lowermost portion of a valley is not located at a paragraph gap, the judgment is moved to the nearest paragraph gap. For the most part, the regions of strong similarity correspond to the regions of strong agreement among the readers. Note, however, that the similarity information around paragraph 12 is weak. This paragraph briefly summarizes the contents of the previous three paragraphs; much of the terminology that occurred in all of them reappears in this one location, and thus it displays low similarity both to itself and to its neighbors. This is an example of a breakdown caused by the assumptions about the subtopic structure.

Because of the depth score cutoff, not all valleys are chosen as boundaries. Although there is a dip around paragraph gaps 5 and 6, no boundary is marked there. From the summary of the text's contents in Section 1, we know that paragraphs 4 and 5 discuss the moon's chemical composition, while 6 to 8 discuss how it got its shape; these two subtopic discussions are more similar to one another in content than they are to the subtopics on either side of them, thus accounting for the small change in similarity. Five out of seven readers indicated a break between paragraphs 18 and 19; the algorithm registers a slight but not significant valley at this point. Upon inspection, it turns out that paragraph 19 really is a continuation of the discussion in 18, answering a question that is posed at the end of 18; however, paragraph 19 begins with an introductory phrase type that strongly signals a change in subtopic ("for the last two centuries astronomers have studied ..."). The final paragraph is a summary of the entire text; the algorithm recognizes the change in terminology from the preceding paragraphs and marks a boundary, but only two of the readers chose to differentiate the summary, so the algorithm is judged to have made an error even though this sectioning decision is reasonable. This illustrates the inherent fallibility of testing against reader judgments, although in part this is because the judges were given loose constraints.

To assess the results of the algorithm quantitatively, I follow the advice of Gale, Church, and Yarowsky and compare the algorithm against both upper and lower bounds. The upper bound in this case is the reader judgment data. The lower bound is a baseline algorithm: a simple, reasonable approach to the problem that can be automated. A simple way to segment the texts is to place boundaries randomly in the document, constraining the number of boundaries to equal the average number of paragraph gaps per document assigned as boundaries by the judges. In the test data, boundaries are placed in about 39% of the paragraph gaps; a program was written that places a boundary at each potential gap 39% of the time, it was run 10,000 times for each text, and the average of the scores of these runs was taken. These scores appear in Table 1. The algorithms are evaluated according to the proportion of "true" (majority) boundaries they select out of the total number selected (precision) and the proportion of "true" boundaries found out of the total possible (recall); precision thus also reflects the number of extraneous boundaries, and recall the number of missed boundaries.
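The precision and recall comparison against the majority boundaries, together with the 39% random baseline averaged over 10,000 runs, can be sketched as follows (exact-match scoring; hypothetical function names):

```python
import random

def precision_recall(proposed, true):
    """proposed, true: sets of gap indices. Exact-match scoring."""
    proposed, true = set(proposed), set(true)
    hits = len(proposed & true)
    precision = hits / len(proposed) if proposed else 0.0
    recall = hits / len(true) if true else 0.0
    return precision, recall

def random_baseline(num_gaps, true, p=0.39, runs=10000, seed=0):
    """Average precision/recall of marking each gap as a boundary with
    probability p, over the given number of runs."""
    rng = random.Random(seed)
    total_p = total_r = 0.0
    for _ in range(runs):
        guess = {g for g in range(num_gaps) if rng.random() < p}
        pr, rc = precision_recall(guess, true)
        total_p += pr
        total_r += rc
    return total_p / runs, total_r / runs
```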
introduction algorithm fall between the upper and lower boundsthe results are shown for making both a liberal and a conservative number of boundary assignments as is to be expected when more boundaries can be assigned recall becomes higher at the expense of precision and conversely when boundary assignment is conservative better precision is obtained at the expense of recallthis table also shows the average k scores for the agreement between the algorithm and the judgesthe scores for the blocks version of the algorithm are stronger than those for the vocabulary introduction versiontable 2 shows results in more detail varying some of the parameter settingsto allow for a more direct comparison the precision for each version of the algorithm is shown at the recall level obtained by the judges on averagethis is computed as follows for each version of the algorithm the depth scores are examined in order of their strengthfor each depth score if it corresponds to a true boundary the count of correct boundaries is incremented otherwise the count of incorrect boundaries is incrementedprecision and recall are computed after each correct boundary encounteredwhen the recall equals that of the judges average recall the corresponding precision of the algorithm is returnedif the recall level exceeds that of the judges then the value of the precision is estimated as a linear interpolation between the two precision scores whose recall scores most closely surround that of the judges average recallin some cases the algorithm does not produce a recall level as high as that found by the judges since paragraphs with a nonpositive depth score are not eligible for boundary assignment and these cases are marked with a dashnote that this evaluation does away with the need for lc and hc cutoff levelsfrom table 2 we can see that varying the parameter settings improves the scores for some texts while detracting from otherswe can also see that the blocks algorithm for lexical score determination produces stronger results in most cases than the vocabulary introduction method although the latter seems to do better on the cases where the blocks algorithm finds few boundaries in almost all cases the algorithms are not as accurate as the judges but the scores for the blocks version of the algorithm are very strong in many casesin looking at the results in more detail one might wonder why the algorithm performs better on some texts than on otherstext 7 for example scores especially poorlythis may be caused by the fact that this text has a markedly different style from the othersit is a chatty article and consists of a series of anecdotes about particular individualsthe article is interspersed throughout with spoken quotations and these tend to throw the algorithm off because spoken statements usually contain different vocabulary than the surrounding prosethis phenomenon occurs in some of the other texts as well but to a much lesser extentit suggests a need for recognizing and accommodating very short digressions more effectivelyanother interesting property of this text is that most of the subtopic switches occur when switching from one anecdote to another and by inspection it appears that the best cues for these switches are pronouns that appear on the stop list and are discarded however in most cases use of the stop list improves resultsit should also be noted that the texts used in this study were not chosen to have welldefined boundaries and so pose a difficult test for the algorithmperhaps some tests against texts with 
more obvious subtopic boundaries would be illuminatingone way to evaluate the algorithm is in terms of how well it distinguishes entire articles from one another when they are concatenated into one filenomoto and nitta implement the tfidf version of texttiling from hearst and hearst and plaunt and evaluate it this way on japanese newswire text13 also as discussed in section 4 reynar uses this form of evaluation on a greedy version of the blocks algorithmthis task violates a major assumption of the texttiling algorithmtexttiling assumes that the similarity comparisons are done within the vocabulary patterns of one text and so a relatively large shift in vocabulary indicates a change in subtopicbecause this evaluation method assumes that article boundary changes are more important than subtopic boundary changes it penalizes the algorithm for marking very strong subtopic changes that occur within a very cohesive document before relatively weaker changes in vocabulary between similar articlesfor example for hypothetical articles d1 d2 and d3 assume d1 has very strong internal coherence indicators d2 has relatively weak ones and d3 is in the midrangethe interidr subtopic transition scores for d1 can swamp out the score for the transition between d2 and d3nevertheless because others have used this evaluation method one such evaluation is shown here as wellthe evaluation set consisted of 44 articles from the wall street journal from 1989consecutive articles were used except any article fewer than 10 sentences was removedthe data consisted of 691 paragraphs most of which contained between 1 and 3 sentences some of which were very short eg article bylines the text was not quotcleanquot several articles consisted of a sequence of stories several had tabular data and one article was just a listing of interest ratesthe blocks version of texttiling was run over this data using the default parameter settingsthe depth scores were sorted and the number of assignments to article boundaries that were within three sentences of the correct location were recorded at several cutoff levels and are shown in table 3b corresponds to the number of bound13 instead of using fixedsized blocks nomoto and nitta take advantage of the fact that japanese provides discourse markers indicating multisentence units that participate in a topiccomment relationship and find these motivated units can work slightly better aries assigned in sorted order c corresponds to the number of correctly placed boundaries p the precision r the recall and the asterisk shows the precisionrecall breakeven pointthe higherscoring boundaries are almost always exact hits but those farther down are more likely to be off by one to three sentencesonly one transition is missed entirely and it occurs after a sequence of five isolated sentences and a byline the highscoring boundaries that do not correspond to shifts between articles almost always correspond to strong subtopic shiftsone exception occurs in the article consisting only of interest rate listingsanother occurs in an article associating numerical information with namesoverall the scores are much stronger than those reported in reynar and are comparable to those of nomoto and nitta whose best precisionrecall tradeoff on a collection of approximately 80 articles is approximately 50 precision and 81 recallhowever all three studies are done on different test collections and so comparisons are at best suggestivethis article has described an algorithm that uses changes in patterns of lexical repetition 
as the cue for the segmentation of expository texts into multiparagraph subtopic structureit has also advocated the investigation and use of the multiparagraph discourse unit something that had not been explored in the computational literature until this work was introducedthe algorithms described here are fully implemented and use term repetition alone without requiring thesaural relations knowledge bases or inference mechanismsevaluation reveals acceptable performance when compared against human judgments of segmentation although there is room for improvementtexttiles have already been integrated into a user interface in an information retrieval system and have been used successfully for segmenting arabic newspaper texts which have no paragraph breaks for information retrieval with the increase in importance of multimedia information especially in the context of digital library projects the need for segmentation and summarization of alternative media types is becoming increasingly importantfor example the algorithms described here should prove useful for topicbased segmentation of video transcripts in a line of work we call mixedmedia access textual subtopic structure is being integrated with other media types such as images and speechtexttiling has been used in innovative ways by other researcherskarlgren in a study of the effects of stylistic variation in texts on information retrieval results uses texttiling as one of several ways of characterizing newspaper textsoverall he finds that relevant documents tend to be more complex than nonrelevant ones in terms of length sentence structure and other metricswhen examining documents of all lengths he finds that relevant documents tend to have more texttiles than nonrelevant ones as another example of an innovative application van der eijk suggests using texttiles to align parallel multilingual text corpora according to the overlap in their subtopic structure for english german and french textthis work along with that of nomoto and nitta on japanese and hasnah on arabic also provides evidence that texttiling is applicable to a wide range of natural languagesthere are several ways that the algorithms could be modified to attempt to improve the resultsone way is to use thesaural relations in addition to term repetition to make better estimates about the cohesiveness of the discussionearlier work incorporated thesaural information into the algorithms but later experiments found that this information degrades the performancethis could very well be due to problems with the thesaurus and assignment algorithm useda simpler algorithm that just posits relations among terms that are a small distance apart according to wordnet modeled after morris and hirst heuristics might work bettertherefore the issue should not be considered closed but rather as an area for future exploration with this work as a baseline for comparisonthe approach to similarity comparison suggested by kozima while very expensive to compute might also prove able to improve resultsother ways of computing semantic similarity such as those of schiltze or resnik may also prove usefulas a related point experimentation should be done with variations in tokenization strategies and it may be especially interesting to incorporate phrase or bigram information into the similarity computationthe methods for computing lexical score also have the potential to be improvedsome possibilities are weighting terms according to their prior probabilities weighting terms according to the distance from the 
location under scrutiny according to a gaussian distribution or treating the plot as a probabilistic time series and detecting the boundaries based on the likelihood of a transition from nontopic to topicanother alternative is to devise a good normalization strategy that would allow for meaningful comparisons of quotrealquot paragraphs rather than regularsized windows of textthe question arises as to how to extend the algorithm to capture hierarchical structureone solution is to use the coarse subtopic structure to guide the more finegrained methodsanother is to make several passes through the text using the results of one round as the input in terms of which blocks of text are compared in the next roundfinally it may prove fruitful to use localized discourse cue information or other specialized processing around potential boundary locations to help better determine exactly where segmentation should take placethe use of discourse cues for detection of segment boundaries and other discourse purposes has been extensively researched although predominantly on spoken text it is possible that incorporation of such information may improve the cases where the algorithm is off by one paragraphthis article was enormously improved as a result of the careful comments of four anonymous reviewers the editors of this
J97-1003
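To make the boundary-selection step discussed in the evaluation above concrete, the following is a minimal sketch of depth-score computation over a smoothed sequence of gap similarity scores. It assumes the usual TextTiling notion of depth (how far a valley drops relative to the nearest peaks on either side); the function names and the toy score sequence are mine, and the extra steps described above (the depth-score cutoff and moving a boundary to the nearest paragraph gap) are omitted.

```python
def depth_scores(gap_scores):
    """Depth of each valley in a (smoothed) sequence of gap similarity scores:
    how far the score drops relative to the nearest higher peaks on each side."""
    depths = []
    for i, s in enumerate(gap_scores):
        left = s
        for x in gap_scores[i::-1]:      # climb left while scores keep rising
            if x >= left:
                left = x
            else:
                break
        right = s
        for x in gap_scores[i:]:         # climb right while scores keep rising
            if x >= right:
                right = x
            else:
                break
        depths.append((left - s) + (right - s))
    return depths


def pick_boundaries(gap_scores, num_boundaries):
    """Take the strongest depth scores as subtopic boundaries (a depth cutoff,
    or snapping to the nearest paragraph gap, would be added in a full version)."""
    d = depth_scores(gap_scores)
    ranked = sorted(range(len(d)), key=lambda i: d[i], reverse=True)
    return sorted(ranked[:num_boundaries])


# Toy scores: a deep valley around gaps 3-4 and a weaker one at gap 7.
sims = [0.5, 0.45, 0.3, 0.1, 0.2, 0.4, 0.42, 0.25, 0.35, 0.5]
print(pick_boundaries(sims, num_boundaries=2))   # -> [3, 7]
```

Examining candidate boundaries in order of depth strength in this way is also what the precision-at-the-judges'-average-recall evaluation described above relies on.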
TextTiling: segmenting text into multi-paragraph subtopic passages. TextTiling is a technique for subdividing texts into multi-paragraph units that represent passages or subtopics. The discourse cues for identifying major subtopic shifts are patterns of lexical co-occurrence and distribution. The algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts. Multi-paragraph subtopic segmentation should be useful for many text analysis tasks, including information retrieval and summarization. We compute chance agreement in terms of the probability that coders would say that a segment boundary exists and the probability that they would not.
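The chance-agreement correction behind the kappa scores reported above can be made concrete with a small sketch. It assumes the standard formula k = (P(A) - P(E)) / (1 - P(E)); the boundary and non-boundary proportions (.391 and .609) come from the evaluation described above, while the observed-agreement value used in the last line is a hypothetical placeholder.

```python
def kappa(p_agree, class_proportions):
    """Kappa: observed agreement corrected for the agreement expected by
    chance, where P(E) is the sum of squared class proportions."""
    p_e = sum(p * p for p in class_proportions)
    return (p_agree - p_e) / (1.0 - p_e)


# Judges placed boundaries 39.1% of the time and non-boundaries 60.9%,
# so P(E) = .391**2 + .609**2, i.e. about .524 as stated above.
print(round(0.391**2 + 0.609**2, 3))              # 0.524
# With a hypothetical observed agreement of 0.83, kappa comes out around 0.64,
# in the range of the average reported above.
print(round(kappa(0.83, [0.391, 0.609]), 2))      # 0.64
```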
discourse segmentation by human and automated means the need to model the relation between discourse structure and linguistic features of utterances is almost universally acknowledged in the literature on discourse however there is only weak consensus on what the units of discourse structure are or the criteria for recognizing and generating them we present quantitative results of a twopart study using a corpus of spontaneous narrative monologues the first part of our paper presents a method for empirically validating multiutterance units referred to as discourse segments we report highly significant results of segmentations performed by naive subjects where a commonsense notion of speaker intention is the segmentation criterion in the second part of our study data abstracted from the subjects segmentations serve as a target for evaluating two sets of algorithms that use utterance features to perform segmentation on the first algorithm set we evaluate and compare the correlation of discourse segmentation with three types of linguistic cues we then develop a second set using two methods error analysis and machine learning testing the new algorithms on a new data set shows that when multiple sources of linguistic knowledge are used concurrently algorithm performance improves the need to model the relation between discourse structure and linguistic features of utterances is almost universally acknowledged in the literature on discoursehowever there is only weak consensus on what the units of discourse structure are or the criteria for recognizing and generating themwe present quantitative results of a twopart study using a corpus of spontaneous narrative monologuesthe first part of our paper presents a method for empirically validating multiutterance units referred to as discourse segmentswe report highly significant results of segmentations performed by naive subjects where a commonsense notion of speaker intention is the segmentation criterionin the second part of our study data abstracted from the subjects segmentations serve as a target for evaluating two sets of algorithms that use utterance features to perform segmentationon the first algorithm set we evaluate and compare the correlation of discourse segmentation with three types of linguistic cues we then develop a second set using two methods error analysis and machine learningtesting the new algorithms on a new data set shows that when multiple sources of linguistic knowledge are used concurrently algorithm performance improveseach utterance of a discourse contributes to the communicative import of preceding utterances or constitutes the onset of a new unit of meaning or action that subsequent utterances may add tothe need to model the relation between the structure of such units and linguistic features of utterances is almost universally acknowledged in the literature on discoursehowever natural language systems rarely exploit the relation between discourse segment structure and linguistic devices because there is very little data about how they constrain one anotherwe have been engaged in a twopart study addressing this gapwe report on a method for empirically validating discourse segments and on our development and evaluation of algorithms to identify these segments from linguistic features of discoursewe show that human subjects can reliably perform discourse segmentation using speaker intention as a criterionwe also show that when multiple sources of linguistic knowledge are used algorithm performance approaches human 
performancethe excerpt in figure 1 illustrates the two aspects of discourse that our study addressesthe first pertains to an abstract structure consisting of meaningful discourse segments and their interrelationsthe utterances in segments x and z of figthere are three little boys up on the road a little bit and they see this little accidentand youh they come over and they help and you know segment y help him pick up the pears and everythingsegment z and the one thing that struck me about the three little boys that were there is that one had ay uh i do not know what you call them but it is a paddle and a ball is attached to the paddle and you know you bounce itand that sound was really prominentwell anyway so youm tsk all the pears are picked up and hel s on his way again ure 1which describe how three boys come to the aid of another boy who fell off of a bike are more closely related to one another than to those in the intervening segment ywhich describe the paddleball toy owned by one of the three boysthe second discourse feature of interest is that the usage of a wide range of lexicogrammatical devices seems to constrain or be constrained by this more abstract structureconsider the interpretation of the referent of the boxed pronoun he in segment zthe referent of the underlined noun phrase one in segment y is the most recently mentioned male referent without the segmentation the reasoning required to reject it in favor of the intended referent of he is quite complexhowever segment z begins with certain features that indicate a resumption of the speaker goals associated with segment x such as the use of the phrase well anyway and the repeated mention of the event of picking up the pearsin terms of the segmentation shown here the referents introduced in segment x are more relevant for interpreting the pronoun in segment znote also that cue words explicitly mark the boundaries of all three segmentsour work is motivated by the hypothesis that natural language technologies can more sensibly interpret discourse and can generate more comprehensible discourse if they take advantage of this interplay between segmentation and linguistic devicesin section 2 we give a brief overview of related workin section 3 we present our analysis of segmentation data collected from a population of naive subjectsour results demonstrate an extremely significant pattern of agreement on segment boundariesin section 4 we use boundaries abstracted from the data produced by our subjects to quantitatively evaluate algorithms for segmenting discoursein section 41 we discuss the coding and evaluation methodsin section 42 we test an initial set of algorithms for computing segment boundaries from a particular type of linguistic feature either referential noun phrases cue phrases or pausesin section 431 we analyze the errors of our initial algorithms in order to identify a set of enriched input features and to determine how to combine information from the three linguistic knowledge sourcesin section 432 we use machine learning to automatically construct segmentation algorithms from large feature setsour results suggest that it is possible to approach human levels of performance given multiple knowledge sourcesin section 5 we discuss the significance of our results and briefly highlight our current directionsthere is much debate about what to define discourse segments in terms of and what kinds of relations to assign among segmentsthe nature of any hypothesized interaction between discourse structure and linguistic devices 
depends both on the model of discourse that is adopted and on the types of linguistic devices that are investigatedhere we briefly review previous work on characterizing discourse segments and on correlating discourse segments with utterance featureswe conclude each review by summarizing the differences between our study and previous worka number of alternative proposals have been presented which relate segments to intentions rhetorical structure theory relations or other semantic relations the linguistic structure of grosz and sidner discourse model consists of multiutterance segments and structural relations among them yielding a discourse tree structurethe hierarchical relations of their linguistic structure are isomorphic with the two other levels of their model intentional structure and attentional staterhetorical relations do not play a role in their modelin hobbs and polanyi segmental structure is an artifact of coherence relations among utterances such as elaboration evaluation because and so ontheir coherence relations are similar to those posited in rst which informs much work in generationpolanyi distinguishes among four types of discourse constituent units based on different types of structural relations as in grosz and sidner model polanyi proposes that dcus are structured as a tree and in both models the tree structure of discourse constrains how the discourse evolves and how referring expressions are processedrecent work has argued that to account for explanation dialogues it is necessary to independently model both rst relations and intentionsresearchers have begun to investigate the ability of humans to agree with one another on segmentation and to propose methodologies for quantifying their findingsthe types of discourse units being coded and the relations among them varyseveral studies have used trained coders to locally and globally structure spontaneous or read speech using the model of grosz and sidner including grosz and hirschberg 1992 nakatani hirschberg and grosz 1995 stifleman 1995 hirschberg and nakatani 1996in grosz and hirschberg percent agreement among 7 coders on 3 texts under two conditionstext plus speech or text aloneis reported at levels ranging from 743 to 951in hirschberg and nakatani average reliability of segmentinitial labels among 3 coders on 9 monologues produced by the same speaker labeled using text and speech is 8 or above for both read and spontaneous speech values of at least 8 are typically viewed as representing high reliability reliability labeling from text alone is 56 for read and 63 for spontaneous speechother notions of segment have also been used in evaluating naive or trained codershearst asked naive subjects to place boundaries between paragraphs of running text to indicate topic changeshearst reports agreement of greater than 80 and indicates that significance results were found that were similar to those reported in passonneau and litman flammia and zue asked subjects to segment textual transcriptions of telephone taskoriented dialogues using minimal segmentation instructions based on a notion of topic 18 dialogues were segmented by 5 coders with an average pairwise kappa coefficient of 45to evaluate hierarchical aspects of segmentation flammia and zue also developed a new measure derived from the kappa coefficientswerts asked 38 subjects to mark quotparagraph boundariesquot in transcriptions of 12 spontaneous spoken monologues half of the subjects segmented from text alone and half from text plus speechhowever no quantitative 
evaluation of the results were reportedswerts and ostendorf also empirically derived discourse structure using a spoken corpus of database query interactionsalthough the labelers had high levels of agreement the segmentations were fairly trivialisard and carletta presented 4 naive subjects and 1 expert coder with transcripts of taskoriented dialogues from the hcrc map task corpus utterancelike units referred to as moves were identified in the transcripts and subjects were asked to identify transaction boundariessince reliability was lower than the 80 threshold they concluded that their coding scheme and instructions required improvementmoser and moore investigated the reliability of various features defined in relational discourse analysis based in part on rsttheir corpus consisted of written interactions between tutors and students using 3 different tutorstwo coders were asked to identify segments the core utterance of each segment and certain intentional and informational relations between the core and the other contributor utterancesas reported in their talk reliability on segment structure and core identification was well over the 80 thresholdreliability on intentional and informational relations was around 75 high enough to support tentative conclusionsfinally a method for segmenting dialogues based on a notion of control was used in whittaker and stenton and walker and whittaker utterances were classified into four types each of which was associated with a rule that assigned a controller the discourse was then divided into segments based on which speaker had controlneither study presented any quantitative analysis of the ability to reliably perform the initial utterance classificationhowever in whittaker and stenton a higher level of discourse structure based on topic shifts was agreed upon by at least 4 of 5 judges for 46 of the 56 control shiftsin sum relatively few quantitative empirical studies have been made of how to annotate discourse corpora with features of discourse structure and those recent ones that exist use various models such as the grosz and sidner model an informal notion of topic transactions relational discourse analysis or control the modalities of the corpora investigated include dialogic or monologic written spontaneous or read and the genres also varyquantitative evaluations of subjects annotations using notions of agreement interrater reliability andor significance show that good results can be difficult to achieveas discussed in section 3 our initial aim was to explore basic issues about segmentation thus we used naive subjects on a highly unstructured taskour corpus consists of transcripts of spontaneous spoken monologues produced by 20 different speakerswe use an informal notion of communicative intention as the segmentation criterion motivated by grosz and sidner and polanyi who argue that defining a segment as having a coherent goal is more general than establishing a repertoire of specific types of segment goalswe do not however ask coders to identify hierarchical relations among segmentsthe hypothesis that discourse has a tree struchire has frequently been questioned and the magnitude of our segmentation task precludes asking subjects to specify hierarchical relationsfinally we quantify our results using a significance test a reliability measure and for purposes of comparison with other work percent agreementthe segmental structure of discourse has been claimed to constrain and be constrained by disparate phenomena eg cue phrases plans and intentions 
prosody nominal reference and tense however just as with the early proposals regarding segmentation many of these proposals are based on fairly informal studiesit is only recently that attempts have been made to quantitatively evaluate how utterance features correlate with independently justified segmentationsmany of the studies discussed in the preceding subsection take this approachthe types of linguistic features investigated include prosody term repetition cue words and discourse anaphora grosz and hirschberg investigate the prosodic structuring of discoursethe correlation of various prosodic features with their independently obtained consensus codings of segmental structure is analyzed using ttests the results support the hypothesis that discourse structure is marked intonationally in read speechfor example pauses tended to precede phrases that initiated segments and to follow phrases that ended segmentssimilar results are reported in nakatani hirschberg and grosz and hirschberg and nakatani for spontaneous speech as wellgrosz and hirschberg also use the classification and regression tree system cart to automatically construct and evaluate decision trees for classifying aspects of discourse structure from intonational feature valuesthe studies of swerts and swerts and ostendorf also investigate the prosodic structuring of discoursein swerts paragraph boundaries are empirically obtained as described abovethe prosodic features pitch range pause duration and number of low boundary tones are claimed to increase continuously with boundary strength however there is no analysis of the statistical significance of these correlationsin swerts and ostendorf prosodic as well as textual features are shown to be correlated with their independently obtained discourse segmentations of travelplanning interactions with statistical significancehearst texttiling algorithm structures expository text into sequential segments based on term repetitionhearst uses information retrieval metrics to evaluate two versions of texttiling against independently derived segmentations produced by at least three of seven human judgesprecision was 66 for the best version compared with 81 for humans recall was 61 compared with 71 for humansthe use of term repetition is not unique to hearst work related studies include morris and hirst youmans kozima and reynar unlike hearst work these studies either use segmentations that are not empirically justified or present only qualitative analyses of the correlation with linguistic devicesafter identifying segments and core and contributor relations within segments moser and moore investigate whether cue words occur where they occur and what word occursin their talk they presented results showing that the occurrence and placement of a discourse usage of a cue word correlates with relative order of core versus contributor utterancesfor example a discourse cue is more likely to occur when the contributor precedes the core utterance 4 or 20 boundaries for a 100phrase narrative data set for example across conditions or across subjectsrecently discourse studies have used reliability metrics designed for evaluating classification tasks to determine whether coders can classify various phenomena in discourse corpora as discussed in section 21the segmentation task reported here is not properly a classification task in that we do not presume that there is a given set of segment boundaries that subjects are likely to identifygiven the freedom of the task and the use of untrained subjects a 
reliability test would be relatively uninformative it can be expected to range from very low to very highin fact sorting the 140 subjects into comparable pairs a reliability metric that ranges between 1 for perfect reliability and 1 for perfect unreliability gives a wide spread of reliability values our method aims at abstracting away from the absolute differences across multiple subjects per narrative to derive a statistically significant set of segment boundariesthus an appropriate test of whether our method is statistically reliable would be to compare two repetitions of the method on the same narratives to see if the results are reproduciblealthough we do not have enough subjects on any single narrative to compare two distinct sets of seven subjects we do have four narratives with data from eight distinct subjectsfor each set of eight subjects we created two randomly selected partitions with four distinct subjects in eachthen we assessed reliability by comparing the boundaries produced by partitions a and b on the four narratives because we only have four subjects within each partition this necessarily produces fewer significant boundaries than our methodin other words this test can only give us a conservative lower bound for reliabilitybut even with this conservative evaluation reliability is fairly good on two narratives and promising on averagea reliability measure indicates how reproducible a data set is by quantifying similarity across subjects in terms of the proportion of times that each response category occursthis differs from a significance test of the null hypothesis where observed data is compared to random distributionwe use krippendorff a to evaluate the reliability of the two data sets from partitions a and bthe general formula for a is 1 where do and de are observed disagreements and expected disagreementscomputation of a is described belowkrippendorff a reports to what degree the observed number of matches could be expected to arise by chanceagain in contrast with cochran q it is simply a ratio rather than a point on a distribution curve with known probabilitiesvalues range from 1 to 1 with 0 representing that there are no more agreements observed in the data than would happen by chancea value of 5 would indicate that the observed number of agreements is halfway between chance and perfect agreementnegative values indicate the degree to which observed disagreements differ from chancein principle a is computed from the same type of matrix shown in table 1 krippendorff a comparing boundaries derived from two sets of 4 subjects on 4 narrativesboundary threshhold narrative 2 4 7 15 average 3 50 60 73 50 58 and can be applied to multivalued variables that are quantitative or qualitativehere we summarize computation of a simplified formula for a used for comparing two data sets with a single dichotomous variableto exemplify the computation we use the first two rows of table 1 giving a matrix of size i 2 x j 11the value of do is then simply all where m is the total number of mismatches in our example do has a value of 121 where n1 is the total number of l and no is the total number of 11x21x4 the detailed formula for a then simplifies to this gives a 42 meaning that the observed case of one agreement out of two potential agreements on boundaries in our example is not quite halfway between chance and perfect agreementconsider a case where two subjects had 12 responses each each subject responded with 1 half the time and wherever one put a 1 the other did not the data contains 
the maximum number of disagreements yet a 092 or somewhat less than 1 meaning that a small proportion of the observed disagreement would have arisen by chancetable 2 presents the reliability results from a comparison of boundaries found by two distinct partitions of subjects responses on four narrativesan a of 80 using two partitions of seven subjects would represent very good reproducibility with values above 67 being somewhat good note that reliability on narrative 7 is good despite the small number of subjectssince as noted above we would expect reliability to be much higher if there were seven subjects we believe that values above 5 for n 4 subjects indicate reproducibilityon average a 58 and the spread is low 324 percent agreementboth significance and reliability can stand alone as evaluation metrics unlike percent agreementhowever we also report percent agreement in order to compare results with other studiesas defined in gale church and yarowsky percent agreement is the ratio of observed agreements with the majority opinion to possible agreements with the majority opinionas detailed in passonneau and litman the average percent agreement for our subjects on all 20 narratives is 89 on average percent agreement is highest on nonboundaries and lowest on boundaries reflecting the fact that nonboundaries greatly outnumber boundariesthese figures compare with other studies 325 discussionwe have shown that an atheoretical notion of speaker intention is understood sufficiently uniformly by naive subjects to yield highly significant agreement across subjects on segment boundaries in a corpus of spoken narrativesprobabilities of the observed distributions range from 6 x 109 to 1 x 106 as given by cochran qthe result is all the more striking given that we used naive coders on a loosely defined tasksubjects were free to assign any number of boundaries and to label their segments with anything they judged to be the narrator communicative intentionpartitioning cochran q shows that the proportion of boundaries identified by at least three subjects was significant across all 20 narratives and those derived using a less conservative level of 02 431 error analysisto improve performance we analyzed the two types of ir errors defined in figure 8 above made by the original np algorithm on the training data type quotbquot errors misclassification of nonboundaries were reduced by redefining the coding features pertaining to clauses and npsmost quotbquot shows that pauses preceding boundaries have average longer durationsfor tj 3 the average pause duration is 64 before boundaries and 39 before noriboundaries for t1 4 the average durations are 72 and 39 respectivelyas will be seen in section 432 this correlation does not translate into any highperforming algorithm based primarily on pause durationinferential link due to implicit argument errors correlated with one of two kinds of the information used in the np algorithm identification of clauses and of inferential linksthe redefinition of ficu motivated by error analysis led to fewer clausesfor example ficu assignment depends in part on filtering out clausal interjections utterances that have the syntactic form of clauses but that function as interjectionsthese include phrases like let us see let me see i do not know when they occur with no overt or implied verb phrase argumentthe extensional definition of clausal interjections was expanded thus certain utterances were no longer classed as ficus under the revised codingother changes to the definition of 
ficus pertained to sentence fragments unexpected clausal arguments and embedded speechbecause the algorithm assigns boundaries between ficus reducing the number of ficus in a narrative can reduce the number of proposed boundarieserror analysis also led to a redefinition of infer and to the inclusion of new types of inferential relations that an np referent might have to prior discoursepreviously infer was a relation between the referent of an np in one utterance and the referent of an np in a previous utterancethis was loosened to include referential links between an np referent and referents mentioned in or inferable from any part of the previous utterancefor example discourse deixis was added to the types of inferential links to code forin the second utterance of the storm is still raging and that is why the plane is grounded the demonstrative pronoun that illustrates an example of discourse deixisexpanding the definition of infer also reduces the number of proposed boundaries recall that the algorithm does not assign a boundary if there is an inferential link between an np in the current utterance unit and the prior utterance unitthree types of inference relations linking successive clauses were added now a pronoun in c referring to an action event or fact inferable from c_i provides an inferential linkso does an implicit argument as in figure 13 where the missing argument of notice is inferred to be the event of the pears fallingthe third case is where an np in c is described as part of an event that results directly from an event mentioned in c misclassification of boundaries often occurred where prosodic and cue features conflicted with np featuresthe original np algorithm assigned boundaries wherever the three values coref infer globalpro cooccurredexperiments led to the hypothesis that the most improvement came by assigning a boundary if the cueprosody feature had the value complex even if the algorithm would not otherwise assign a boundary as shown in figure 14see figure 10 for boundaries assigned by the resulting algorithm table 6 presents the average ir scores across the narratives in the training set for the np and ea algorithmsthe top half of the table reports results for boundaries that at least three subjects agreed upon and the lower half for boundaries using a threshold value of 4 where np duplicates the figures from table 4going by the summed deviations the overall performance is about the same although variation around the mean is lower for t 4the figures illustrate a typical tradeoff between np algorithmthe test results of ea are of course worse than the corresponding training results particularly for precision this confirms that the tuned algorithm is over calibrated to the training setusing summed deviations as a summary metric ea improvement is about 13 of the distance between np and human performancethe standard deviations in tables 6 and 7 are often close to 14 or 13 of the reported averagesthis indicates a large amount of variability in the data reflecting wide differences across narratives in the training set with respect to the distinctions recognized by the algorithmalthough the high standard deviations show that the tuned algorithm is not well fitted to each narrative it is likely that it is over specialized to the training sample in the sense that test narratives are likely to exhibit further variation432 machine learningwhile error analysis is a useful method for refining an existing feature representation it does not facilitate experimentation with large 
sets of multiple features simultaneouslyto address this we turned to machine learning to automatically develop algorithms from large numbers of both training examples and featureswe use the machine learning program c45 to automatically develop segmentation algorithms from our corpus of coded narratives where each potential boundary site has been classified and represented as a set of linguistic featuresthe first input to c45 specifies the names of the classes to be learned and the names and potential values of a fixed set of coding features the second input is the training data ie a set of examples for which the class and feature values are specifiedour training set of 10 narratives provides 1004 examples of potential boundary sitesthe output of c45 is a classification algorithm expressed as a decision tree which predicts the class of a potential boundary given its set of feature valuesbecause machine learning makes it convenient to induce decision trees under various conditions we have performed numerous experiments varying the number of features used the definitions used for classifying a potential boundary site as boundary or nonboundary and the options available for running the c45 programfigure 15 shows one of the highestperforming learned decision trees from our experimentsthis decision tree was learned under the following conditions all of the features shown in figure 6 were used to code the training data boundaries were classified using a threshold of three subjects and c45 was run using only the default optionsthe decision tree predicts the class of a potential boundary site based on the features before after duration cuei wordi coref infer and globalpronote that although not all available features are used in the tree the included features represent three of the four general types of knowledge each level of the tree specifies a test on a single feature with a branch for every possible outcome of 16 the manually derived segmentation algorithm evaluates boundary assignment incrementally ie utterancebyutterance after computing the features for the current utterance this allows relative information about previous boundaries to be used in deriving the globalpro featureby allowing machine learning to use globalpro we are testing whether characterizing the use of referring expressions in terms of relative knowledge about segments is useful for classifying the current boundary sitealthough none of the other features are derived using classification knowledge of any other potential boundary sites note that globalpro does not encode the boundarynonboundary classification of the particular site in questionfurthermore even when machine learning does not use globalpro performance does not sufferlearned decision tree for segmentation the testa branch can either lead to the assignment of a class or to another testfor example the tree initially branches based on the value of the feature beforeif the value is sentencefinalcontour then the first branch is taken and the potential boundary site is assigned the class nonboundaryif the value of before is sentencefinalcontour then the second branch is taken and the feature coref is testedfigure 10 illustrates sample output of this algorithm the performance of this learned decision tree averaged over the 10 training narratives is shown in table 8 on the line labeled quotlearning 1quotthe line labeled quotlearning 2quot shows the results from another machine learning experiment in which one of the default c45 options used in quotlearning 1quot is 
overriddenthe default c45 approach creates a separate subtree for each possible feature value as detailed in quinlan this approach might not be appropriate when there are many values for a feature which is true for features such as wordi and word2in quotlearning 2quot c45 allows feature values to be grouped into one branch of the decision treewhile the quotlearning 2quot tree is more complex than the tree of figure 15 it does have slightly better performancethe quotlearning 2quot decision tree predicts the class of a potential boundary site based on the features before duration cuei wordi word2 coref infer and cueprosodyat t 3 quotlearning 1quot performance is comparable to human performance and quotlearning 2quot is slightly better than humans at t 4 both learning conditions are superior to human performancethe results obtained via machine learning are also better than the results obtained using error analysis primarily due to better precisionin general the machine learning results have slightly greater variation around the averagethe performance of the learned decision trees averaged over the 5 test narratives is shown in table 9comparison of tables 8 and 9 shows that as with the error analysis results average performance is worse when applied to the testing rather than the training data particularly with respect to precisionhowever the best machine learning performance is an improvement over our previous best results for t 3 quotlearning 1quot is comparable to ea while quotlearning 2quot is betterfor t 4 ea is better than quotlearning 1quot but quotlearning 2quot is better stillhowever as with the training data ea has somewhat less variation around the averagewe also use the resampling method of crossvalidation to estimate performance which averages results over multiple partitions of a sample into test versus training datawe performed 10 runs of the learning program each using 9 of the 10 training narratives for that run training set and the remaining narrative for testingnote that for each iteration of the crossvalidation the learning process begins from scratch and thus each training and testing set are still disjointwhile this method does not make sense for humans computers can truly ignore previous iterationsfor sample sizes in the hundreds 10fold crossvalidation often provides a better performance estimate than the holdout method results using crossvalidation are shown in table 10 and are better than the estimates obtained using the holdout method with the major improvement coming from precisionfinally table 11 shows the results from a set of additional machine learning experiments in which more conservative definitions of boundary are usedfor example using a threshold of seven subjects yields the set of consensus boundaries as defined in hirschberg and nakatani comparison with table 9 shows that for t 5 quotlearning 1quot rather than quotlearning 2quot is the better performerhowever the more interesting result is that for t 6 and t 7 the learning approach has an important limitation with respect to the boundary classification taskin particular the way in which c45 minimizes error rate is not an effective strategy when the distribution of the classes is highly skewedfor both t 6 and t 7 extremely few of the 1004 training examples are classified as boundary c45 minimizes the error rate by always predicting nonboundaryfor example for t 6 because only 4 of the training examples are boundaries c45 achieves an error rate of 4 by always predicting nonboundaryhowever this low error rate 
is achieved at the expense of the other metricsusing the terminology of figure 8 since the algorithm never predicts the class boundary it is necessarily the case that a 0 b 0 recall 0 and precision is undefined in addition for t 7 2 of the 5 test sets happen to contain no boundaries for these cases c 0 and thus the value of recall is also sometimes undefinedthe problem of unbalanced data is not unique to the boundary classification taskcurrent work in machine learning is exploring ways to induce patterns relevant to the minority class for example by allowing users to explicitly specify different penalties for false positive and false negative errors other researchers have proposed sampling the majority class examples in a training set in order to produce a more balanced training sample potheses using multiple linguistic featuresthe first method error analysis tunes features and algorithms based on analysis of training errorsthe second method machine learning automatically induces decision trees from coded corporaboth methods rely on an enriched set of input features compared to our previous workwith each method we have achieved marked improvements in performance compared to our previous work and are approaching human performancequantitatively the machine learning versus ea methods differ only on certain metrics and bear a somewhat inverse relation to one another for boundaries defined by t 4 versus t 3table 12 which shows comparisons between ea and the two machine learning conditions indicates which differences are statistically significant by indicating the probability of a paired comparison on each of the 5 test narratives using student t testfor the t 4 boundaries the superior recall of ea compared with conditions 1 and 2 of the automated algorithms is significantconversely the superior fallout of condition 1 and superior error rate of condition 2 are significantfor the t 3 boundaries the differences are not statistically significant for condition 2 but for condition 1 precision and error rate are both superior and the difference as compared with ea is statistically significantthe largest and the most statistically significant difference is the higher precision of the condition 1 automated algorithmqualitatively the algorithms produced by error analysis are more intuitive and easier to understand than those produced by machine learningfurthermore note that the machine learning algorithm used the changes to the coding features that resulted from the error analysisthis suggests that error analysis is a useful method for understanding how to best code the data while machine learning provides a costeffective way to produce an optimally performing algorithm given a good feature representationour initial hypotheses regarding discourse segmentation were that multiutterance segment units reflect discourse coherence and that while the semantic dimensions of this coherence may vary it arises partly from consistency in the speaker communicative goals the results from the first part of our study support these hypotheseson a relatively unconstrained linear segmentation task the number of times different naive subjects identify the same segment boundaries in a given narrative transcript is extremely significantacross the 20 narratives statistical significance arises where at least three or four out of seven subjects agree on the same boundary location depending on an arbitrary choice between probabilities of 02 versus 0001 as the significance thresholdwe conclude that the segment boundaries 
identified by at least three or four of our subjects provide a statistically validated annotation to the narrative corpus corresponding to segments having relatively coherent communicative goalsbefore making concluding remarks on part two of our study we mention a few questions for future work on segmentationwe believe our results confirm the utility of abstracting from the responses of relatively many naive subjects and indicate a strong potential for developing coding protocols using smaller numbers of trained coders the use of an even larger number of naive subjects might yield a finergrained set of segments this is an important dimension of difference between the two sets of segments we use segments identified by a minimum of four subjects are larger and fewer in number than those identified by a minimum of threein addition performance can be improved by taking into account that some segment boundary locations may be relatively fuzzy as we discuss in passonneau and litman finally differences in segmentation may reflect different interpretations of the discourse as we pointed out in passonneau and litman based on observations of our subjects segment descriptionsthe second part of our study concerned the algorithmic identification of segment boundaries based on various combinations of three types of linguistic input referential noun phrases cue phrases and pauseswe first evaluated an initial set of three algorithms each based on a single type of linguistic input and their additive combinationsour results showed that the algorithms performed quite differently from one another on boundaries identified by at least four subjects on a test set of 10 narratives from our corpusin particular the np algorithm outperformed both the cue phrase and pause algorithms while none of the algorithms approached human performance the fact that performance improved with the number of features coded and by combining algorithms in a simple additive way suggested directions for improvementwe applied two training methods error analysis and machine learning to the previous test set of 10 narrativesricher linguistic input and more sophisticated methods of combining linguistic data led to significant improvements in performance when the new algorithms were evaluated on a test set of 5 new narrativesthe bestperforming algorithm resulted from the machine learning experiment in which certain default options were overridden for the t 4 boundary set quotlearning 2quot recall was 53 as good as humans precision was 95 as good fallout was better than humans and error was almost as low as that of humans thus the main need for improvement is in recalla comparison of results on two sets of boundaries those identified by at least three versus those identified by at least four subjects shows roughly comparable performancethe quotlearning 1quot algorithm performs better on the set defined by t 3 error analysis and quotlearning 2quot perform better on the t 4 setwe have not yet determined what causes these differences although in an early paper on our pilot study we reported that there is a strong tendency for recall to increase and precision to decrease as boundary strength increases on the one hand performance was consistently improved by enriching the linguistic inputon the other hand there is wide performance variation around the meandespite this variation as we pointed out in litman and passonneau there are certain narratives that the np ea and both machine learning algorithms perform similarly well or poorly onthese 
observations indicate a need for further research regarding the interaction among variation in speaker style granularity of segmentation and richness of the linguistic inputfinally while our results are quite promising how generally applicable are they and do results such as ours have any practical importas discussed in section 2 the ability both to segment discourse and to correlate segmentation with linguistic devices has been demonstrated in dialogues and monologues using both spoken and written corpora across a wide variety of genres studies such as these suggest that our methodologies andor results have the potential of being applicable to more than spontaneous narrative monologuesas for the utility of our work even though the algorithms in this paper were produced using some features that were manually coded once developed they could be used in reverse to enhance the comprehensibility of text generation systems or the naturalness of texttospeech systems that already attempt to convey discourse structure for example given the algorithm shown in figure 14 a generation system could better convey its discourse boundaries by constructing associated utterances where the values of coref infer and globalpro are as shown in the first line of the figure or for a spoken language system where the value of cueprosody is complexin related work we have tested the hypothesis that the use of a discourse focus structure based on the pear segmentation data improves performance of a generation algorithm thus providing a quantitative measure of the utility of the segmentation data there we present results of an evaluation of an np generation algorithm under various conditionsthe input to the algorithm consisted of semantic information about utterances in a pear narrative such as the referents mentioned in the utteranceoutput was evaluated against what the human narrator actually saidwhen the input to the algorithm included a grouping of discourse referents into focus spaces derived from discourse segments performance improved by 50in addition if our results were fully automated they could also be used to enhance the ability of understanding systems to recognize discourse structure which in turn improves tasks such as information retrieval and plan recognition recent results suggest that many of our manually coded features have the promise of being automatically codedgiven features largely output by a speech recognition system wightman and ostendorf automatically recognize prosodic phrasing with 8586 accuracy this accuracy is only slightly less than humanhuman accuracysimilarly although our spoken corpus was manually transcribed this could have been automated using speech recognition in aone and bennett machine learning is used to automatically derive anaphora resolution algorithms from automatically produced feature representations the learned algorithms outperform a manually derived system finally the results of litman show that there are many alternatives to the cue phrase algorithm used here including some that use feature sets that can be fully coded automaticallythe authors wish to thank j catlett w chafe k church w cohen j dubois b gale v hatzivassiloglou m hearst j hirschberg d lewis k mckeown and e siegel for helpful comments references and resourceswe wholeheartedly thank the anonymous reviewers for their very thorough commentaryboth authors work was partially supported by darpa and onr under contract n00014891782 passonneau was also partly supported by nsf grants iri9113064 and 
IRI-9528998. Passonneau's work was not conducted under Bellcore auspices.
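To make the recall, precision, fallout, and error comparisons reported in the conclusions above concrete, here is a minimal sketch. The set-based representation of boundary sites, the slot count, and the function name are illustrative assumptions of mine, not details taken from the study.

```python
def boundary_metrics(proposed, target, n_slots):
    """Score a proposed set of segment boundaries against a target set.

    proposed, target: sets of candidate boundary positions (0 .. n_slots - 1),
    e.g. the slots between prosodic phrases of one narrative.
    n_slots: total number of candidate boundary sites in the narrative.
    """
    tp = len(proposed & target)        # boundaries found by both
    fp = len(proposed - target)        # spurious boundaries
    fn = len(target - proposed)        # missed boundaries
    tn = n_slots - tp - fp - fn        # sites correctly left unmarked
    recall = tp / len(target) if target else 0.0
    precision = tp / len(proposed) if proposed else 0.0
    fallout = fp / (fp + tn) if (fp + tn) else 0.0
    error = (fp + fn) / n_slots
    return recall, precision, fallout, error


# hypothetical data: boundaries marked by at least four subjects vs. an algorithm
human = {2, 5, 9, 14}
algo = {2, 9, 11}
print(boundary_metrics(algo, human, n_slots=20))
```

Under this formulation, the recall shortfall discussed above corresponds to a large number of missed boundaries (fn) relative to the size of the target set, while fallout and error stay low as long as spurious boundaries are rare.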
J97-1005
Discourse segmentation by human and automated means. The need to model the relation between discourse structure and linguistic features of utterances is almost universally acknowledged in the literature on discourse. However, there is only weak consensus on what the units of discourse structure are, or on the criteria for recognizing and generating them. We present quantitative results of a two-part study using a corpus of spontaneous narrative monologues. The first part of our paper presents a method for empirically validating multi-utterance units referred to as discourse segments. We report highly significant results of segmentations performed by naive subjects, where a commonsense notion of speaker intention is the segmentation criterion. In the second part of our study, data abstracted from the subjects' segmentations serve as a target for evaluating two sets of algorithms that use utterance features to perform segmentation. On the first algorithm set, we evaluate and compare the correlation of discourse segmentation with three types of linguistic cues; we then develop a second set using two methods, error analysis and machine learning. Testing the new algorithms on a new data set shows that when multiple sources of linguistic knowledge are used concurrently, algorithm performance improves. We describe an experiment where seven untrained annotators were asked to find discourse segments in a corpus of transcribed narratives about a movie.
finitestate transducers in language and speech processing finitestate machines have been used in various domains of natural language processing we consider here the use of a type of transducer that supports very efficient programs sequential transducers we recall classical theorems and give new ones characterizing sequential stringtostring transducers transducers that output weights also play an important role in language and speech processing we give a specific study of stringtoweight transducers including algorithms for determinizing and minimizing these transducers very efficiently and characterizations of the transducers admitting determinization and the corresponding algorithms some applications of these algorithms in speech recognition are described and illustrated finitestate machines have been used in various domains of natural language processingwe consider here the use of a type of transducer that supports very efficient programs sequential transducerswe recall classical theorems and give new ones characterizing sequential stringtostring transducerstransducers that output weights also play an important role in language and speech processingwe give a specific study of stringtoweight transducers including algorithms for determinizing and minimizing these transducers very efficiently and characterizations of the transducers admitting determinization and the corresponding algorithmssome applications of these algorithms in speech recognition are described and illustratedfinitestate machines have been used in many areas of computational linguisticstheir use can be justified by both linguistic and computational argumentslinguistically finite automata are convenient since they allow one to describe easily most of the relevant local phenomena encountered in the empirical study of languagethey often lead to a compact representation of lexical rules or idioms and clichés that appears natural to linguists graphic tools also allow one to visualize and modify automata which helps in correcting and completing a grammarother more general phenomena such as parsing contextfree grammars can also be dealt with using finitestate machines such as rtn moreover the underlying mechanisms in most of the methods used in parsing are related to automatafrom the computational point of view the use of finitestate machines is mainly motivated by considerations of time and space efficiencytime efficiency is usually achieved using deterministic automatathe output of deterministic machines depends in general linearly only on the input size and can therefore be considered optimal from this point of viewspace efficiency is achieved with classical minimization algorithms for deterministic automataapplications such as compiler construction have shown deterministic finite automata to be very efficient in practice finite automata now also constitute a rich chapter of theoretical computer science their recent applications in natural language processing which range from the construction of lexical analyzers and the compilation of morphological and phonological rules to speech processing show the usefulness of finitestate machines in many areasin this paper we provide theoretical and algorithmic bases for the use and application of the devices that support very efficient programs sequential transducerswe extend the idea of deterministic automata to transducers with deterministic input that is machines that produce output strings or weights in addition to accepting inputthus we describe methods consistent with the initial 
reasons for using finitestate machines in particular the time efficiency of deterministic machines and the space efficiency achievable with new minimization algorithms for sequential transducersboth time and space concerns are important when dealing with languageindeed one of the recent trends in language studies is a large increase in the size of data setslexical approaches have been shown to be the most appropriate in many areas of computational linguistics ranging from largescale dictionaries in morphology to large lexical grammars in syntaxthe effect of the size increase on time and space efficiency is probably the main computational problem of language processingthe use of finitestate machines in natural language processing is certainly not newthe limitations of the corresponding techniques however are pointed out more often than their advantages probably because recent work in this field is not yet described in computer science textbookssequential finitestate transducers are now used in all areas of computational linguisticsin the following sections we give an extended description of these deviceswe first consider stringtostring transducers which have been successfully used in the representation of largescale dictionaries computational morphology and local grammars and syntax and describe the theoretical bases for their usein particular we recall classical theorems and provide some new ones characterizing these transducerswe then consider the case of sequential stringtoweight transducerslanguage models phone lattices and word lattices are among the objects that can be represented by these transducers making them very interesting from the point of view of speech processingwe give new theorems extending the known characterizations of stringtostring transducers to these transducerswe define an algorithm for determinizing stringtoweight transducers characterize the unambiguous transducers admitting determinization and describe an algorithm to test determinizabilitywe also give an algorithm to minimize sequential transducers that has a complexity equivalent to that of classical automata minimization and that is very efficient in practiceunder certain restrictions the minimization of sequential stringtoweight transducers can also be performed using the determinization algorithmwe describe the corresponding algorithm and give the proof of its correctness in the appendixwe have used most of these algorithms in speech processingin the last section we describe some applications of determinization and minimization of stringtoweight transducers in speech recognition illustrating them with several results that show them to be very efficientour implementation of the determinization is such that it can be used on the fly only the necessary part of the transducer needs to be expandedthis plays an important role in the space and time efficiency of speech recognitionthe reduction in the size of word lattices that these algorithms provide sheds new light on the complexity of the networks involved in speech processingsequential stringtostring transducers are used in various areas of natural language processingboth determinization and minimization algorithms have been defined for the class of psubsequential transducers which includes sequential stringtostring transducersin this section the theoretical basis of the use of sequential transducers is describedclassical and new theorems help to indicate the usefulness of these devices as well as their characterizationwe consider here sequential transducers 
namely transducers with a deterministic inputat any state of such transducers at most one outgoing arc is labeled with a given element of the alphabetfigure 1 gives an example of a sequential transducernotice that output labels might be strings including the empty string e the empty string is not allowed on input howeverthe output of a sequential transducer is not necessarily deterministicthe one in figure 1 is not since for instance two distinct arcs with output labels b leave the state 0sequential transducers are computationally interesting because their use with a given input does not depend on the size of the transducer but only on the size of the inputsince using a sequential transducer with a given input consists of following the only path corresponding to the input string and in writing consecutive output labels along this path the total computational time is linear in the size of the input if we consider that the cost of copying out each output label does not depend on its lengthmore formally a sequential stringtostring transducer t is a 7tuple with the functions 6 and o are generally partial functions a state q e q does not necessarily admit outgoing transitions labeled on the input side with all elements of the alphabetthese functions can be extended to mappings from q x e by the following classical recurrence relations thus a string w e e is accepted by t iff 6 e f and in that case the output of the transducer is crsequential transducers can be generalized by introducing the possibility of generating an additional output string at final states the application of the transducer to a string can then possibly finish with the concatenation of such an output string to the usual outputsuch transducers are called subsequential transducerslanguage processing often requires a more general extensionindeed the ambiguities encountered in languageambiguity of grammars of morphological analyzers or that of pronunciation dictionaries for instancecannot be taken into account when using sequential or subsequential transducersthese devices associate at most a single output to a given inputin order to deal with ambiguities one can introduce psubsequential transducers namely transducers provided with at most p final output strings at each final statefigure 2 gives an example of a 2subsequential transducerhere the input string w aa gives two distinct outputs aaa and aabsince one cannot find any reasonable case in language in which the number of ambiguities would be infinite psubsequential transducers seem to be sufficient for describing linguistic ambiguitieshowever the number of ambiguities could be very large in some casesnotice that 1subsequential transducers are exactly the subsequential transducerstransducers can be considered to represent mappings from strings to stringsas such they admit the composition operation defined for mappings a useful operation that allows the construction of more complex transducers from simpler onesthe result of the application of t2 o ti to a string s can be computed by first considering all output strings associated with the input s in the transducer ti then applying t2 to all of these stringsthe output strings obtained after this application represent the result in fact instead of waiting for the result of the application of ti to be completely given one can gradually apply t2 to the output strings of ti yet to be completedthis is the basic idea of the composition algorithm which allows the transducer t2 0 ti to be directly constructed given ti and t2 we define 
sequential functions to be those functions that can be represented by sequential transducerswe noted previously that the result of the composition of two transducers is a transducer that can be directly constructedthere exists an efficient algorithm for the general case of the composition of transducers the following theorem gives a more specific result for the case of subsequential and psubsequential functions which expresses their closure under compositionwe use the expression psubsequential in two ways hereone means that a finite number of example of a subsequential transducer 72 ambiguities is admitted the second indicates that this number equals exactly p let f e a be a sequential and g a 9 be a sequential function then g of is sequential we prove the theorem in the general case of psubsequential transducersthe case of sequential transducers first proved by choffrut can be derived from the general case in a trivial waylet t1 be a psubsequential transducer representing f ti and t2 a qsubsequential transducer representing g pi and p2 denote the final output functions of t1 and 72 which map fi to p and f2 to q respectively pi represents for instance the set of final output strings at a final state r define the pqsubsequential transducer t by q qi x q2 i f e q qi e fi 82 n f2 01 with the following transition and output functions and with the final output function defined by v e f p 02 p2 clearly according to the definition of composition the transducer r realizes g ofthe definition of p shows that it admits at most pq distinct output strings for a given input onethis ends the proof of the theorem0 figure 3 gives an example of a 1subsequential or subsequential transducer t2the result of the composition of the transducers ti and 12 is shown in figure 4states in the transducer t3 correspond to pairs of states of ti and t2the composition consists essentially of making the intersection of the outputs of ti with the inputs of 72transducers admit another useful operation uniongiven an input string w a transducer union of t1 and t2 gives the set union of the strings obtained by application of ti to w and 12 to w we denote by tl t2 the union of ti and t2the following theorem specifies the type of the transducer ti 72 implying in particular the closure under union of psubsequential transducersit can be proved in a way similar to the composition theoremtheorem 2 let f e a be a sequential and g e a be a sequential function then g f is 2subsequential subsequential2subsequential transducer 73 obtained by composition of 71 and 12the union transducer ti t2 can be constructed from ti and 72 in a way close to the union of automataone can indeed introduce a new initial state connected to the old initial states of ti and t2 by transitions labeled with the empty string both on input and outputbut the transducer obtained using this construction is not sequential since it contains ctransitions on the input sidethere exists however an algorithm to construct the union of psubsequential and qsubsequential transducers directly as a p qsubsequential transducerthe direct construction consists of considering pairs of states qi being a state of ti or an additional state that we denote by an underscore q2 a state of 72 or an additional state that we denote by an underscorethe transitions leaving are obtained by taking the union of the transitions leaving qi and q2 or by keeping only those of qi if q2 is the underscore state similarly by keeping only those of q2 if qi is the underscore statethe union of the transitions 
is performed in such a way that if qi and q2 both have transitions labeled with the same input label a then only one transition labeled with a is associated to the output label of that transition is the longest common prefix of the output transitions labeled with a leaving qi and q2see mohri for a full description of this algorithmfigure 5 shows the 2subsequential transducer obtained by constructing the union of the transducers 71 and t2 this waynotice that according to the theorem the result could be a priori 3subsequential but these two transducers share no common accepted stringin such cases the resulting transducer is maxsubsequentialthe linear complexity of their use makes sequential or psubsequential transducers both mathematically and computationally of particular interesthowever not all transducers even when they realize functions admit an equivalent sequential or subsequential transducerconsider for instance the function f associated with the classical transducer represented in figure 6 f can be defined by1 vw e x1 f aim if i w i is even owl otherwise this function is not sequential that is it cannot be realized by any sequential transducerindeed in order to start writing the output associated to an input string w a or b according to whether n is even or odd one needs to finish reading the whole input string w which can be arbitrarily longsequential functions namely functions that can be represented by sequential transducers do not allow such unbounded delaysmore generally sequential functions can be characterized among rational functions by the following theorem let f be a rational function mapping e to l f is sequential iff there exists a positive integer k such that the fact that not all rational functions are sequential could reduce the interest of sequential transducersthe following theorem due to elgot and mezei shows however that transducers are exactly compositions of left and right sequential transducerstheorem 4 let f be a partial function mapping e to a f is rational iff there exists a left sequential function 1 e s2 and a right sequential function r s2 a such that f r 0 1left sequential functions or transducers are those we previously definedtheir application to a string proceeds from left to rightright sequential functions apply to strings from right to leftaccording to the theorem considering a new sufficiently large alphabet sz allows one to define two sequential functions 1 and r that decompose a rational function f this result considerably increases the importance of sequential functions in the theory of finitestate machines as well as in the practical use of transducersberstel gives a constructive proof of this theoremgiven a finitestate transducer t one can easily construct a left sequential transducer l and a right sequential transducer r such that r o l t intuitively the extended alphabet si keeps track of the local ambiguities encountered when applying the transducer from left to righta distinct element of the alphabet is assigned to each of these ambiguitiesthe right sequential transducer can be constructed in such a way that these ambiguities can then be resolved from right to leftfigures 7 and 8 give a decomposition of the nonsequential transducer t of figure 6the symbols of the alphabet q xl x2 store information about the size of the input string w the output of l ends with x1 iff i wl is oddthe right sequential function r is then easy to constructmohri transducers in language and speech sequential transducers offer other theoretical advantagesin 
particular while several important tests such as equivalence are undecidable with general transducers sequential transducers have the following decidability property theorem 5 let t be a transducer mapping e to ait is decidable whether t is sequentiala constructive proof of this theorem was given by choffrut an efficient polynomial algorithm for testing the sequentiability of transducers based on this proof was given by weber and klemm choffrut also gave a characterization of subsequential functions based on the definition of a metric on edenote by you a v the longest common prefix of two strings you and v in eit is easy to verify that the following defines a metric on e the following theorem describes this characterization of subsequential functionstheorem 6 let f be a partial function mapping e to a f is subsequential iff the notion of bounded variation can be roughly understood here as follows if d is small enough namely if the prefix that x and y share is sufficiently long compared to their lengths then the same is true of their images by f f and fthis theorem can be extended to describe the case of psubsequential functions by defining a metric do on p for any you up and v e p we define assume f psubsequential and let t be a psubsequential transducer realizing f a transducer ti 1 0 and e dom12 such that d 0 d d thus since s has bounded variation algorithm for the determinization of a transducer ti representing a power series defined on the semiring hence we describe in this section an algorithm for constructing a subsequential transducer 72 equivalent to a given nonsubsequential one ti e i f1 ei a1 p1the algorithm extends our determinization algorithm for stringtostring transducers representing psubsequential functions to the case of transducers outputting weights figure 10 gives the pseudocode of the algorithmwe present the algorithm in the general case of a semiring on which the transducer ti is definedindeed the algorithm we are describing here applies as well to transducers representing power series defined on many other semirings6 we describe the algorithm in the case of the tropical semiringfor the tropical semiring one can replace ed by min and 0 by in the pseudocode of figure 107 the algorithm is similar to the powerset construction used for the determinization of automatahowever since the outputs of two transitions bearing the same input label might differ one can only output the minimum of these outputs in the resulting transducer therefore one needs to keep track of the residual weightshence the subsets q2 that we consider here are made of pairs of states and weightsthe initial weight a2 of 72 is the minimum of all the initial weights of ti the initial state i2 is a subset made of pairs where i is an initial state of and x a1 a2 we use a queue q to maintain the set of subsets q2 yet to be examined as in the classical powerset constructioninitially q contains only the subset i2the subsets q2 are the states of the resulting transducer q2 is a final state of 72 iff it contains at least one pair with q a final state of ri the final output associated to q2 is then the minimum of the final outputs of all the final states in q2 combined with their respective residual weight for each input label a such that there exists at least one state q of the subset q2 admitting an outgoing transition labeled with a one outgoing transition leaving q2 with the input label a is constructed the output 02 of this transition is the minimum of the outputs of all the transitions with input label a 
that leave a state in the subset q2 when combined with the residual weight associated to that state the destination state 62 of the transition leaving q2 is a subset made of pairs where q is a state of ti that can be reached by a transition labeled with a and x the corresponding residual weight x is computed by taking the minimum of all the transitions with input label a that leave a state q of q2 and reach q when combined with the residual weight of q minus the output weight c72 finally 62 is enqueued in q iff it is a new subsetwe denote by n1 the destination state of a transition t e eihence n1 q if t e eithe sets r of and v used in the algorithm are defined by denotes the set of pairs elements of the subset q2 having transitions labeled with the input a7 denotes the set of triples where is a pair in q2 such that q admits a transition with input label a v is the set of states q that can be reached by transitions labeled with a from the states of the subset q2the algorithm is illustrated in figures 11 and 12notice that the input ab admits several outputs in pi 1 1 21 3 43 3 63 5 8only one of these outputs is kept in the determinized transducer 12 since in the tropical semiring one is only interested in the minimum outputs for any given stringnotice that several transitions might reach the same state with a priori different residual weightssince one is only interested in the best path namely the path corresponding to the minimum weight one can keep the minimum of these weights for a given state element of a subset in the next section we give a set of transducers ti for which the determinization algorithm terminatesthe following theorem shows the correctness of the algorithm when it terminatestransducer it2 obtained by power series determinization of theorem 10 assume that the determinization algorithm terminates then the resulting transducer 72 is equivalent to r1we denote by oi the minimum of the outputs of all paths from q to qby construction we have we define the residual output associated to q in the subset 62 as the weight c associated to the pair containing q in 82 it is not hard to show by induction on i wl that the subsets constructed by the algorithm are the sets 62 1q12 1 is a path in t x t with length greater than 1q12 1since t x t has exactly 1q12 states h admits at least one cycle at some state labeled with a nonempty input string u2this shows the existence of the factorization above and proves the lemma0 let ti be a stringtoweight transducer defined on the tropical semirirtgif 71 has the twins property then it is determinizableproof assume that 7 has the twins propertyif the determinization algorithm does not halt there exists at least one subset of 2 q0 q such that the algorithm generates an infinite number of distinct weighted subsets let a c e be the set of strings w such that the states of 62 be iqo 1we have vw e a 62 1 since a is infinite and since in each weighted subset there exists a null residual output there exist io 0 112112 1using the lemma 1 there exists a factorization of 7r0 and iti of the type since in and 7t are shortest paths we have o o 01 and 001 hence 0 cby induction on 170 we can therefore find shortest paths ho and 111 from 10 to qo with length less or equal to la 0 csince c cr e r c e r and c is finitethis ends the proof of the theorem0 there are transducers that do not have the twins property and that are still determinizableto characterize such transducers more complex conditions that we will not describe here are requiredhowever in the case of trim 
unambiguous transducers the twins property provides a characterization of determinizable transducerslet ti be a trim unambiguous stringtoweight transducer defined on the tropical semiringthen ti is determirtizable iff it has the twins propertyproof according to the previous theorem if t1 has the twins property then it is determinizableassume now that t does not have the twins property then there exist at least two states q and q in q that are not twinsthere exists e e such that q e 61q e 81 and 01 01consider the weighted subsets 52 with k e ai constructed by the determinization algorithma subset 62 contains the pairs and we will show that these subsets are all distinctthis will prove that the determinization algorithm does not terminate if ti does not have the twins propertysince ri is a trim unambiguous transducer there exits only one path in ti from i to q or to q with input string yousimilarly the cycles at q and q labeled with v are uniquethus there exist i e i and i e i such that since 0 0 0 equation 20 shows that the subsets 62 are all distinct0 the characterization of determinizable transducers provided by theorem 12 leads to the definition of an algorithm for testing the determinizability of trim unambiguous transducersbefore describing the algorithm we introduce a lemma that shows that it suffices to examine a finite number of paths to test the twins propertylemma 2 let ti be a trim unambiguous stringtoweight transducer defined on the tropical serniringti has the twins property iff v e 21uvi 21 be a trim unambiguous stringtoweight transducer defined on the tropical semiringthere exists an algorithm to test the determinizability of tiproof according to theorem 12 testing the determinizability of ti is equivalent to testing for the twins property we define an algorithm to test this property our algorithm is close to that of weber and klemm for testing the sequentiability of stringtostring transducersit is based on the construction of an automaton a similar to the cross product of ti with itselflet k c 7 be the finite set of real numbers defined by mohri transducers in language and speech by construction two states qi and q2 of q can be reached by the same string you lul 0 such that we prove that weights are also the same in si and 52let hy be the set of strings labeling the paths from i3 to qi in t1 cffi is the weight output corresponding to a string w e r consider the accumulated weights c11 1 i 2 0 j k in determinization of t each cl for instance corresponds to the weight not yet output in the paths reaching siit needs to be added to the weights of any path from qj e s1 to a final state in revin other terms the determinization algorithm will assign the weight c w ai to a path labeled with wr reaching a final state of t from sitquot is obtained by pushing from ttherefore the weight of such we noticed in the proof of the determinization theorem that the minimum weight of the pairs of any subset is 0therefore vj e o k c11 c21 and s2 s1this ends the proof of the theorem0 figures 2325 illustrate the minimization of stringtoweight transducers using the determinization algorithmthe transducer 02 of figure 23 is obtained from that of figure 15 ia by reversing itthe application of determinization to 02 results in 03 notice that since 01 is subsequential according to the theorem the transducer 03 is minimal too03 is then reversed and determinized the resulting transducer 04 is minimal and equivalent to 01comparing the transducer 04 to the transducer of figure 17 si we note that both are 
minimal and realize the same function. The transducer si provides output weights as soon as possible; it can be obtained from 04 by pushing. I thank Michael Riley and also the CL reviewers for their comments on earlier versions of this paper, Fernando Pereira and Michael Riley for discussions, Andrej Ljolje for providing the word lattices cited herein, Phil Terscaphen for useful advice, and Dominique Perrin for his help in finding references relating to the minimization of automata by determinization.
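The linear-time behaviour of sequential transducers described above can be illustrated with a small sketch. This is a minimal Python rendering of the 7-tuple definition for the plain sequential string-to-string case, with dictionaries standing in for the partial transition and output functions; the class and attribute names are mine, not the paper's.

```python
class SequentialTransducer:
    """Sketch of a sequential (input-deterministic) string-to-string transducer.

    delta maps (state, input symbol) -> next state
    sigma maps (state, input symbol) -> output string (possibly empty)
    Both are partial: a missing key means the input is rejected.
    """

    def __init__(self, initial, finals, delta, sigma):
        self.initial = initial
        self.finals = finals          # set of final states
        self.delta = delta            # dict: (state, symbol) -> state
        self.sigma = sigma            # dict: (state, symbol) -> output string

    def apply(self, word):
        """Follow the unique path for `word`; time is linear in len(word)."""
        state, out = self.initial, []
        for a in word:
            if (state, a) not in self.delta:
                return None           # input not accepted
            out.append(self.sigma[(state, a)])
            state = self.delta[(state, a)]
        return "".join(out) if state in self.finals else None
```

A subsequential or p-subsequential variant would additionally store one or more final output strings per final state and append one of them when the input is accepted.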
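The pair-state construction used in the composition theorem above can be sketched directly on top of the class from the previous example: each arc of T1 is followed, T2 is applied to the output string that the arc emits, and only reachable pairs of states are kept. This sketch assumes T2 reads single-character symbols and handles the plain sequential case without final output strings; it is illustrative, not the paper's construction verbatim.

```python
from collections import deque

def compose(t1, t2):
    """Sketch of T2 o T1 for two SequentialTransducer objects (previous sketch)."""

    def run_from(t, state, word):
        # apply transducer t to `word` starting in `state`;
        # return (output string, landing state), or None if undefined
        out = []
        for a in word:
            if (state, a) not in t.delta:
                return None
            out.append(t.sigma[(state, a)])
            state = t.delta[(state, a)]
        return "".join(out), state

    start = (t1.initial, t2.initial)
    delta, sigma, finals = {}, {}, set()
    queue, seen = deque([start]), {start}
    while queue:
        q1, q2 = queue.popleft()
        if q1 in t1.finals and q2 in t2.finals:
            finals.add((q1, q2))
        for (p, a), q1_next in t1.delta.items():
            if p != q1:
                continue
            step = run_from(t2, q2, t1.sigma[(p, a)])
            if step is None:
                continue                       # T2 rejects T1's output here
            out, q2_next = step
            delta[((q1, q2), a)] = (q1_next, q2_next)
            sigma[((q1, q2), a)] = out
            if (q1_next, q2_next) not in seen:
                seen.add((q1_next, q2_next))
                queue.append((q1_next, q2_next))
    return SequentialTransducer(start, finals, delta, sigma)
```

As in the theorem, states of the result are pairs of component states, and the output attached to an arc is what T2 emits when fed the output that T1 emits on that arc.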
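The direct union construction described above keeps a single arc when both component states have a transition with the same input label, emitting only the longest common prefix of the two output labels and postponing the leftover suffixes. The helper below shows just that factoring step; the names are illustrative.

```python
def common_prefix(u, v):
    """Longest common prefix of two output strings."""
    i = 0
    while i < min(len(u), len(v)) and u[i] == v[i]:
        i += 1
    return u[:i]

def merge_outputs(out1, out2):
    """Factor two same-label outputs into (shared prefix, leftover suffixes);
    the suffixes must be emitted further along each component's paths."""
    p = common_prefix(out1, out2)
    return p, out1[len(p):], out2[len(p):]

print(merge_outputs("abc", "abd"))   # ('ab', 'c', 'd')
```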
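The determinization algorithm for string-to-weight transducers described above, which builds weighted subsets of (state, residual weight) pairs over the tropical semiring (min, +), can be sketched as follows. The data layout (dictionaries keyed by (state, label)) and the names are my own; as the text discusses, the loop terminates only for determinizable inputs, for example transducers with the twins property.

```python
from collections import deque

def determinize_tropical(init_weights, finals, trans, rho):
    """Weighted-subset determinization over the tropical semiring (min, +).

    init_weights: dict initial state -> initial weight
    finals:       set of final states
    trans:        dict (state, label) -> list of (arc weight, destination)
    rho:          dict final state -> final output weight
    """
    lam2 = min(init_weights.values())                  # initial weight of the result
    start = frozenset((q, w - lam2) for q, w in init_weights.items())
    arcs, final_out = {}, {}
    queue, seen = deque([start]), {start}
    while queue:
        subset = queue.popleft()
        fin = [v + rho[q] for q, v in subset if q in finals]
        if fin:
            final_out[subset] = min(fin)               # final output of this subset
        labels = {a for q, _ in subset for (p, a) in trans if p == q}
        for a in labels:
            # weight emitted now: best residual + arc weight over the subset
            w2 = min(v + w for q, v in subset if (q, a) in trans
                     for w, _ in trans[(q, a)])
            dests = {}
            for q, v in subset:
                for w, q2 in trans.get((q, a), []):
                    dests[q2] = min(dests.get(q2, float("inf")), v + w - w2)
            new = frozenset(dests.items())
            arcs[(subset, a)] = (w2, new)
            if new not in seen:
                seen.add(new)
                queue.append(new)
    return lam2, start, arcs, final_out

# tiny example: two arcs with the same label but different weights; only the
# best (minimum-weight) path per input string survives in the result
trans = {(0, "a"): [(1.0, 1), (3.0, 2)], (1, "b"): [(1.0, 3)], (2, "b"): [(5.0, 3)]}
print(determinize_tropical({0: 0.0}, {3}, trans, {3: 0.0}))
```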
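The text notes that, under certain restrictions, sequential string-to-weight transducers can be minimized by applying the determinization algorithm to reversed machines, as in the figures discussed above. The sketch below shows only the unweighted skeleton of that reverse/determinize/reverse/determinize pattern (Brzozowski-style minimization of an ordinary automaton); it is a stand-in for the weighted construction, not the paper's algorithm.

```python
from collections import deque

def reverse(aut):
    """aut = (initial states, final states, set of (src, label, dst) arcs)."""
    inits, finals, arcs = aut
    return (finals, inits, {(q, a, p) for (p, a, q) in arcs})

def determinize(aut):
    """Classical subset construction, returned in the same triple format."""
    inits, finals, arcs = aut
    start = frozenset(inits)
    out_arcs, out_finals = set(), set()
    queue, seen = deque([start]), {start}
    while queue:
        s = queue.popleft()
        if s & finals:
            out_finals.add(s)
        for a in {a for (p, a, q) in arcs if p in s}:
            t = frozenset(q for (p, b, q) in arcs if p in s and b == a)
            out_arcs.add((s, a, t))
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return ({start}, out_finals, out_arcs)

def minimize_by_determinization(aut):
    """Reverse, determinize, reverse, determinize; the weighted analogue in the
    text follows the same pattern but only under the stated restrictions."""
    return determinize(reverse(determinize(reverse(aut))))
```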
J97-2003
Finite-state transducers in language and speech processing. Finite-state machines have been used in various domains of natural language processing. We consider here the use of a type of transducer that supports very efficient programs: sequential transducers. We recall classical theorems and give new ones characterizing sequential string-to-string transducers. Transducers that output weights also play an important role in language and speech processing. We give a specific study of string-to-weight transducers, including algorithms for determinizing and minimizing these transducers very efficiently, and characterizations of the transducers admitting determinization and the corresponding algorithms. Some applications of these algorithms in speech recognition are described and illustrated. Application of cascades of weighted string transducers has been well studied in this work.
stochastic inversion transduction grammars and bilingual parsing of parallel corpora technology introduce a novel inversion transduction formalism bilingual modeling of sentencepairs and the concept of parsing a variety of parallel corpus analysis applications aside from the bilingual orientation three major features distinguish the formalism from the finitestate transducers more traditionally found in computational linguistics it skips directly to a contextfree rather than finitestate base it permits a minimal extra degree of ordering flexibility and its probabilistic formulation admits an efficient maximumlikelihood bilingual parsing algorithm a convenient normal form is shown to exist analysis of the formalism expressiveness suggests that it is particularly well suited to modeling ordering shifts between languages balancing needed flexibility against complexity constraints we discuss a number of examples of how stochastic inversion transduction grammars bring bilingual constraints to bear upon problematic corpus analysis tasks such as segmentation bracketing phrasal alignment and parsing we introduce a novel stochastic inversion transduction grammar formalism for bilingual language modeling of sentencepairs and the concept of bilingual parsing with a variety of parallel corpus analysis applicationsaside from the bilingual orientation three major features distinguish the formalism from the finitestate transducers more traditionally found in computational linguistics it skips directly to a contextfree rather than finitestate base it permits a minimal extra degree of ordering flexibility and its probabilistic formulation admits an efficient maximumlikelihood bilingual parsing algorithma convenient normal form is shown to existanalysis of the formalism expressiveness suggests that it is particularly well suited to modeling ordering shifts between languages balancing needed flexibility against complexity constraintswe discuss a number of examples of how stochastic inversion transduction grammars bring bilingual constraints to bear upon problematic corpus analysis tasks such as segmentation bracketing phrasal alignment and parsingwe introduce a general formalism for modeling of bilingual sentence pairs known as an inversion transduction grammar with potential application in a variety of corpus analysis areastransduction grammar models especially of the finitestate family have long been knownhowever the imposition of identical ordering constraints upon both streams severely restricts their applicability and thus transduction grammars have received relatively little attention in languagemodeling researchthe inversion transduction grammar formalism skips directly to a contextfree rather than finitestate base and permits one extra degree of ordering flexibility while retaining properties necessary for efficient computation thereby sidestepping the limitations of traditional transduction grammarsin tandem with the concept of bilingual languagemodeling we propose the concept of bilingual parsing where the input is a sentencepair rather than a sentencethough inversion transduction grammars remain inadequate as fullfledged translation models bilingual parsing with simple inversion transduction grammars turns out to be very useful for parallel corpus analysis when the true grammar is not fully knownparallel bilingual corpora have been shown to provide a rich source of constraints for statistical analysis the primary purpose of bilingual parsing with inversion transduction grammars is not to flag 
ungrammatical inputs rather the aim is to extract structure from the input data which is assumed to be grammatical in keeping with the spirit of robust parsingthe formalism uniform integration of various types of bracketing and alignment constraints is one of its chief strengthsthe paper is divided into two main partswe begin in the first part below by laying out the basic formalism then show that reduction to a normal form is possiblewe then raise several desiderata for the expressiveness of any bilingual languagemodeling formalism in terms of its constituentmatching flexibility and discuss how the characteristics of the inversion transduction formalism are particularly suited to address these criteriaafterwards we introduce a stochastic version and give an algorithm for finding the optimal bilingual parse of a sentencepairthe formalism is independent of the languages we give examples and applications using chinese and english because languages from different families provide a more rigorous testing groundin the second part we survey a number of sample applications and extensions of bilingual parsing for segmentation bracketing phrasal alignment and other parsing tasksa transduction grammar describes a structurally correlated pair of languagesfor our purposes the generative view is most convenient the grammar generates transductions so that two output streams are simultaneously generated one for each languagethis contrasts with the common inputoutput view popularized by both syntaxdirected transduction grammars and finitestate transducersthe generative view is more appropriate for our applications because the roles of the two languages are symmetrical in contrast to the usual applications of syntaxdirected transduction grammarsmoreover the inputoutput view works better when a machine for accepting one of the languages has a high degree of determinism which is not the case hereour transduction model is contextfree rather than finitestatefinitestate transducers or fsts are well known to be useful for specific tasks such as analysis of inflectional morphology texttospeech conversion and nominal number and temporal phrase normalization fsts may also be used to parse restricted classes of contextfree grammars however the bilingual corpus analysis tasks we consider in this paper are quite different from the tasks for which fsts are apparently well suitedour domain is broader and the model possesses very little a priori specific structural knowledge of the languageas a stepping stone to inversion transduction grammars we first consider what a contextfree model known as a simple transduction grammar would look likesimple transduction grammars are restricted cases of the general class of contextfree syntaxdirected transduction grammars however we will avoid the term syntaxdirected here so as to deemphasize the inputoutput connotation as discussed abovea simple transduction grammar can be written by marking every terminal symbol for a particular output streamthus each rewrite rule emits not one but two streamsfor example a rewrite rule of the form a bxiy2czi means that the terminal symbols x and z are symbols of the language l1 emitted on stream 1 while y is a symbol of a simple transduction grammar and an invertedorientation production the language l2 emitted on stream 2it follows that every nonterminal stands for a class of derivable substring pairswe can use a simple transduction grammar to model the generation of bilingual sentence pairsas a mnemonic convention we usually use the alternative 
notation a bxlyczle to associate matching output tokensthough this additional information has no formal generative effect it reminds us that xly must be a valid entry in the translation lexiconwe call a matched terminal symbol pair such as xly a couplethe null symbol e means that no output token is generatedwe call x an lisingleton and cly an l2singletonconsider the simple transduction grammar fragment shown in figure 1the simple transduction grammar can generate for instance the following pair of english and chinese sentences in translation notice that each nonterminal derives two substrings one in each languagethe two substrings are counterparts of each otherin fact it is natural to write the parse trees together eethee financial14v secretaryni nn ni and fl firinp inp be accountableftrtivv vp sr is of course in general simple transduction grammars are not very useful precisely because they require the two languages to share exactly the same grammatical structure for example the following sentence pair from our corpus cannot be generated to make transduction grammars truly useful for bilingual tasks we must escape the rigid parallel ordering constraint of simple transduction grammarsat the same time any relaxation of constraints must be traded off against increases in the computational complexity of parsing which may easily become exponentialthe key is to make the relaxation relatively modest but still handle a wide range of ordering variationsthe inversion transduction grammar formalism only minimally extends the generative power of a simple transduction grammar yet turns out to be surprisingly effective1 like simple transduction grammars itgs remain a subset of contextfree transduction grammars but this view is too general to be of much helpthe productions of an inversion transduction grammar are interpreted just as in a simple transduction grammar except that two possible orientations are allowedpure simple transduction grammars have the implicit characteristic that for both output streams the symbols generated by the righthandside constituents of a production are concatenated in the same lefttoright orderinversion transduction grammars also allow such productions which are said to have straight orientationin addition however inversion transduction grammars allow productions with inverted orientation which generate output for stream 2 by emitting the constituents on a production righthand side in righttoleft orderwe indicate a production orientation with explicit notation for the two varieties of concatenation operators on stringpairsthe operator 1 performs the quotusualquot pairwise concatenation so that ab yields the stringpair where c1 bi and c2 a2b2but the operator concatenates constituents on output stream 1 while reversing them on stream 2 so that c1 aibi but c2 b2a2since inversion is permitted at any level of rule expansion a derivation may intermix productions of either orientation within the parse treefor example if the invertedorientation production of figure 1 is added to the earlier simple transduction grammar sentencepair can then be generated as follows we can show the common structure of the two sentences more clearly and compactly with the aid of the notation inversion transduction grammar parse treealternatively a graphical parse tree notation is shown in figure 2 where the level of bracketing is indicated by a horizontal linethe english is read in the usual depthfirst lefttoright order but for the chinese a horizontal line means the right subtree is traversed before 
the leftparsing in the case of an itg means building matched constituents for input sentencepairs rather than sentencesthis means that the adjacency constraints given by the nested levels must be obeyed in the bracketings of both languagesthe result of the parse yields labeled bracketings for both sentences as well as a bracket alignment indicating the parallel constituents between the sentencesthe constituent alignment includes a word alignment as a byproductthe nonterminals may not always look like those of an ordinary cfgclearly the nonterminals of an itg must be chosen in a somewhat different manner than for a monolingual grammar since they must simultaneously account for syntactic patterns of both languagesone might even decide to choose nonterminals for an itg that do not match linguistic categories sacrificing this to the goal of ensuring that all corresponding substrings can be alignedan itg can accommodate a wider range of ordering variation between the ianan extremely distorted alignment that can be accommodated by an itg guages than might appear at first blush through appropriate decomposition of productions in conjuction with introduction of new auxiliary nonterminals where neededfor instance even messy alignments such as that in figure 3 can be handled by interleaving orientations this bracketing is of course linguistically implausible so whether such parses are acceptable depends on one objectivemoreover it may even remain possible to align constituents for phenomena whose underlying structure is not contextfreesay ellipsis or coordinationas long as the surface structures of the two languages fortuitously parallel each other we will return to the subject of itgs ordering flexibility in section 4we stress again that the primary purpose of itgs is to maximize robustness for parallel corpus analysis rather than to verify grammaticality and therefore writing grammars is made much easier since the grammars can be minimal and very leakywe consider elsewhere an extreme special case of leaky itgs inversioninvariant transduction grammars in which all productions occur with both orientations as the applications below demonstrate the bilingual lexical constraints carry greater importance than the tightness of the grammarformally an inversion transduction grammar or itg is denoted by g where at is a finite set of nonterminals wi is a finite set of words of language 1 w2 is a finite set of words of language 2 r is a finite set of rewrite rules and s e ai is the start symbolthe space of wordpairs x x contains lexical translations denoted xy and singletons denoted x or cy where x e wi and y e w2each production is either of straight orientation written a aia2 ad or of inverted orientation written a where ai e alu x and r is the rank of the productionthe set of transductions generated by g is denoted tthe sets of strings generated by g for the first and second output languages are denoted l1 and l2 respectivelywe now show that every itg can be expressed as an equivalent itg in a 2normal form that simplifies algorithms and analyses on itgsin particular the parsing algorithm of the next section operates on itgs in normal formthe availability of a 2normal form is a noteworthy characteristic of itgs no such normal form is available for unrestricted contextfree transduction grammars the proof closely follows that for standard cfgs and the proofs of the lemmas are omittedlemma 1 for any inversion transduction grammar g there exists an equivalent inversion transduction grammar g where t t such that 
for any inversion transduction grammar g there exists an equivalent inversion transduction grammar g where t t such that the righthand side of any production of g contains either a single terminalpair or a list of nonterminalslemma 3 for any inversion transduction grammar g there exists an equivalent inversion transduction grammar g where t t such that g does not contain any productions of the form a bfor any inversion transduction grammar g there exists an equivalent inversion transduction grammar g in which every production takes one of the following forms a yn3 yn2 let additional stringpairs are generated due to the new productions0 henceforth all transduction grammars will be assumed to be in normal formwe now turn to the expressiveness desiderata for a matching formalismit is of course difficult to make precise claims as to what characteristics are necessary andor sufficient for such a model since no cognitive studies that are directly pertinent to bilingual constituent alignment are availablenonetheless most related previous parallel corpus analysis models share certain conceptual approaches with ours loosely based on crosslinguistic theories related to constituency case frames or thematic roles as well as computational feasibility needsbelow we survey the most common constraints and discuss their relation to itgscrossing constraintsarrangements where the matchings between subtrees cross each another are prohibited by crossing constraints unless the subtrees immediate parent constituents are also matched to each otherfor example given the constituent matchings depicted as solid lines in figure 4 the dottedline matchings corresponding to potential lexical translations would be ruled illegalcrossing constraints are implicit in many phrasal matching approaches both constituencyoriented and dependencyoriented the theoretical crosslinguistic hypothesis here is that the core arguments of frames tend to stay together over different languagesthe constraint is also useful for computational reasons since it helps avoid exponential bilingual matching timesitgs inherently implement a crossing constraint in fact the version enforced by itgs is even strongerthis is because even within a single constituent immediate subtrees are only permitted to cross in exact inverted orderas we shall argue below this restriction reduces matching flexibility in a desirable fashionrank constraintsthe second expressiveness desideratum for a matching formalism is to somehow limit the rank of constituents which dictates the span over which matchings may crossas the number of subtrees of an liconstituent grows the number of possible matchings to subtrees of the corresponding l2constituent grows combinatorially with corresponding time complexity growth on the matching processmoreover if constituents can immediately dominate too many tokens of the sentences the crossing constraint loses effectivenessin the extreme if a single constituent immediately dominates the entire sentencepair then any permutation is permissible without violating the crossing constraintthus we would like to constrain the rank as much as possible while still permitting some reasonable degree of permutation flexibilityrecasting this issue in terms of the general class of contextfree transduction grammars the number of possible subtree matchings for a single constituent grows combinatorially with the number of symbols on a production righthand sidehowever it turns out that the jig restriction of allowing only matchings with straight or inverted 
orientation effectively cuts the combinatorial growth while still maintaining flexibility where neededto see how itgs maintain needed flexibility consider figure 5 which shows all 24 possible complete matchings between two constituents of length four eachnearly all of these22 out of 24can be generated by an itg as shown by the parse trees 3 the 22 permitted matchings are representative of real transpositions in word order between the englishchinese sentences in our datathe only two matchings that cannot be generated are very distorted transpositions that we might call quotinsideoutquot matchingswe have been unable to find real examples in our data of constituent arguments undergoing quotinsideoutquot transpositionnote that this hypothesis is for fixedwordorder languages that are lightly inflected such as english and chineseit would not be expected to hold for socalled scrambling or freewordorder languages or heavily inflected languageshowever inflections provide alternative surface cues for determining constituent roles so it would not be necessary to apply the itg model to such languageson the other hand to see how itgs cut combinatorial growth consider the table in figure 6 which compares growth in the number of legal complete matchings on a pair of subconstituent sequencesthe third column shows the number of all possible complete matchings between two constituents with a rank of r subconstituents each transduction grammarscompare this against the second column which shows the number of complete matchings that can be accepted by an itg between a pair of lengthr sequences of subconstituentsthe fourth column shows the proportion of matchings that itgs can acceptflexibility is nearly total for sequences of up to are 2 are not needed we show in the subsections below that this minimal transduction grammar in normal form is generatively equivalent to any reasonable bracketing transduction grammarmoreover we also show how postprocessing using rotation and flattening operations restores the rank flexibility so that an output bracketing can hold more than two immediate constituents as shown in figure 11the bu distribution actually encodes the englishchinese translation lexicon with degrees of probability on each potential word translationwe have been using a lexicon that was automatically learned from the hkust englishchinese parallel bilingual corpus via statistical sentence alignment and statistical chinese word and collocation extraction followed by an them wordtranslationlearning procedure the latter stage gives us the probabilities directlyfor the two singleton productions which permit any word in either sentence to be unmatched a small constant can be chosen for the probabilities b1 and b1 so that the optimal bracketing resorts to these productions only when it is otherwise impossible to match the singletonsthe parameter a here is of no practical effect and is chosen to be very small relative to the by probabilities of lexical translation pairsthe result is that the maximumlikelihood parser selects the parse tree that best meets the combined lexical translation preferences as expressed by the by probabilitiesprepost positional biasesmany bracketing errors are caused by singletonswith singletons there is no crosslingual discrimination to increase the certainty between alternative bracketingsa heuristic to deal with this is to specify for each of the two languages whether prepositions or postpositions are more common where quotprepositionquot here is meant not in the usual partofspeech sense 
but rather in a broad sense of the tendency of function words to attach left or rightthis simple strategem is effective because the majority of unmatched singletons are function words that lack counterparts in the other languagethis observation holds assuming that the translation lexicon coverage is reasonably goodfor both english and chinese we specify a prepositional bias which means that singletons are attached to the right whenever possiblea singletonrebalancing algorithmwe give here an algorithm for further improving the bracketing accuracy in cases of singletonsconsider the following bracketing produced by the algorithm of the previous section the prepositional bias has already correctly restricted the singleton thee to attach to the right but of course the does not belong outside the rest of the sentence but rather with authoritythe problem is that singletons have no discriminative power between alternative bracket matchingsthey only contribute to the ambiguitywe can minimize the impact by moving singletons as deep as possible closer to the individual word they precede or succeed or in other words we can widen the scope of the brackets immediately following the singletonin general this improves precision since widescope brackets are less constrainingthe algorithm employs a rebalancing strategy reminiscent of balanced tree structures using left and right rotationsa left rotation changes a structure to a c structure and vice versa for a right rotationthe task is complicated by the presence of both and brackets with both l1 and l2singletons since each combination presents different interactionsto be legal a rotation must preserve symbol order on both output streamshowever the following lemma shows that any subtree can always be rebalanced at its root if either of its children is a singleton of either languagelet x be an lisingleton y be an l2singleton and a b c be arbitrary terminal or nonterminal symbolsthen the following properties hold for the and operators where the relation means that the same two output strings are generated and the matching of the symbols is preserved the method of figure 8 modifies the input tree to attach singletons as closely as possible to couples but remaining consistent with the input tree in the following sense singletons cannot quotescapequot their immediately surrounding bracketsthe key is that for any given subtree if the outermost bracket involves a singleton that should be rotated into a subtree then exactly one of the singleton rotation properties will applythe method proceeds depthfirst sinking each singleton as deeply as possiblealternative itg parse trees for the same matchingfor example after rebalancing sentence is bracketed as follows rthee authorityt will14f 11 flattening the bracketingin the worst case both sentences might have perfectly aligned words lending no discriminative leverage whatsoever to the bracketerthis leaves a very large number of choices if both sentences are of length 1 then there are possible bracketings with rank 2 none of which is better justified than any otherthus to improve accuracy we should reduce the specificity of the bracketing commitment in such casesan inconvenient problem with ambiguity arises in the simple bracketing grammar above illustrated by figure 9 there is no justification for preferring either or over the otherin general the problem is that both the straight and inverted concatenation operations are associativethat is aaa and aaa generate the same two output strings which are also generated by aaa and 
similarly with and a which can also be generated by thus the parse shown in is preferable to either or since it does not make an unjustifiable commitment either wayproductions in the form of however are not permitted by the normal form we use in which each bracket can only hold two constituentsparsing must overcommit since the algorithm is always forced to choose between and c structures even when no choice is clearly betterwe could relax the normal form constraint but longer productions clutter the grammar unnecessarily and in the case of generic bracketing grammars reduce parsing efficiency considerablyinstead we employ a more complicated but betterconstrained grammar as shown in figure 10 designed to produce only canonical tailrecursive parseswe differentiate type a and b constituents representing subtrees whose roots have straight and inverted orientation respectivelyunder this grammar a series of nested constituents with the same orientation will always have a leftheavy derivationthe guarantee that parsing will produce a tailrecursive tree facilitates easily identification of those nesting levels that are associative so that those levels can be quotflattenedquot by a postprocessing stage after parsing into nonnormal form trees like the one in figure 9the algorithm proceeds bottomup eliminating as many brackets as possible by making use of the associativity equivalences abc s abc and c the singleton bidirectionality and flipping commutativity equivalences can also be applied whenever they render the associativity equivalences applicableexperimentapproximately 2000 sentencepairs with both english and chinese lengths of 30 words or less were extracted from our corpus and bracketed using the algorithm describedseveral additional criteria were used to filter out unsuitable sentencepairsif the lengths of the pair of sentences differed by more than a 21 ratio the pair was rejected such a difference usually arises as the result of an earlier error in automatic sentence alignmentsentences containing more than one word absent from the translation lexicon were also rejected the bracketing method is not intended to be robust against lexicon inadequacieswe also rejected sentencepairs with fewer than two matching words since this gives the bracketing algorithm no discriminative leverage such pairs accounted for less than 2 of the input dataa random sample of the bracketed sentencepairs was then drawn and the bracket precision was computed under each criterion for correctnessexamples are shown in figure 11the bracket precision was 80 for the english sentences and 78 for the chinese sentences as judged against manual bracketingsinspection showed the errors to be due largely to imperfections of our translation lexicon which contains approximately 6500 english words and 5500 chinese words with about 86 translation accuracy so a better lexicon should yield substantial performance improvementmoreover if the resources for a good monolingual partofspeech or grammarbased bracketer such as that of magerman and marcus are available its output can readily be incorporated in complementary fashion as discussed in section 9bracketing output examplesphrasal translation examples at the subsentential level are an essential resource for many mt and mat architecturesthis requirement is becoming increasingly direct for the examplebased machine translation paradigm whose translation flexibility is strongly restricted if the examples are only at the sentential levelit can now be assumed that a parallel bilingual corpus 
may be aligned to the sentence level with reasonable accuracy even for languages as disparate as chinese and english algorithms for subsentential alignment have been developed as well as granularities of the character word collocation and specially segmented levelshowever the identification of subsentential nested phrasal translations within the parallel texts remains a nontrivial problem due to the added complexity of dealing with constituent structuremanual phrasal matching is feasible only for small corpora either for toyprototype testing or for narrowly restricted applicationsautomatic approaches to identification of subsentential translation units have largely followed what we might call a quotparseparsematchquot procedureeach half of the parallel corpus is first parsed individually using a monolingual grammarsubsequently the constituents of each sentencepair are matched according to some heuristic procedurea number of recent proposals can be cast in this framework the parseparsematch procedure is susceptible to three weaknesses the grammars may be incompatible across languagesthe bestmatching constituent types between the two languages may not include the same core argumentswhile grammatical differences can make this problem unavoidable there is often a degree of arbitrariness in a grammar chosen set of syntactic categories particularly if the grammar is designed to be robustthe mismatch can be exacerbated when the monolingual grammars are designed independently or under different theoretical considerations corpus we mean a set of matchings between the constituents of the sentencesthe problem is that in some cases a constituent in one sentence may have several potential matches in the other and the matching heuristic may be unable to discriminate between the optionsin the sentence pair of figure 4 for example both security bureau and police station are potential lexical matches to _vnto choose the best set of matchings an optimization over some measure of overlap between the structural analysis of the two sentences is neededprevious approaches to phrasal matching employ arbitrary heuristic functions on say the number of matched subconstituentsour method attacks the weaknesses of the parseparsematch procedure by using only a translation lexicon with no languagespecific grammar a bilingual rather than monolingual formalism and a probabilistic formulation for resolving the choice between candidate arrangementsthe approach differs in its singlestage operation that simultaneously chooses the constituents of each sentence and the matchings between themthe raw phrasal translations suggested by the parse output were then filtered to remove those pairs containing more than 50 singletons since such pairs are likely to be poor translation examplesexamples that occurred more than once in the corpus were also filtered out since repetitive sequences in our corpus tend to be nongrammatical markupthis yielded approximately 2800 filtered phrasal translations some examples of which are shown in figure 12a random sample of the phrasal translation pairs was then drawn giving a precision estimate of 815although this already represents a useful level of accuracy it does not in our opinion reflect the full potential of the formalisminspection revealed that performance was greatly hampered by our noisy translation lexicon which was automatically learned it could be manually postedited to reduce errorscommercial online translation lexicons could also be employed if availablehigher precision could be also 
achieved without great effort by engineering a small number of broad nonterminal categoriesthis would reduce errors for known idiosyncratic patterns at the cost of manual rule buildingthe automatically extracted phrasal translation examples are especially useful where the phrases in the two languages are not compositionally derivable solely from obvious word translationsan example in figure 11 is the english phrase have acquired new skills paired with its chinese translationthe same principle applies to nested structures also on up to the sentence levelfigure 12 lists further extracted english phrases such as have the right to decide in what way the government would increase their job opportunities never to say never reserves and surpluses starting point for this new policy there will be many practical difficulties in terms of implementation and year ended 31 march 1991 each paired with its chinese translationunder the itg model word alignment becomes simply the special case of phrasal alignment at the parse tree leavesthis gives us an interesting alternative perspective from the standpoint of algorithms that match the words between parallel sentencesby themselves word alignments are of little use but they provide potential anchor points for other applications or for subsequent learning stages to acquire more interesting structuresword alignment is difficult because correct matchings are not usually linearly ordered ie there are crossingswithout some additional constraints any word position in the source sentence can be matched to any position in the target sentence an assumption that leads to high error ratesmore sophisticated word alignment algorithms therefore attempt to model the intuition that proximate constituents in close relationships in one language remain proximate in the otherthe later ibm models are formulated to prefer collocations in the case of word_align a penalty is imposed according to the deviation from an ideal matching as constructed by linear interpolationfrom this point of view the proposed technique is a word alignment method that imposes a more realistic distortion penaltythe tree structure reflects the assumption that crossings should not be penalized as long as they are consistent with constituent structurefigure 7 gives theoretical upper bounds on the matching flexibility as the lengths of the sequences increase where the constituent structure constraints are reflected by high flexibility up to length4 sequences and a rapid dropoff thereafterin other words itgs appeal to a language universals hypothesis that the core arguments of frames which exhibit great ordering variation between languages are relatively few and surface in syntactic proximityof course this assumption oversimplistically blends syntactic and semantic notionsthat semantic frames for different languages share common core arguments is more plausible than that syntactic frames doin effect we are relying on the tendency of syntactic arguments to correlate closely with semanticsif in particular cases this assumption does not hold however the damage is not too greatthe model will simply drop the offending word matchings in experiments with the minimal bracketing transduction grammar the large majority of errors in word alignment were caused by two outside factorsfirst word matchings can be overlooked simply due to deficiencies in our translation lexiconthis accounted for approximately 42 of the errorssecond sentences containing nonliteral translations obviously
cannot be aligned down to the word levelthis accounted for another approximate 50 of the errorsexcluding these two types of errors accuracy on word alignment was 963in other words the tree structure constraint is strong enough to prevent most false matches but almost never inhibits correct word matches when they exista parse may be available for one of the languages especially for wellstudied languages such as englishsince this eliminates all degrees of freedom in the english sentence structure the parse of the chinese sentence must conform with that given for the englishknowledge of english bracketing is thus used to help parse the chinese sentence this method facilitates a kind of transfer of grammatical expertise in one language toward bootstrapping grammar acquisition in anothera parsing algorithm for this case can be implemented very efficientlynote that the english parse tree already determines the split point s for breaking eo t into two constituent subtrees deriving eo s and es t respectively as well as the nonterminal labels j and k for each subtreethe same then applies recursively to each subtreewe indicate this by turning s j and k into deterministic functions on the english constituents writing sst jst and kt to denote the split point and the subtree labels for any constituent e t the following simplifications can then be made to the parsing algorithmfor all english constituents est and all i you v such that o17111v the time complexity for this constrained version of the algorithm drops from e to ea more realistic inbetween scenario occurs when partial parse information is available for one or both of the languagesspecial cases of particular interest include applications where bracketing or word alignment constraints may be derived from external sources beforehandfor example a broadcoverage english bracketer may be availableif such constraints are reliable it would be wasteful to ignore thema straightforward extension to the original algorithm inhibits hypotheses that are inconsistent with given constraintsany entries in the dynamic programming table corresponding to illegal subhypothesesie those that would violate the given bracketnesting or word alignment conditionsare preassigned negative infinity values during initialization indicating impossibilityduring the recursion phase computation of these entries is skippedsince their probabilities remain impossible throughout the illegal subhypotheses will never participate in any ml bibracketingthe running time reduction in this case depends heavily on the domain constraintswe have found this strategy to be useful for incorporating punctuation constraintscertain punctuation characters give constituency indications with high reliability quotperfect separatorsquot include colons and chinese full stops while quotperfect delimitersquot include parentheses and quotation marksit is possible to construct a parser that accepts unrestrictedform rather than normalform grammarsin this case an earleystyle scheme employing an active chart can be usedthe time complexity remains the same as the normalform casewe have found this to be useful in practicefor bracketing grammars of the type considered in this paper there is no advantagehowever for more complex linguistically structured grammars the more flexible parser does not require the unreasonable numbers of productions that can easily arise from normalform requirementsfor most grammars we have found performance to be comparable or faster than the normalform parserthe twin concepts of bilingual 
language modeling and bilingual parsing have been proposedwe have introduced a new formalism the inversion transduction grammar and surveyed a variety of its applications to extracting linguistic information from parallel corporaits amenability to stochastic formulation useful flexibility with leaky and minimal grammars and tractability for practical applications are desirable propertiesvarious tasks such as segmentation word alignment and bracket annotation are naturally incorporated as subproblems and a high degree of compatibility with conventional monolingual methods is retainedin conjunction with automatic procedures for learning word translation lexicons sitgs bring relatively underexploited bilingual wu bilingual parsing correlations to bear on the task of extracting linguistic information for languages less studied than englishwe are currently pursuing several directionswe are developing an iterative training method based on expectationmaximization for estimating the probabilities from parallel training corporaalso in contrast to the applications discussed here which deal with analysis and annotation of parallel corpora we are working on incorporating the sitg model directly into our runtime translation architecturethe initial results indicate excellent performance gainsi would like to thank xuanyin xia eva waiman fong pascale fung and derick wood as well as an anonymous reviewer whose comments were of great value
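The constraint-handling idea described above, where dynamic-programming chart entries that would violate externally supplied bracketing constraints are pre-assigned negative infinity so they can never participate in a maximum-likelihood bibracketing, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the flat dictionary chart, the function names, and the restriction to English-side bracket constraints are all simplifications for brevity.

```python
def crosses(s, t, b_start, b_end):
    """True if the English span (s, t] properly crosses the constraint bracket (b_start, b_end]."""
    return (s < b_start < t < b_end) or (b_start < s < b_end < t)

def init_chart(T, V, english_brackets):
    """Pre-assign impossibility (negative infinity log probability) to every chart
    cell whose English span crosses one of the externally given brackets.
    T and V are the English and Chinese sentence lengths; cells left unset are
    filled later by the usual recursion, which simply skips the cells marked here."""
    chart = {}
    for s in range(T):
        for t in range(s + 1, T + 1):
            if any(crosses(s, t, bs, be) for (bs, be) in english_brackets):
                for u in range(V):
                    for v in range(u + 1, V + 1):
                        chart[(s, t, u, v)] = float("-inf")
    return chart
```

Nesting or word-alignment constraints on the second language could be masked in the same way, and reliable punctuation separators could be translated into such bracket constraints before parsing.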
J97-3002
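The singleton-rebalancing step described in the paper above (J97-3002) can be illustrated with a small sketch that sinks singleton words deeper into the tree by rotation. Only the case where the singleton's sibling bracket has the same orientation is handled, and the tuple-based tree encoding, the string terminals, and the function names are assumptions for illustration rather than the algorithm of figure 8.

```python
def is_singleton(node, singletons):
    """A terminal word with no counterpart in the other language."""
    return isinstance(node, str) and node in singletons

def sink_singletons(node, singletons):
    """Depth-first rebalancing that pushes singleton words as deep as possible,
    e.g. [the [authority will]] -> [[the authority] will].  A node is either a
    word (string) or a triple (orientation, left, right), with orientation '[]'
    for straight and '<>' for inverted brackets.  Only rotations between
    brackets of the same orientation are attempted here."""
    if isinstance(node, str):
        return node
    orient, left, right = node
    left = sink_singletons(left, singletons)
    right = sink_singletons(right, singletons)
    if is_singleton(left, singletons) and isinstance(right, tuple) and right[0] == orient:
        _, r_left, r_right = right
        # rotate the singleton into the right subtree, then keep sinking it
        return (orient, sink_singletons((orient, left, r_left), singletons), r_right)
    if is_singleton(right, singletons) and isinstance(left, tuple) and left[0] == orient:
        _, l_left, l_right = left
        return (orient, l_left, sink_singletons((orient, l_right, right), singletons))
    return (orient, left, right)
```

For example, sink_singletons(('[]', 'the', ('[]', 'authority', 'will')), {'the'}) returns ('[]', ('[]', 'the', 'authority'), 'will'), mirroring the bracketing of the/authority discussed in the text; the mixed-orientation cases covered by the flipping properties are not handled in this sketch.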
stochastic inversion transduction grammars and bilingual parsing of parallel corporawe introduce a novel stochastic inversion transduction grammar formalism for bilingual language modeling of sentencepairs and the concept of bilingual parsing with a variety of parallel corpus analysis applicationsaside from the bilingual orientation three major features distinguish the formalism from the finitestate transducers more traditionally found in computational linguistics it skips directly to a contextfree rather than finitestate base it permits a minimal extra degree of ordering flexibility and its probabilistic formulation admits an efficient maximumlikelihood bilingual parsing algorithma convenient normal form is shown to existanalysis of the formalism expressiveness suggests that it is particularly well suited to modeling ordering shifts between languages balancing needed flexibility against complexity constraintswe discuss a number of examples of how stochastic inversion transduction grammars bring bilingual constraints to bear upon problematic corpus analysis tasks such as segmentation bracketing phrasal alignment and parsingwe use an insideoutside type of training algorithm to learn statistical context free transductionour bilingual bracketing is one of the bilingual shallow parsing approaches studied for chineseenglish word alignmentwe introduce a polynomialtime solution for the alignment problem based on synchronous binary trees
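The bracket-flattening post-processing described in the paper summarized above, which removes nesting levels that are redundant because straight and inverted concatenation are both associative, might look roughly like the following. The n-ary tuple representation and the purely bottom-up traversal are illustrative assumptions, and the sketch ignores the singleton-flipping equivalences that the full method also exploits.

```python
def flatten(node):
    """Bottom-up flattening of redundant nesting levels: a child bracket with
    the same orientation as its parent can be absorbed, turning [a [b c]] into
    [a b c].  A node is either a terminal (string) or a pair (orientation, children)."""
    if isinstance(node, str):
        return node
    orient, children = node
    flat = []
    for child in (flatten(c) for c in children):
        if isinstance(child, tuple) and child[0] == orient:
            flat.extend(child[1])   # same orientation: merge grandchildren in place
        else:
            flat.append(child)
    return (orient, flat)
```

For instance, flatten(('[]', ['a', ('[]', ['b', 'c'])])) returns ('[]', ['a', 'b', 'c']), the non-normal-form tree that avoids an unjustified choice between left- and right-branching structures.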
automatic rule induction for unknownword guessing words unknown to the lexicon present a substantial problem to nlp modules that rely on morphosyntactic information such as partofspeech taggers or syntactic parsers in this paper we present a technique for fully automatic acquisition of rules that guess possible partofspeech tags for unknown words using their starting and ending segments the learning is performed from a generalpurpose lexicon and word frequencies collected from a raw corpus three complimentary sets of wordguessing rules are statistically induced prefix morphological rules suffix morphological rules and endingguessing rules using the proposed technique unknownwordguessing rule sets were induced and integrated into a stochastic tagger and a rulebased tagger which were then applied to texts with unknown words words unknown to the lexicon present a substantial problem to nlp modules that rely on morphosyntactic information such as partofspeech taggers or syntactic parsersin this paper we present a technique for fully automatic acquisition of rules that guess possible partofspeech tags for unknown words using their starting and ending segmentsthe learning is performed from a generalpurpose lexicon and word frequencies collected from a raw corpusthree complimentary sets of wordguessing rules are statistically induced prefix morphological rules suffix morphological rules and endingguessing rulesusing the proposed technique unknownwordguessing rule sets were induced and integrated into a stochastic tagger and a rulebased tagger which were then applied to texts with unknown wordswords unknown to the lexicon present a substantial problem to nlp modules taggers that rely on information about words such as their part of speech number gender or casetaggers assign a single postag to a wordtoken provided that it is known what postags this word can take on in general and the context in which this word was useda postag stands for a unique set of morphosyntactic features as exemplified in table 1 and a word can take several postags which constitute an ambiguity class or posclass for this wordwords with their posclasses are usually kept in a lexiconfor every input wordtoken the tagger accesses the lexicon determines possible postags this word can take on and then chooses the most appropriate onehowever some domainspecific words or infrequently used morphological variants of generalpurpose words can be missing from the lexicon and thus their posclasses should be guessed by the system and only then sent to the disambiguation modulethe simplest approach to posclass guessing is either to assign all possible tags to an unknown word or to assign the most probable one which is proper singular noun for capitalized words and common singular noun otherwisethe appealing feature of these approaches is their extreme simplicitynot surprisingly their performance is quite poor if a word is assigned all possible tags the search space for the disambiguation of a single postag increases and makes it fragile if every unknown word is classified as a noun there will be no difficulties for disambiguation but accuracy will suffersuch a guess is not reliable enoughto assign capitalized unknown words the category proper noun seems a good heuristic but may not always workas argued in church who proposes a more elaborated heuristic dermatas and kokkinakis proposed a simple probabilistic approach to unknownword guessing verb present 3d person verb present non3d example take took taking taken takes take meaning example 
tag the probability that an unknown word has a particular postag is estimated from the probability distribution of hapax words in the previously seen textswhereas such a guesser is more accurate than the naive assignments and easily trainable the tagging performance on unknown words is reported to be only about 66 correct for english2 more advanced wordguessing methods use word features such as leading and trailing word segments to determine possible tags for unknown wordssuch methods can achieve better performance reaching tagging accuracy of up to 85 on unknown words for english the xerox tagger comes with a set of rules that assign an unknown word a set of possible postags on the basis of its ending segmentwe call such rules endingguessing rules because they rely only on ending segments in their predictionsfor example an endingguessing rule can predict that a word is a gerund or an adjective if it ends with ingthe endingguessing approach was elaborated in weischedel et al where an unknown word was guessed by using the probability for an unknown word to be of a particular postag given its capitalization feature and its endingbrill describes a system of rules that uses both endingguessing and more morphologically motivated rulesa morphological rule unlike an endingguessing rule uses information about morphologically related words already known to the lexicon in its predictionfor instance a morphologically motivated guessing rule can say that a word is an adjective if adding the suffix y to it will result in a wordclearly endingguessing rules have wider coverage than morphologically oriented ones but their predictions can be less accuratethe major topic in the development of wordpos guessers is the strategy used for the acquisition of the guessing rulesa rulebased tagger described in voutilainen was equipped with a set of guessing rules that had been handcrafted using knowledge of english morphology and intuitionsa more appealing approach is automatic acquisition of such rules from available lexical resources since it is usually less laborintensive and less errorpronezhang and kim developed a system for automated learning of morphological word formation rulesthis system divides a string into three regions and infers from training examples their correspondence to underlying morphological featureskupiec describes a guessing component that uses a prespecified list of suffixes and then statistically learns the predictive properties of those endings from an untagged corpusin brill a transformationbased learner that learns guessing rules from a pretagged training corpus is outlined first the unknown words are labeled as common nouns and a list of generic transformations is definedthen the learner tries to instantiate the generic transformations with word features observed in the texta statisticalbased suffix learner is presented in schmid from a training corpus it constructs a suffix tree where every suffix is associated with its information measure to emit a particular postagalthough the learning process in these systems is fully automated and the accuracy of obtained guessing rules reaches current stateoftheart levels for estimation of their parameters they require significant amounts of specially prepared training dataa large training corpus training examples and so onin this paper we describe a novel fully automatic technique for the induction of posclassguessing rules for unknown wordsthis technique has been partially outlined in and along with a level of accuracy for the induced rules that 
is higher than any previously quoted it has an advantage in terms of quantity and simplicity of annotation of data for trainingunlike many other approaches which implicitly or explicitly assume that the surface manifestations of morphosyntactic features of unknown words are different from those of general language we argue that within the same language unknown words obey general morphological regularitiesin our approach we do not require large amounts of annotated text but employ fully automatic statistical learning using a preexisting generalpurpose lexicon mapped to a particular tag set and wordfrequency distribution collected from a raw corpusthe proposed technique is targeted to the acquisition of both morphological and endingguessing rules which then can be applied cascadingly using the most accurate guessing rules firstthe rule induction process is guided by a thorough guessingrule evaluation methodology that employs precision recall and coverage as evaluation metricsin the rest of the paper we first introduce the kinds of guessing rules to be induced and then present a semiunsupervised statistical rule induction technique using data derived from the celex lexical database finally we evaluate the induced guessing rules by removing all the hapax words from the lexicon and tagging the brown corpus by a stochastic tagger and a rulebased taggerthere are two kinds of wordguessing rules employed by our cascading guesser morphological rules and nonmorphological endingguessing rulesmorphological wordguessing rules describe how one word can be guessed given that another word is knownunlike morphological guessing rules nonmorphological rules do not require the base form of an unknown word to be listed in the lexiconsuch rules guess the posclass for a word on the basis of its ending or leading segments alonethis is especially important when dealing with uninflected words and domainspecific sublanguages where many highly specialized words can be encounteredin english as in many other languages morphological word formation is realized by affixation prefixation and suffixationthus in general each kind of guessing rule can be further subcategorized depending on whether it is applied to the beginning or tail of an unknown wordto mirror this classification we will introduce a general schemata for guessing rules and a guessing rule will be seen as a particular instantiation of this schemataa guessingrule schemata is a structure g xbe s m iclass 4zclass where for example the rule eied y says that if there is an unknown word which ends with ied we should strip this ending from it and append the string y to the remaining partif we then find this word in the lexicon as we conclude that the unknown word is of the category thus for instance if the word specified was unknown to the lexicon this rule first would try to segment the required ending ied then add to the result the mutative segment y and if the word specify was found in the lexicon as the unknown word specified would be classified as since the mutative segment can be an empty string regular morphological formations can be captured as wellfor instance the rule says that if segmenting the prefix un from an unknown word results in a word that is found in the lexicon as a past verb and participle we conclude that the unknown word is an adjective this rule will for instance correctly classify the word unscrewed if the word screwed is listed in the lexicon as when setting the s segment to an empty string and the m segment to a nonempty string the 
schemata allows for cases when a secondary form is listed in the lexicon and the base form is notfor instance the rule equotquot ed says that if adding the segment ed to the end of an unknown word results in a word that is found in the lexicon as a past verb and participle then the unknown word is a base or non3d present verb the general schemata can also capture endingguessing rules if the class is set to be quotvoidquot this indicates that no stem lookup is requirednaturally the mutative segment of such rules is always set to an empty stringfor example an endingguessing rule eing quot says that if a word ends with ing it can be an adjective a noun or a gerundunlike a morphological rule this rule does not check whether the substring preceding the ingending is listed in the lexicon with a particular posclassthe proposed guessingrule schemata is in fact quite similar to the set of generic transformations for unknownword guessing developed by brill there are however three major differences brill system has two transformations that our schemata do not capture when a particular character appears in a word and when a word appears in a particular contextthe latter transformation is in fact due to the peculiarities of brill tagging algorithm and in other approaches is captured at the disambiguation phase of the tagger itselfthe former feature is indirectly captured in our approachit has been noticed that capitalized and hyphenated words have a different distribution from other wordsour morphological rules account for this difference by checking the stem of the wordthe endingguessing rules on the other hand do not use information about stemsthus if the ending s predicts that a word can be a plural noun or a 3d form of a verb the information that this word was capitalized can narrow the considered set of postags to plural proper nounwe therefore decided to collect endingguessing rules separately for capitalized words hyphenated words and all other wordsin our experiments we restricted ourselves to the production of six different guessingrule sets which seemed most appropriate for english as already mentioned we see features that our guessingrule schemata is intended to capture as general language regularities rather than properties of rare or corpusspecific words onlythis significantly simplifies training data requirements we can induce guessing rules from a generalpurpose lexiconfirst we no longer depend on the size or even existence of an annotated training corpussecond we do not require any annotation to be done for the training instead we reuse the information stated in the lexicon which we can automatically map to a particular tag set that a tagger is trained towe also use the actual frequencies of word usage collected from a raw corpusthis allows for the discrimination between rules that are no longer productive and rules that are productive in reallife textsfor guessing rules to capture general language regularities the lexicon should be as general as possible and largethe corresponding corpus should also be large enough to obtain reliable estimates of wordfrequency distribution for at least 1000015000 wordssince a word can take on several different postags in the lexicon it can be represented as a stringposclass record where the posclass is a set of one or more postagsfor instance the entry for the word book which can be a noun or a verb would look like book thus the nth entry of the lexicon can be represented as w c where w is the surface lexical form and c is its posclassdifferent lexicon 
entries can share the same posclass but they cannot share the same surface lexical formin our experiments we used a lexicon derived from celex a large multilingual database that includes extensive lexicons of english dutch and germanwe constructed an english lexicon of 72136 word forms with morphological features which we then mapped into the penn treebank tag set the most frequent openclass tags of this tag set are shown in table 1wordfrequency distribution was estimated from the brown corpus which reflects multidomain language useas usual we separated the test sample from the training samplehere we followed the suggestion that the unknown words actually are quite similar to words that occur only once in the corpus we put all the hapax words from the brown corpus that were found in the celexderived lexicon into the test collection and all other words from the celexderived lexicon into the training lexiconin the test lexicon we also included the hapax words not found in the celexderived lexicon assigning them the postags they had in the brown corpusthen we filtered out words shorter than four characters nonwords such as numbers or alphanumerals which usually are handled at the tokenization phase and all closedclass words which we assume will always be present in the lexiconthus after all these transformations we obtained a lexicon of 59268 entries for training and the test lexicon of 17868 entriesour guessingrule induction technique uses the training and test data prepared as described above and can be seen as a sampling for the best performing rule set from a collection of automatically produced rule setshere is a brief outline of its major phases for the extraction of the initial sets of prefix and suffix morphological guessing rules we define the operator vn where the index n specifies the length of the mutative ending of the main wordthus when the index n is set to 0 the result of the application of the vo operator will be a morphological rule with no mutative segmentthe vi operator will extract the rules with the alterations in the last letter of the main wordwhen the v operator is applied to a pair of entries from the lexicon first it segments the last n characters of the shorter word and stores this in the m element of the rulethen it tries to segment an affix by subtracting the shorter word without the mutative ending from the longer word if the subtraction results in an nonempty string and the mutative segment is not duplicated in the affix the system creates a morphological rule with the posclass of the shorter word as the iclass the posclass of the longer word as the rclass and the segmented affix itself in the s fieldfor example booked vo book eed quotquot advisable vi advise elable quotequot the v operator is applied to all possible pairs of lexical entries sequentially and if a rule produced by such an application has already been extracted from another pair its frequency count is incrementedthus prefix and suffix morphological rules together with their frequencies are producednext we cut out the most infrequent rules which might bias further learningto do that we eliminate all the rules with frequency f less than a certain threshold 0 which usually is set quite low 24such filtering reduces the rule sets more than tenfoldto collect the endingguessing rules we set the upper limit on the ending length equal to five characters and thus collect from the lexicon all possible wordendings of length 1 2 3 4 and 5 together with the posclasses of the words in which these endings 
appearedwe also set the minimum length of the remaining substring to three characterswe define the unary operator a which produces a set of endingguessing rules from a word in the lexicon for instance from a lexicon entry different is incrementedthen the infrequent rules with f 19 are eliminated from the endingguessing rule setafter applying the a and v operations to the training lexicon we obtained rule collections of 4000050000 entriesfiltering out the rules with frequency counts of 1 reduced the collections to 50007000 entriesof course not all acquired rules are equally good at predicting word classes some rules are more accurate in their guesses and some rules are more frequent in their applicationfor every rule acquired we need to estimate whether it is an effective rule worth retaining in the working rule setto do so we perform a statistical experiment as follows we take each rule from the extracted rule sets one by one take each wordtype from the training lexicon and guess its posclass using the rule if the rule is applicable to the wordfor example if a guessing rule strips off a particular suffix and a current word from the lexicon does not have this suffix we classify that word and the rule as incompatible and the rule as not applicable to that wordif a rule is applicable to a word we compare the result of the guess with the information listed in the lexiconif the guessed class is the same as the class stated in the lexicon we count it as a hit or success otherwise it is a failurethen since we are interested in the application of the rules to wordtokens in the corpus we multiply the result of the guess by the corpus frequency of the wordif we keep the sample space for each rule separate from the others we have a binomial experimentthe value of a guessing rule closely correlates with its estimated proportion of success which is the proportion of all positive outcomes of the rule application to the total number of the trials which are in fact the number of all the word tokens that are compatible to the rule in the corpus x number of successful guesses the p estimate is a good indicator of the rule accuracy but it frequently suffers from large estimation error due to insufficient training datafor example if a rule was found to apply just once and the total number of observations was also one its estimate p has the maximal value but clearly this is not a very reliable estimatewe tackle this problem by calculating the lower confidence limit 711 for the rule estimate which can be seen as the minimal expected value of p for the rule if we were to draw a large number of samplesthus with a certain confidence a we can assume that if we used more training data the rule estimate 19 would be not worse than the irkthe rule estimate then will be taken at its lowest possible value which is the lrl limit itselffirst we adjust the rule estimate so that we have no zeros in positive or negative outcome probabilities by adding some floor values to the numerator and denominator where t2 is a coefficient of the tdistributionit has two parameters a the level of confidence and df the number of degrees of freedom which is one less than the sample size td2 can be looked up in the tables for the tdistribution listed in every textbook on statisticswe adopted 90 confidence for which tdcc_0902todf05 takes values depending on the sample size as in figure 1using irl instead of 5 for rule scoring favors higher estimates obtained over larger samples even if one rule has a high estimate value but that estimate was 
obtained over a small sample another rule with a lower estimate value but obtained over a large sample might be valued higher by rlthis rulescoring function resembles the one used by tzoukermann radev and gale for scoring posdisambiguation rules for the french taggerthe main difference between the two functions is that there the t value was implicitly assumed to be 1 which corresponds to a confidence level of 68 on a very large sampleanother important consideration for rating a wordguessing rule is that the longer the affix or ending of this rule the more confident we are that it is not a coincidental one even on small samplesfor example if the estimate for the wordending o was obtained over a sample of five words and the estimate for the wordending fulness was also obtained over a sample of five words the latter is more representative even though the sample size is the samethus we need to adjust the estimation error in accordance with the length of the affix or endinga good way to do this is to decrease it proportionally to a value that increases along with the increase of the lengtha suitable solution is to use the logarithm of the affix length when the length of s is 1 the estimation error is not changed since log is 0for the rules with an affix or ending length of 2 the estimation error is reduced by 1 log 13 for the length 3 this will be 1 log 148 etcthe longer the length the smaller the sample that will be considered representative enough for a confident rule estimationsetting the threshold at a certain level we include in the working rule sets only those rules whose scores are higher than the thresholdthe method for finding the optimal threshold is based on empirical evaluations of the rule sets and is described in section 34usually the threshold is set in the range of 6580 points and the rule sets are reduced down to a few hundred entriesfor example when we set the threshold to 75 points the obtained endingguessing rule collection comprised 1876 rules the suffix rule collection without mutation comprised 591 rules the suffix rule collection with mutation comprised 912 entries and the prefix rule collection comprised 235 rulestable 2 shows the highestrated rules from the induced prefix and suffix rule setsin general it looks as though the induced morphological guessing rules largely consist of the standard rules of english morphology and also include a small proportion of rules that do not belong to the known morphology of englishfor instance the suffix rule a et quot does not stand for any wellknown morphological rule but its prediction is as good as those of the standard morphological rulesthe same situation can be seen with the prefix rule b st quotquot which is quite predictive but at the same time is not a standard english morphological rulethe endingguessing rules naturally include some proper english suffixes but mostly they are simply highly predictive ending segments of wordsrules which have scored lower than the threshold are merged together into more general rulesthese new rules if they score above the threshold can also be included in the working rule setswe merge together two rules if they scored below the threshold and have the same affix mutative segment and initial class mutative segment and the initial class into one rule with the resulting class being the union of the two merged resulting classesfor example lexicon entry and guesser categorization for developed the score of the resulting rule will be higher than the scores of the individual rules since the number 
of positive observations increases and the number of the trials remains the sameafter a successful application of the ed operator the resulting general rule is substituted for the two merged onesto perform such rule merging over a rule set the rules that have not been included into the working rule set are first sorted by their score and the rules with the best scores are merged firstafter each successful merging the resulting rule is rescoredthis is done recursively until the score of the resulting rule does not exceed the threshold at which point it is added to the working rule setsthis process is applied until no merges can be done to the rules that scored poorlyin our experiment we noticed that the merging added 3040 new rules to the working rule sets and therefore the final number of rules for the induced sets were prefix 348 suffix 975 suffixl 1263 and ending 2196there are two important questions that arise at the rule acquisition stage how to choose the scoring threshold 0 and what the performance of the rule sets produced with different thresholds isthe task of assigning a set of pustags to a word is actually quite similar to the task of document categorization where a document is assigned a set of descriptors that represent its contentsthere are a number of standard parameters used for measuring performance on this kind of taskfor example suppose that a word can take on one or more pustags from the set of openclass postags to see how well the guesser performs we can compare the results of the guessing with the pustags known to be true for the word let us take for instance a lexicon entry developed suppose that the guesser categorized it as developed we can represent this situation as in figure 2the performance of the guesser can be measured in the interpretation of these percentages is by no means straightforward as there is no straightforward way of combining these different measures into a single onefor example these measures assume that all combinations of pipstags will be equally hard to disambiguate for the tagger which is not necessarily the caseobviously the most important measure is recall since we want all possible categories for a word to be guessedprecision seems to be slightly less important since the disambiguator should be able to handle additional noise but obviously not in large amountscoverage is a very important measure for a rule set since a rule set that can guess very accurately but only for a tiny proportion of words is of questionable valuethus we will try to maximize recall first then coverage and finally precisionwe will measure the aggregate by averaging over measures per word ie for every single word from the test collection the precision and recall of the guesses are calculated and then we average over these valuesto find the optimal threshold for the production of a guessing rule set we generated a number of similar rule sets using different thresholds and evaluated them against the training lexicon and the test lexicon of unseen 17868 hapax wordsevery word from the two lexicons was guessed by a rule set and the results were compared with the information the word had in the lexiconfor every application of a rule set to a word we computed the precision and recall and then using the total number of guessed words we computed the coveragewe noticed certain regularities in the behavior of the metrics in response to the change of the threshold recall improves as the threshold increases while coverage drops proportionallythis is not surprising the higher the 
threshold the fewer the inaccurate rules included in the rule set but at the same time the fewer the words that can be handledan interesting behavior is shown by precision first it grows proportionally along with the increase of the threshold but then at high thresholds it decreasesthis means that among very confident rules with very high scores there are many quite general onesthe best thresholds were obtained in the range of 7080 pointstable 3 displays the metrics for the bestscored rule setsas the baseline standard we took the endingguessing rule set supplied with the xerox tagger when we compared the xerox ending guesser with the induced endingguessing rule set we saw that its precision was about 6 poorer and most importantly it could handle 6 fewer unknown wordsfinally we measured the performance of the cascading application of the induced rule sets when the morphological guessing rules were applied before the endingguessing rules we detected that the cascading application of the morphological rule sets together with the endingguessing rules increases the overall precision of the guessing by about 8this made the improvement over the baseline xerox guesser 13 in precision and 7 in coverage on the test samplethe direct evaluation phase gave us a basis for setting the threshold to produce the bestperforming rule setsthe task of unknownword guessing is however a subtask of the overall partofspeech tagging processour main interest is in how the advantage of one rule set over another will affect the tagging performancetherefore we performed an evaluation of the impact of the word guessers on tagging accuracyin this evaluation we used the cascading guesser with two different taggers a c implemented bigram hmm tagger akin to one described in kupiec and the rulebased tagger of brill because of the similarities in the algorithms with the lisp implemented xerox tagger we could directly use the xerox guessing rule set with the hmm taggerbrill tagger came pretrained on the brown corpus and had a corresponding guessing componentthis gave us a searchspace of four basic combinations the hmm tagger equipped with the xerox guesser the brill tagger with its original guesser the hmm tagger with our cascading guesser and the brill tagger with the cascading guesserwe also tried hybrid tagging using the output of the hmm tagger as the input to brill final state tagger but it gave poorer results than either of the taggers and we decided not to consider this tagging optionwe evaluated the taggers with the guessing components on all fifteen subcorpora of the brown corpus one after anotherthe hmm tagger was trained on the brown corpus in such a way that the subcorpus used for the evaluation was not seen at the training phaseall the hapax words and capitalized words with frequency less than 20 were not seen at the training of the cascading guesserthese words were not used in the training of the tagger eitherthis means that neither the hmm tagger nor the cascading guesser had been trained on the texts and words used for evaluationwe do not know whether the same holds for the brill tagger and the brill and xerox guessers since we took them pretrainedfor words that the guessing components failed to guess we applied the standard method of classifying them as common nouns if they were not capitalized inside a sentence and proper nouns otherwisewhen we used the cascading guesser with the brill tagger we interfaced them on the level of the lexicon we guessed the unknown words before the tagging and added them to the 
lexicon listing the most likely tags first as requiredhere we want to clarify that we evaluated the overall results of the brill tagger rather than just its unknownword tagging componentanother point to mention is that since we included the guessed words in the lexicon the brill tagger could use for the transformations all relevant postags for unknown wordsthis is quite different from the output of the original brill guesser which provides only one postag for an unknown wordin our tagging experiments we measured the error rate of tagging on unknown words using different guesserssince arguably the guessing of proper nouns is easier than is the guessing of other categories we also measured the error rate for the subcategory of capitalized unknown words separatelythe error rate for a category of words was calculated as the number of wrongly tagged words from set x divided by the total number of words in set x thus for instance the error rate of tagging the unknown words is the proportion of the mistagged unknown words to all unknown wordsto see the distribution of the workload between different guessing rule sets we also measured the coverage of a guessing rule set we collected the error and coverage measures for each of the fifteen subcorpora of the brown corpus separately and using the bootstrap replicate technique we calculated the mean and the standard error for each combination of the taggers with the guessing componentsin the brown corpus supplied with the penn treebank quite often obvious proper nouns as for instance summerdale russia or rochester were marked as common nouns and sometimes lowercased common nouns such as business or church were marked as proper nounsthus we decided not to count as an error the mismatch of the nn and nnp tagsusing the hmm tagger with the lexicon containing all the words from the brown corpus we obtained the error rate of 4.003093 with the standard error of 0.155599this agrees with the results on the closed dictionary obtained by other researchers for this class of the model on the same corpus the brill tagger showed somewhat better results with the error rate of 3.327366 and the standard error of 0.123903although our primary goal was not to compare the taggers themselves but rather their performance with the guessing components we attribute the difference in their performance to the fact that brill tagger uses the information about the most likely tag for a word whereas the hmm tagger did not have this information and instead used the priors for a set of postags when we removed from the lexicon all the hapax words and following the recommendation of church all the capitalized words with frequency less than 20 we obtained some 51522 unknown wordtokens out of more than a million wordtokens in the brown corpuswe tagged the fifteen subcorpora of the brown corpus by the four combinations of the taggers and the guessers using the lexicon of 22260 wordtypestable 4 displays the tagging results on the unknown words obtained by the four different combinations of taggers and guessersit shows the overall error rate on unknown words and also displays the distribution of the error rate and the coverage between unknown proper nouns and the other unknown wordsindeed the error rate on the proper nouns was much smaller than on the rest of the unknown words which means that they are much easier to guesswe can also see a difference in the distribution of the unknown words using different taggersthis can be accounted for by the fact that the unguessed capitalized words were taken by
default to be proper nouns and that the brill tagger and the hmm tagger had slightly different strategies to apply to the first word of a sentencethe cascading guesser outperformed the other two guessers in general and most importantly in the nonproper noun category where it had an advantage of 65 over brill guesser and about 87 over xerox guesserin our experiments the category of unknown proper nouns had a larger share than we expect in real life because all the capitalized words with frequency less than 20 were taken out of the lexiconthe cascading guesser also helped to improve the accuracy on unknown proper nouns by about 1 in comparison to brill guesser and about 30 in comparison to xerox guesserthe cascading guesser outperformed the other two guessers on every subcorpus of the brown corpustable 5 shows the distribution of the workload and the tagging accuracy among the different rule sets of the cascading guesserthe default assignment of the nn tag to unguessed words performed very poorly having the error rate of 44when we compared this distribution to that of the xerox guesser we saw that the accuracy of the xerox guesser itself was only about 65 lower than that of the cascading guesser and the fact that it could handle 6 fewer unknown words than the cascading guesser resulted in the increase of incorrect assignments by the default strategythere were three types of mistaggings on unknown words detected in our experimentsmistagging of the first type occurred when a guesser provided a broader posclass for an unknown word than a lexicon would and the tagger had difficulties with its disambiguationthis was especially the case with the words that were guessed as nounadjective but in fact act only as one of them another highly ambiguous group is the ing words which in general can act as nouns adjectives and gerunds and only direct lexicalization can restrict the searchspace as in the case of the word seeing which cannot act as an adjectivethe second type of mistagging was caused by incorrect assignments by the guesserusually this was the case with irregular words such as cattle or data which were wrongly guessed to be singular nouns but in fact were plural nouns we also did not include the quotforeign wordquot category in the set of tags to guess but this did not do too much harm because these words were very infrequent in the textsand the third type of mistagging occurred when the wordpos guesser assigned the correct posclass to a word but the tagger still disambiguated this class incorrectlythis was the most frequent type of error which accounted for more than 60 of the mistaggings on unknown wordswe have presented a technique for fully automated statistical acquisition of rules that guess possible postags for words unknown to the lexiconthis technique does not require specially prepared training data and uses for training a preexisting generalpurpose lexicon and word frequencies collected from a raw corpususing such training data three types of guessing rules are induced prefix morphological rules suffix morphological rules and endingguessing rulesevaluation of tagging accuracy on unknown words using texts and words unseen at the training phase showed that tagging with the automatically induced cascading guesser was consistently more accurate than previously quoted results known to the author tagging accuracy on unknown words using the cascading guesser was 877887the cascading guesser outperformed the guesser supplied with the xerox tagger and the guesser supplied with brill tagger 
both on unknown proper nouns and on the rest of the unknown words where it had an advantage of 6585when the unknown words were made known to the lexicon the accuracy of tagging was 936943 which makes the accuracy drop caused by the cascading guesser to be less than 6 in generalanother important conclusion from the evaluation experiments is that the morphological guessing rules do improve guessing performancesince they are more accurate than endingguessing rules they were applied first and improved the precision of the guesses by about 8this resulted in about 2 higher accuracy in the tagging of unknown wordsthe endingguessing rules constitute the backbone of the guesser and cope with unknown words without clear morphological structurefor instance discussing the problem of unknown words for the robust parsing bod writes quotnotice that richer morphological annotation would not be of any help here the words quotreturnquot quotstopquot and quotcostquot do not have a morphological structure on the basis of which their possible lexical categories can be predictedquot when we applied the endingguessing rules to these words the words return and stop were correctly classified as nounverbs and only the word cost failed to be guessed by the rulesthe acquired guessing rules employed in our cascading guesser are in fact of a standard nature which in some form or other is present in other wordpos guessersfor instance our endingguessing rules are akin to those of xerox and the morphological rules resemble some rules of brill but ours use more constraints and provide a set of all possible tags for a word rather than a single best tagthe two additional types of features used by brill guesser are implicitly represented in our approach as well one of the brill schemata checks the context of an unknown wordin our approach we guess the words using their features only and provide several possibilities for a word then at the disambiguation phase the context is used to choose the right tagas for brill schemata that checks the presence of a particular character in an unknown word we capture a similar feature by collecting the endingguessing rules for proper nouns and hyphenated words separatelywe believe that the technique for the induction of the endingguessing rules is quite similar to that of xeroxl or schmid but differs in the scoring and pruning methodsthe major advantage of the proposed technique can be seen in the cascading application of the different sets of guessing rules and in far superior training datawe use for training a preexisting generalpurpose lexiconthis has three advantages the size of the training lexicon is large and does not depend on the size or even the existence of the annotated corpusthis allows for the induction of more rules than from a lexicon derived from an annotated corpusfor instance the ending guesser of xerox includes 536 rules whereas our ending guesser includes 2196 guessing rules the information listed in a generalpurpose lexicon can be considered to be of better quality than that derived from an annotated corpus since it lists all possible readings for a word rather than only those that happen to occur in the corpuswe also believe that generalpurpose lexicons contain less erroneous information than those derived from annotated corpora the amount of work required to prepare the training lexicon is minimal and does not require any additional manual annotationour experiments with the lexicon derived from the celex lexical database and word frequencies derived from the brown 
corpus resulted in guessing rule sets that proved to be domain and corpusindependent producing similar results on texts of different originsan interesting byproduct of the proposed ruleinduction technique is the automatic discovery of the template morphological rules advocated in mikheev and liubushkina the induced morphological guessing rules turned out to consist mostly of the expected prefixes and suffixes of english and closely resemble the rules employed by the ispell unix spellcheckerthe rule acquisition and evaluation methods described here are implemented as a modular set of c and awk tools and the guesser is easily extendible to sublanguagespecific regularities and retrainable to new tag sets and other languages provided that these languages have affixational morphologyi would like to thank the anonymous referees for helpful comments on an earlier draft of this paper
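The rule-scoring procedure described above, which takes the proportion of successful guesses, adjusts it away from zero and one with floor values, subtracts an estimation error scaled by a t-coefficient, and shrinks that error by one plus the logarithm of the affix length, can be sketched as follows. The floor constants, the abbreviated one-sided 90% t-table, and the use of base-10 logarithms are assumptions chosen to be consistent with the worked figures in the text (about 1.3 for length two and 1.48 for length three), not the exact values of the paper.

```python
import math

# one-sided 90% t-distribution coefficients by degrees of freedom;
# illustrative values standing in for the table referred to as figure 1
T_090 = {1: 3.08, 2: 1.89, 3: 1.64, 4: 1.53, 5: 1.48, 10: 1.37, 20: 1.33, 30: 1.31}

def t_coeff(trials):
    df = max(trials - 1, 1)
    if df >= 30:
        return 1.28          # large-sample limit, close to the normal quantile
    nearest = min(T_090, key=lambda k: abs(k - df))
    return T_090[nearest]

def rule_score(successes, trials, affix_len):
    """Lower confidence limit of a rule's success proportion, with the
    estimation error shrunk by 1 + log10(affix length) so that longer affixes
    need smaller samples to be trusted."""
    # floor values keep the estimate away from 0 and 1; the exact constants are an assumption
    p = (successes + 0.5) / (trials + 1.0)
    err = t_coeff(trials) * math.sqrt(p * (1.0 - p) / trials)
    return p - err / (1.0 + math.log10(affix_len))
```

Rules scoring above a threshold of roughly 0.65 to 0.80 (the 65 to 80 points quoted in the text) would be retained in the working rule sets, and rules falling below it become candidates for merging.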
J97-3003
Automatic rule induction for unknown-word guessing. Words unknown to the lexicon present a substantial problem to NLP modules that rely on morphosyntactic information, such as part-of-speech taggers or syntactic parsers. In this paper we present a technique for fully automatic acquisition of rules that guess possible part-of-speech tags for unknown words using their starting and ending segments. The learning is performed from a general-purpose lexicon and word frequencies collected from a raw corpus. Three complementary sets of word-guessing rules are statistically induced: prefix morphological rules, suffix morphological rules, and ending-guessing rules. Using the proposed technique, unknown-word-guessing rule sets were induced and integrated into a stochastic tagger and a rule-based tagger, which were then applied to texts with unknown words. Our model LT POS performs both sentence identification and POS tagging. Our LT POS is a statistical combined part-of-speech tagger and sentence boundary disambiguation module.
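Building on the ending-rule sketch above, the cascading application described in this record can be outlined as follows. The rule representations are simplified assumptions made for illustration: prefix and suffix morphological rules are keyed by an affix together with the tag set required of the known remaining stem, while ending-guessing rules are keyed by the ending alone; only the order of application (morphological rules first, ending rules as a fallback) follows the text.

```python
# A simplified sketch of the cascading guesser: morphological rules (which
# check a known stem against the lexicon) are tried before ending-guessing
# rules, with the full open-class tag set as a last resort.

def cascading_guess(word, lexicon, prefix_rules, suffix_rules, ending_rules,
                    open_class=frozenset({"NN", "VB", "JJ", "RB"})):
    """Return a set of candidate POS tags for a word not in the lexicon."""
    # 1. prefix morphological rules, e.g. "un" + known adjective -> adjective
    for prefix, (stem_tags, guessed_tags) in prefix_rules.items():
        if word.startswith(prefix) and lexicon.get(word[len(prefix):]) == stem_tags:
            return guessed_tags
    # 2. suffix morphological rules, e.g. known verb + "ed" -> VBD/VBN
    for suffix, (stem_tags, guessed_tags) in suffix_rules.items():
        if word.endswith(suffix) and lexicon.get(word[:-len(suffix)]) == stem_tags:
            return guessed_tags
    # 3. ending-guessing rules: no stem check, just the word's ending
    for n in range(min(5, len(word) - 1), 0, -1):
        if word[-n:] in ending_rules:
            return ending_rules[word[-n:]]
    # 4. nothing matched: let the tagger choose among all open-class tags
    return open_class

# example rule format (hypothetical): known verb stem + "ed" -> VBD or VBN
# suffix_rules = {"ed": (frozenset({"VB"}), frozenset({"VBD", "VBN"}))}
```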
stochastic attributevalue grammars probabilistic analogues of regular and contextfree grammars are well known in computational linguistics and currently the subject of intensive research to date however no satisfactory probabilistic analogue of attributevalue grammars has been proposed previous attempts have failed to define an adequate parameterestimation algorithm in the present paper i define stochastic attributevalue grammars and give an algorithm for computing the maximumlikelihood estimate of their parameters the estimation algorithm is adapted from della pietra della pietra and lafferty to estimate model parameters it is necessary to compute the expectations of certain functions under random fields in the application discussed by della pietra della pietra and lafferty gibbs sampling can be used to estimate the needed expectations the fact that attributevalue grammars generate constrained languages makes gibbs sampling inapplicable but i show that sampling can be done using the more general metropolishastings algorithm probabilistic analogues of regular and contextfree grammars are well known in computational linguistics and currently the subject of intensive researchto date however no satisfactory probabilistic analogue of attributevalue grammars has been proposed previous attempts have failed to define an adequate parameterestimation algorithmin the present paper i define stochastic attributevalue grammars and give an algorithm for computing the maximumlikelihood estimate of their parametersthe estimation algorithm is adapted from della pietra della pietra and lafferty to estimate model parameters it is necessary to compute the expectations of certain functions under random fieldsin the application discussed by della pietra della pietra and lafferty gibbs sampling can be used to estimate the needed expectationsthe fact that attributevalue grammars generate constrained languages makes gibbs sampling inapplicable but i show that sampling can be done using the more general metropolishastings algorithmstochastic versions of regular grammars and contextfree grammars have received a great deal of attention in computational linguistics for the last several years and basic techniques of stochastic parsing and parameter estimation have been known for decadeshowever regular and contextfree grammars are widely deemed linguistically inadequate standard grammars in computational linguistics are attributevalue grammars of some varietybefore the advent of statistical methods regular and contextfree grammars were considered too inexpressive for serious consideration and even now the reliance on stochastic versions of the lessexpressive grammars is often seen as an expedient necessitated by the lack of an adequate stochastic version of attributevalue grammarsproposals have been made for extending stochastic models developed for the regular and contextfree cases to grammars with constraintsbrew sketches a probabilistic version of headdriven phrase structure grammar he proposes a stochastic process for generating attributevalue structures that is directed acyclic graphs a dag is generated starting from a single node labeled with the most general typeeach type s has a set of maximal subtypes t1 tnto expand a node labeled s one chooses a maximal subtype t stochasticallyone then considers equating the current node with other nodes of type t making a stochastic yesno decision for eachequating two nodes creates a reentrancyif the current node is equated with no other node one proceeds to expand iteach 
maximal type introduces types u1un corresponding to values of attributes one creates a child node for each introduced type and then expands each child in turna limitation of this approach is that it permits one to specify only the average rate of reentrancies it does not permit one to specify more complex context dependencieseisele takes a logicprogramming approach to constraint grammarshe assigns probabilities to proof trees by attaching parameters to logic program clauseshe presents the following logic program as an example the probability of a proof tree is defined to be proportional to the product of the probabilities of clauses used in the proofnormalization is necessary because some derivations lead to invalid proof treesfor example the derivation is invalid because of the illegal assignment b c both brew and eisele associate weights with analogues of rewrite rulesin brew case we can view type expansion as a stochastic choice from a finite set of rules of form x where x is the type to expand and each 6 is a sequence of introduced child typesa reentrancy decision is a stochastic choice between two rules x yes and x no where x is the type of the node being considered for reentrancyin eisele case expanding a goal term can be viewed as a stochastic choice among a finite set of rules x where x is the predicate of the goal term and each 6 is a program clause whose head has predicate xthe parameters of the models are essentially weights on such rules representing the probability of choosing 6 when making a choice of type xin these terms brew and eisele propose estimating parameters as the empirical relative frequency of the corresponding rulesthat is the weight of the rule x is obtained by counting the number of times x rewrites as 6 in the training corpus divided by the total number of times x is rewritten in the training corpusfor want of a standard term let us call these estimates empirical relative frequency estimatesto deal with incomplete data both brew and eisele appeal to the expectationmaximization algorithm applied however to erf rather than maximumlikelihood estimatesunder certain independence conditions erf estimates are maximumlikelihood estimatesunfortunately these conditions are violated when there are context dependencies of the sort found in attributevalue grammars as will be shown belowas a consequence applying the erf method to attributevalue grammars does not generally yield maximumlikelihood estimatesthis is true whether one uses them or nota method that yields the quotwrongquot estimates on complete data does not improve when them is used to extend the method to incomplete dataeisele identifies an important symptom that something is amiss with erf estimates the probability distribution over proof trees that one obtains does not agree with the frequency of proof trees in the training corpuseisele recognizes that this problem arises only where there are context dependenciesfortunately solutions to the contextdependency problem have been described in statistics machine learning and statistical pattern recognition particularly image processingthe models of interest are known as random fieldsrandom fields can be seen as a generalization of markov chains and stochastic branching processesmarkov chains are stochastic processes corresponding to regular grammars and random branching processes are stochastic processes corresponding to contextfree grammarsthe evolution of a markov chain describes a line in which each stochastic choice depends only on the state at the immediately 
preceding timepointthe evolution of a random branching process describes a tree in which a finitestate process may spawn multiple child processes at the next timestep but the number of processes and their states depend only on the state of the unique parent process at the preceding timestepin particular stochastic choices are independent of other choices at the same timestep each process evolves independentlyif we permit reentrancies that is if we permit processes to remerge we generally introduce contextsensitivityin order to remerge processes must be quotin synchquot which is to say they cannot evolve in complete independence of one anotherrandom fields are a particular class of multidimensional random processes that is processes corresponding to probability distributions over an arbitrary graphthe theory of random fields can be traced back to gibbs indeed the probability distributions involved are known as gibbs distributionsto my knowledge the first application of random fields to natural language was mark et al the problem of interest was how to combine a stochastic contextfree grammar with ngram language modelsin the resulting structures the probability of choosing a particular word is constrained simultaneously by the syntactic tree in which it appears and the choices of words at the n preceding positionsthe contextsensitive constraints introduced by the ngram model are reflected in reentrancies in the structure of statistical dependencies as in figure 1statistical dependencies under the model of mark et al in this diagram the choice of label on a node z with parent x and preceding word y is dependent on the label of x and y but conditionally independent of the label on any other nodedella pietra della pietra and lafferty also apply random fields to natural language processingthe application they consider is the induction of english orthographic constraintsinducing a grammar of possible english wordsddl describe an algorithm called improved iterative scaling for selecting informative features of words to construct a random field and for setting the parameters of the field optimally for a given set of features to model an empirical word distributionit is not immediately obvious how to use the its algorithm to equip attributevalue grammars with probabilitiesin brief the difficulty is that the its algorithm requires the computation of the expectations under random fields of certain functions in general computing these expectations involves summing over all configurations which is not possible when the configuration space is largeinstead ddl use gibbs sampling to estimate the needed expectationsgibbs sampling is possible for the application that ddl considera prerequisite for gibbs sampling is that the configuration space be closed under relabeling of graph nodesin the orthography application the configuration space is the set of possible english words represented as finite linear graphs labeled with ascii charactersevery way of changing a label that is every substitution of one ascii character for a different one yields a possible english wordby contrast the set of graphs admitted by an attributevalue grammar g is highly constrainedif one changes an arbitrary node label in a dag admitted by g one does not necessarily obtain a new dag admitted by g hence gibbs sampling is not applicablehowever i will show that a more general sampling method the metropolishastings algorithm can be used to compute the maximumlikelihood estimate of the parameters of av grammarslet us begin by examining 
stochastic contextfree grammars and asking why the natural extension of scfg parameter estimation to attributevalue grammars failsa point of terminology i will use the term grammar to refer to an unweighted grammar be it a contextfree grammar or attributevalue grammara grammar equipped with weights i will refer to as a modeloccasionally i will also use model to refer to the weights themselves or the probability distribution they definethroughout we will use the following stochastic contextfree grammar for illustrative purposeslet us call the underlying grammar g1 and the grammar equipped with weights as shown mi the probability of a given tree is computed as the product of probabilities of rules used in itfor example let x be the tree in figure 2 and let qi be the probability distribution over trees defined by model m1then in parsing we use the probability distribution qi defined by model m1 to disambiguate the grammar assigns some set of trees xi x to a sentence a and we choose that tree xi that has greatest probability qi the issue of efficiently computing the mostprobable parse for a given sentence has been thoroughly addressed in the literaturethe standard parsing techniques can be readily adapted to the randomfield models to be discussed below so i simply refer the reader to the literatureinstead i concentrate on parameter estimation which for attributevalue grammars cannot be accomplished by standard techniquesby parameter estimation we mean determining values for the weights 0in order for a stochastic grammar to be useful we must be able to compute the correct weights where by correct weights we mean the weights that best account for a training corpusthe degree to which a given set of weights accounts for a training corpus is measured by the similarity between the distribution q determined by the weights 3 and the distribution of trees x in the training corpusthe distribution determined by the training corpus is known as the empirical distributionfor example suppose we have a training corpus containing twelve trees of the four types from l shown in figure 3 where c is the count of how often the in comparing a distribution q to the empirical distribution 3 we shall actually measure dissimilarity rather than similarityour measure for dissimilarity of distributions the divergence between 5 and q at point x is the log of the ratio of p to qthe overall divergence between p and q is the average divergence where the averaging is over tree in the corpus ie point divergences 1n03q are weighted by 5 and summedfor example let qi be as before the distribution determined by model m1table 1 shows qi 17 the ratio qi 13 and the weighted point divergence 3 ln q1the sum of the fourth column is the kl divergence d between 3 and qithe third column contains qi rather than 17qi so that one can see at a glance whether qi is too large or too small but not all of l appears in the corpustwo trees are missing and they account for the missing massthese two trees are given in figure 5each of these trees has the trees from l that are missing in the training corpus probability 0 according to 5 but probability 19 according to qiintuitively the problem is this the distribution qi assigns too little weight to trees x1 and x2 and too much weight to the quotmissingquot trees call them x5 and x6yet exactly the same rules are used in x5 and x6 as are used in x1 and x2hence there is no way to increase the weight for trees x1 and x2 improving their fit to 5 without simultaneously increasing the weight for x5 and x6 making 
their fit to 13 worsethe distribution qi is the best compromise possibleto say it another way our assumption that the corpus was generated by a contextfree grammar means that any context dependencies in the corpus must be accidental the result of sampling noisethere is indeed a dependency in the corpus in figure 3 in the trees where there are two a the a always rewrite the same wayif the corpus was generated by a stochastic contextfree grammar then this dependency is accidentalthis does not mean that the contextfree assumption is wrongif we generate twelve trees at random from qi it would not be too surprising if we got the corpus in figure 3more extremely if we generate a random corpus of size 1 from qi it is quite impossible for the resulting empirical distribution to match the distribution qibut as the corpus size increases the fit between 15 and qi becomes ever betterbut what if the dependency in corpus is not accidentalwhat if we wish to adopt a grammar that imposes the constraint that both a rewrite the same waywe can impose such a constraint by means of an attributevalue grammarwe may formalize an attributevalue grammar as a contextfree grammar with attribute labels and path equationsan example is the following grammar let us call it g2 generating a dagthe grammar used is g2 node labeled with the start category of g2 namely s a node x is expanded by choosing a rule that rewrites the category of xin this case we choose rule 1 to expand the root noderule 1 instructs us to create two children both labeled athe edge to the first child is labeled and the edge to the second child is labeled 2the constraint indicates that the child of the child of x is identical to the 1 child of the 2 child of xwe create an unlabeled node to represent this grandchild of x and direct appropriately labeled edges from the children yielding we proceed to expand the newly introduced nodeswe choose rule 3 to expand the first a nodein this case a child with edge labeled i already exists so we use it rather than creating a new onerule 3 instructs us to label this child a yielding now we expand the second a nodeagain we choose rule 3we are instructed to label the 1 child a but it already has that label so we do not need to do anythingfinally in the only remaining node is the bottommost node labeled asince its label is a terminal category it does not need to be expanded and we are donelet us back up to againhere we were free to choose rule 4 instead of rule 3 to expand the righthand a noderule 4 instructs us to label the 1 child b but we cannot inasmuch as it is already labeled athe derivation fails and no dag is generatedthe language l is the set of dags produced by successful derivations as shown in figure 7now we face the question of how to attach probabilities to grammar g2the natural extension of the method we used for contextfree grammars is the following associate a weight with each of the six rules of grammar g2for example let m2 be the model consisting of g2 plus weights let 02 be the weight that m2 assigns to dag x it is defined to be the product of the weights of the rules used to generate xfor example the weight 02 assigned to tree xi of rule applications in a dag generated by g2the weight of the dag is the product of the weights of rule applications hence 02 010303 12 23 23 29observe that 02 ov3i which is to say 134moreover since 3 1 it does not hurt to include additional factors ex for those i where f 0that is we can define the dag weight 0 corresponding to rule weights on generally asi1 the next 
question is how to estimate weightslet us consider what happens when we use the erf methodlet us assume a corpus distribution for the dags in figure 7 analogous to the distribution in figure 3 using the erf method we estimate rule weights as in table 4this table is identical to the one given earlier in the contextfree casewe arrive at the same weights m2 we considered above defining dag weights 2but at this point a problem arises 02 is not a probability distributionunlike in the contextfree case the four dags in figure 7 constitute the entirety of lthis time there are no missing dags to account for the missing probability massthere is an obvious quotfixquot for this problem we can simply normalize has an obvious fix however something has actually gone very wrongthe erf method yields the best weights only under certain conditions that we inadvertently violated by changing l and reapportioning probability via normalizationin point of fact we can easily see that the erf weights in table 4 are not the best weights for our example grammarconsider the alternative model mk given in figure 9 defining probability distribution 11an alternative model mthese weights are proper in the sense that weights for rules with the same lefthand side sum to onethe reader can verify that 0 sums to z 33n and that q is in short in the av case the erf weights do not yield the best weightsthis means that the erf method does not converge to the correct weights as the corpus size increasesif there are genuine dependencies in the grammar the erf method converges systematically to the wrong weightsfortunately there are methods that do converge to the right weightsthese are methods that have been developed for random fieldsa random field defines a probability distribution over a set of labeled graphs sz called configurationsin our case the configurations are the dags generated by the grammar ie c2 lthe weight assigned to a configuration is the product of the weights assigned to selected features of the configurationwe use the notation where is its frequency function that is f is the number of times that feature i occurs in configuration xi use the term feature here as it is used in the machine learning and statistical pattern recognition literature not as in the constraint grammar literature where feature is synonymous with attributein my usage dag edges are labeled with attributes not featuresfeatures are rather like geographic features of dags a feature is some larger or smaller piece of structure that occurspossibly at more than one placein a dagthe probability of a configuration is proportional to its weight and is obtained by normalizing the weight distributionif we identify the features of a configuration with local treesequivalently with applications of rewrite rulesthe random field model is almost identical to the model we considered in the previous sectionthere are two important differencesfirst we no longer require weights to sum to one for rules with the same lefthand sidesecond the model does not require features to be identified with rewrite ruleswe use the grammar to define the set of configurations s2 l but in defining a probability distribution over l we can choose features of dags however we wishlet us consider an examplelet us continue to assume grammar g2 generating the language in figure 7 and let us continue to assume the empirical distribution in but now rather than taking rule applications to be features let us adopt the two features in figure 10for purpose of illustration take feature 1 to have 
weight of features 1 and 2 in dags generated by g2 and the computation of dag weights 0 and dag probabilities q recreate the empirical distribution using fewer features than beforeintuitively we need only use as many features as are necessary to distinguish among trees that have different empirical probabilitiesthis added flexibility is welcome but it does make parameter estimation more involvednow we must not only choose values for weights we must also choose the features that weights are to be associated withwe would like to do both in a way that permits us to find the best model in the sense of the model that minimizes the kullbackleibler distance with respect to the empirical distributionthe its algorithm provides a method to do precisely thatin outline the its algorithm is as follows for the sake of concreteness let us take features to be labeled subdagsin step 2 of the algorithm we do not consider every conceivable labeled subdag but only the atomic subdags and those complex subdags that can be constructed by combining features already in the field or by combining a feature in the field with some atomic featurewe also limit our attention to features that actually occur in the training corpusin our running example the atomic features are as shown in figure 12features can be combined by adding connecting arcs as shown in figure 13 for examplecombining features to create more complex featuresfield induction begins with the null fieldwith the corpus we have been assuming the null field takes the form in figure 14no dag x has any features so 0 n i3 is 003the aim of feature selection is to choose a feature that reduces this divergence as much as possiblethe astute reader will note that there is a problem with the null field if l is infinitenamely it is not possible to have a uniform probability mass distribution over an infinite setif each dag in an infinite set of dags is assigned a constant nonzero probability e then the total probability is infinite no matter how small e isthere are a couple of ways of dealing with the problemthe approach that ddl adopt is to assume a consistent prior distribution p over graph sizes k and a family of random fields qk representing the conditional probability q the probability of a tree is then pqall the random fields have the same features and weights differing only in their normalizing constantsi will take a somewhat different approach hereas sketched at the beginning of section 3 we can generate dags from an av grammar much as proposed by brew and eiseleif we ignore failed derivations the process of dag generation is completely analogous to the process of tree generation from a stochastic cfgindeed in the limiting case in which none of the rules contain constraints the grammar is a cfgto obtain an initial distribution we associate a weight with each rule the weights for rules with a common lefthand side summing to onethe probability of a dag is proportional to the product of weights of rules used to generate itwe estimate weights using the erf method we estimate the weight of a rule as the relative frequency of the rule in the training corpus among rules with the same lefthand sidethe resulting initial distribution is not the maximumlikelihood distribution as we knowbut it can be taken as a useful first approximationintuitively we begin with the erf distribution and construct a random field to take account of context dependencies that the erf distribution fails to capture incrementally improving the fit to the empirical distributionin this framework a 
model consists of an av grammar g whose purpose is to define a set of dags l a set of initial weights 0 attached to the rules of g the weight of a dag is the product of weights of rules used in generating itdiscarding failed derivations and renormalizing yields the initial distribution po at each iteration we select a new feature f by considering all atomic features and all complex features that can be constructed from features already in the fieldholding the weights constant for all old features in the field we choose the best weight 0 for f yielding a new distribution q1the score for feature f is the reduction it permits in d where gold is the old fieldthat is the score for f is d dwe compute the score for each candidate feature and add to the field that feature with the highest scoreto illustrate consider the two atomic features a and bgiven the null field as old field the best weight for a is 0 75 and the best weight for b is 0 1this yields q and d as in figure 15the better feature is a and a would be added to the field comparing features qa is the best distribution that can be generated by adding the feature quotaquot to the field and qb is the best distribution generable by adding the feature quotbquot if these were the only two choicesintuitively a is better than b because a permits us to distinguish the set xi x3 from the set x2 x4 the empirical probability of the former is 1314 712 whereas the empirical probability of the latter is 512distinguishing these sets permits us to model the empirical distribution better by contrast the feature b distinguishes the set xi x2 from x3 x4the empirical probability of the former is 1316 12 and the empirical probability of the latter is also 12the old field models these probabilities exactly correctly so making the distinction does not permit us to improve on the old fieldas a result the best weight we can choose for b is 1 which is equivalent to not having the feature b at allddl show that there is a unique weight a that maximizes the score for a new feature f writing qo for the distribution that results from assigning weight 13 to feature f j is the solution to the equation intuitively we choose the weight such that the expectation of f under the resulting new field is equal to its empirical expectationsolving equation for 3 is easy if l is small enough to enumeratethen the sum over l that is implicit in q3 f can be expanded out and solving for 3 is simply a matter of arithmeticthings are a bit trickier if l is too large to enumerateddl show that we can solve equation if we can estimate gold f ic for k from 0 to the maximum value of f in the training corpuswe can estimate gold if k by means of random samplingthe idea is actually rather simple to estimate how often the feature appears in quotthe average dagquot we generate a representative minicorpus from the distribution iloid and countthat is we generate dags at random in such a way that the relative frequency of dag x is gold and we count how often the feature of interest appears in dags in our generated minicorpusthe application that ddl consider is the induction of english orthographic constraints that is inducing a field that assigns high probability to quotenglishsoundingquot words and low probability to nonenglishsounding wordsfor this application gibbs sampling is appropriategibbs sampling does not work for the application to av grammars howeverfortunately there is an alternative random sampling method we can use metropolishastings samplingwe will discuss the issue in some detail 
shortlywhen a new feature is added to the field the best value for its initial weight is chosen but the weights for the old features are held constantin general however adding the new feature may make it necessary to readjust weights for all featuresthe second half of the iis algorithm involves finding the best weights for a given set of featuresthe method is very similar to the method for selecting the initial weight for a new featurelet be the old weights for the featureswe wish to compute quotincrementsquot consider the equation gold efi pfil where f epx is the total number of features of dag xthe reason for the factor et is a bit involvedvery roughly we would like to choose weights so that the expectation of f under the new field is equal to pfnow qn is where we factor z as z6zo for zo the normalization constant in qmhence qnew f 45j for all the features simultaneously not just the weight 6 for feature iwe might consider approximating qnew efj by ignoring the normalization factor and assuming that all features have the same weight as feature isince ft 66 64 we arrive at the expression on the lefthand side of equation one might expect the approximation just described to be rather poor but it is proven in della pietra della pietra and lafferty that solving equation for 6 and setting the new weight for feature i to sioi is guaranteed to improve the modelthis is the real justification for equation and the reader is referred to della pietra della pietra and lafferty for detailssolving yields improved weights but it does not necessarily immediately yield the globally best weightswe can obtain the globally best weights by iteratingset a 6a for all i and solve equation againrepeat until the weights no longer changeas with equation solving equation is straightforward if l is small enough to enumerate but not if l is largein that case we must use random samplingwe generate a representative minicorpus and estimate expectations by counting in the minicorpuswe have seen that random sampling is necessary both to set the initial weight for features under consideration and to adjust all weights after a new feature is adoptedrandom sampling involves creating a corpus that is representative of a given model distribution qto take a very simple example a fair coin can be seen as a method for sampling from the distribution q in which q 12 q 12saying that a corpus is representative is actually not a comment about the corpus itself but the method by which it was generated a corpus representative of distribution q is one generated by a process that samples from qsaying that a process m samples from q is to say that the empirical distributions of corpora generated by m converge to q in the limitfor example if we flip a fair coin once the resulting empirical distribution over is either or not the faircoin distribution but as we take larger and larger corpora the resulting empirical distributions converge to an advantage of scfgs that random fields lack is the transparent relationship between an scfg defining a distribution q and a sampler for qwe can sample from q by performing stochastic derivations each time we have a choice among rules expanding a category x we choose rule x with probability 13 where a is the weight of rule enow we can sample from the initial distribution pa by performing stochastic derivationsat the beginning of section 3 we sketched how to generate dags from an av grammar g via nondeterministic derivationswe defined the initial distribution in terms of weights 0 attached to the rules of g 
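Here is a minimal sketch of the stochastic-derivation sampler just described: each time a nonterminal is expanded, a rule is chosen with probability equal to its weight, so the relative frequencies of sampled trees converge to the distribution defined by the weights. The grammar encoding and the particular weights are illustrative rather than the paper's model; for the attribute-value case described next, derivations that violate a constraint would simply be discarded and redrawn, which is omitted here.

```python
import random

# Weighted CFG: nonterminal -> list of (right-hand side, weight); weights for
# rules sharing a left-hand side sum to one.  Weights here are illustrative.
GRAMMAR = {
    "S": [(("A", "A"), 0.5), (("B",), 0.5)],
    "A": [(("a",), 2 / 3), (("b",), 1 / 3)],
    "B": [(("a", "a"), 0.5), (("b", "b"), 0.5)],
}

def sample_tree(symbol="S"):
    """Top-down stochastic derivation; returns a (symbol, children) tree."""
    if symbol not in GRAMMAR:                       # terminal symbol
        return (symbol, [])
    rhss, weights = zip(*GRAMMAR[symbol])
    rhs = random.choices(rhss, weights=weights, k=1)[0]
    return (symbol, [sample_tree(child) for child in rhs])

def leaves(tree):
    """Concatenate the terminal leaves of a sampled tree."""
    symbol, children = tree
    return [symbol] if not children else [w for c in children for w in leaves(c)]

# Relative frequencies in a sampled mini-corpus approximate the model
# distribution, which is how the needed expectations are estimated.
corpus = [" ".join(leaves(sample_tree())) for _ in range(10000)]
```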
we can convert the nondeterministic derivations discussed at the beginning of section 3 into stochastic derivations by choosing rule x with probability 0 when expanding a node labeled xsome derivations fail but throwing away failed derivations has the effect of renormalizing the weight function so that we generate a dag x with probability po as desiredthe metropolishastings algorithm provides us with a means of converting the sampler for the initial distribution po into a sampler for the field distribution qgenerally let p be a distribution for which we have a samplerwe wish to construct a sample xl xn from a different distribution qassume that items xl xt are already in the sample and we wish to choose x1the sampler for p proposes a new item ywe do not simply add y to the samplethat would give us a sample from pbut rather we make a stochastic decision whether to accept the proposal y or reject itif we accept y it is added to the sample and if we reject y then xn is repeated the acceptance decision is made as follows if p q then y is overrepresented among the proposalswe can quantify the degree of overrepresentation as pqthe idea is to reject y with a probability corresponding to its degree of overrepresentationhowever we do not consider the absolute degree of overrepresentation but rather the degree of overrepresentation relative to xnthat is we consider the value if are 1 then we accept y with a probability that diminishes as r increases specifically with probability 1rin brief the acceptance probability of y is a minit can be shown that proposing items with probability ph and accepting them with probability a yields a sampler for q2 the acceptance probability a reduces in our case to a particularly simple formif are 1 then a 1otherwise writing 0 for the quotfield weightquot ni 0 we havein summary we cannot simply transplant cf methods to the av grammar casein particular the erf method yields correct weights only for scfgs not for av grammarswe can define a probabilistic version of av grammars with a correct weightselection method by going to random fieldsfeature selection and weight adjustment can be accomplished using the iis algorithmin feature selection we need to use random sampling to find the initial weight for a candidate feature and in weight adjustment we need to use random sampling to solve the weight equationthe random sampling method that ddl used is not appropriate for sets of dags but we can solve that problem by using the metropolishastings method insteadopen questions remainfirst random sampling is notorious for being slow and it remains to be shown whether the approach proposed here will be practicablei expect practicability to be quite sensitive to the choice of grammarthe more the grammarin which 71 is the distribution we wish to sample from and g is the proposal probability the probability that the input sampler will propose y if the previous configuration was xthe case we consider is a special case in which the proposal probability is independent of x the proposal probability g is in our notation pthe original metropolis algorithm is also a special case of the metropolishastings algorithm in which the proposal probability is symmetric that is g gthe acceptance function then reduces to minir which is minq in our notationi mention this only to point out that it is a different special caseour proposal probability is not symmetric but rather independent of the previous configuration and though our acceptance function reduces to a form that is similar to the original 
metropolis acceptance function it is not the same in general 49 0 qq distribution diverges from the initial contextfree approximation the more features will be necessary to quotcorrectquot it and the more random sampling will be called ona second issue is incomplete datathe approach described here assumes complete data fortunately an extension of the method to handle incomplete data is described in riezler and i refer readers to that paperas a closing note it should be pointed out explicitly that the random field techniques described here can be profitably applied to contextfree grammars as wellas stanley peters nicely put it there is a distinction between possibilistic and probabilistic contextsensitivityeven if the language described by the grammar of interestthat is the set of possible treesis contextfree there may well be contextsensitive statistical dependenciesrandom fields can be readily applied to capture such statistical dependencies whether or not l is contextsensitivein the feature selection step we choose an initial weight 3 for each candidate feature f so as to maximize the gain g d11 gold d11cfro of adding f to the fieldit is actually more convenient to consider log weights a ln 3for a given feature f the log weight et that maximizes gain is the solution to the equation where q is the distribution that results from adding f to the field with log weight athis equation can be solved using newton methoddefine to find the value of a for which f 0 we begin at a convenient point ao and iteratively compute f della pietra della pietra and lafferty show that f is equal to the negative of the variance off under the new field which i will write 17fto compute the iteration we need to be able to compute f and ffor f we require pf and qf and f can be expressed as f simply the average value of f in the training corpusthe remaining terms are all of the form qa fl we can reexpress this expectation in terms of the old field gold the expectations qoid fief can be obtained by generating a random sample of size n from gold and computing the average value of leafthat is gold ref and the newton iteration reduces to to compare candidates we also need to know the gain d dq311c16 for each candidatethis can be expressed as follows putting everything together the algorithm for feature selection has the following formthe array ef is assumed to have been initialized with the empirical expectationsthe procedure for adjusting field weights has much the same structure as the procedure for choosing initial weightsin terms of log weights we wish to compute increments such that the new field with log weights has a lower divergence than the old field we choose each 6 as the solution to the equation again we use newton methodwe wish to find 6 such that f1 0 where we see that the expectations we need to compute by sampling from gold are of form q0ldffie6fwe generate a random sample and define as we generate the sample we update the array ci mi ek m 1 we estimate qo1cffte6f as the average value of ftae6f in the sample namely 5rthis permits us to compute f and fthe resulting newton iteration is the estimation procedure is procedure adjust weights begin until the field converges dothis work has greatly profited from the comments criticism and suggestions of a number of people including yoav freund john lafferty stanley peters hans uszkoreit and members of the audience at talks i gave at saarbrucken and tubingenmichael miller and kevin mark introduced me to random fields as a way of dealing with contextsensitivities 
in language, planting the idea that led to this paper. Finally, I would especially like to thank Marc Light and Stefan Riezler for extended discussions of the issues addressed here and helpful criticism of my first attempts to present this material. All responsibility for flaws and errors, of course, remains with me.
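As a concrete illustration of the Metropolis-Hastings construction summarized above: proposals are drawn from a sampler for the initial distribution p0 (for example, the stochastic-derivation sampler sketched earlier), and because the proposal does not depend on the current state, the acceptance probability reduces to min(1, phi(y)/phi(x)), where phi is the field weight. The burn-in period and the representation of features as counting functions are standard assumptions added here, not details given in the text.

```python
import random

def field_weight(x, features, theta):
    """phi(x) = prod_i theta_i ** f_i(x); features are counting functions,
    and the weights theta are assumed to be strictly positive."""
    w = 1.0
    for t, f in zip(theta, features):
        w *= t ** f(x)
    return w

def mh_sample(propose_p0, features, theta, n, burn_in=1000):
    """Sample n configurations from q(x) proportional to p0(x) * phi(x),
    given propose_p0(), a sampler for p0 used as an independence proposal."""
    x = propose_p0()
    out = []
    for step in range(burn_in + n):
        y = propose_p0()
        accept = min(1.0, field_weight(y, features, theta) /
                          field_weight(x, features, theta))
        if random.random() < accept:
            x = y                     # accept the proposal
        # on rejection, x is simply repeated, as described above
        if step >= burn_in:
            out.append(x)
    return out

def expectation(sample, f):
    """Estimate E_q[f] as a sample average, as needed in the weight updates."""
    return sum(f(x) for x in sample) / len(sample)
```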
J97-4005
Stochastic attribute-value grammars. Probabilistic analogues of regular and context-free grammars are well known in computational linguistics and currently the subject of intensive research. To date, however, no satisfactory probabilistic analogue of attribute-value grammars has been proposed: previous attempts have failed to define an adequate parameter-estimation algorithm. In the present paper I define stochastic attribute-value grammars and give an algorithm for computing the maximum-likelihood estimate of their parameters. The estimation algorithm is adapted from Della Pietra, Della Pietra, and Lafferty. To estimate model parameters, it is necessary to compute the expectations of certain functions under random fields. In the application discussed by Della Pietra, Della Pietra, and Lafferty, Gibbs sampling can be used to estimate the needed expectations. The fact that attribute-value grammars generate constrained languages makes Gibbs sampling inapplicable, but I show that sampling can be done using the more general Metropolis-Hastings algorithm. We propose a Markov random field, or log-linear model, for SUBGs.
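To round out the record above, here is a small, self-contained sketch of the improved iterative scaling weight adjustment for the easy case in which the set of configurations is small enough to enumerate, so the expectations can be computed exactly rather than by sampling. Configurations are represented only by their feature-count vectors; the initial distribution, feature vectors, and empirical frequencies below are illustrative assumptions, not the paper's worked example.

```python
import math

def q_dist(configs, p0, theta):
    """q(x) proportional to p0(x) * prod_i theta_i ** f_i(x), normalized."""
    w = [p0[x] * math.prod(t ** f for t, f in zip(theta, fc))
         for x, fc in enumerate(configs)]
    z = sum(w)
    return [wx / z for wx in w]

def iis_adjust(configs, p0, p_emp, theta, sweeps=200):
    """For each feature i, solve q[f_i * exp(delta_i * f#)] = p_emp[f_i]
    by Newton's method, then update theta_i *= exp(delta_i); repeat."""
    theta = list(theta)
    fsharp = [sum(fc) for fc in configs]                  # f#(x): total count
    target = [sum(p * fc[i] for p, fc in zip(p_emp, configs))
              for i in range(len(theta))]                 # empirical E[f_i]
    for _ in range(sweeps):
        q = q_dist(configs, p0, theta)
        for i in range(len(theta)):
            if target[i] == 0:
                continue                                  # crude: leave the weight alone
            delta = 0.0
            for _ in range(20):                           # Newton iterations
                g = sum(qx * fc[i] * math.exp(delta * fs)
                        for qx, fc, fs in zip(q, configs, fsharp))
                gp = sum(qx * fc[i] * fs * math.exp(delta * fs)
                         for qx, fc, fs in zip(q, configs, fsharp))
                if gp == 0 or abs(g - target[i]) < 1e-12:
                    break
                delta -= (g - target[i]) / gp
            theta[i] *= math.exp(delta)
    return theta

# Illustrative use: four configurations described by two feature counts each,
# a uniform initial distribution, and made-up empirical frequencies.
configs = [(1, 0), (1, 1), (0, 1), (0, 0)]
p0 = [0.25] * 4
p_emp = [4 / 12, 2 / 12, 3 / 12, 3 / 12]
theta = iis_adjust(configs, p0, p_emp, [1.0, 1.0])
print(theta, q_dist(configs, p0, theta))
```

When the configuration set is too large to enumerate, the two sums inside the Newton step are the quantities that would instead be estimated from a Metropolis-Hastings sample, as in the sketch given earlier.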
introduction to the special issue on word sense disambiguation the state of the art the automatic disambiguation of word senses has been an interest and concern since the earliest days of computer treatment of language in the 1950ssense disambiguation is an quotintermediate taskquot which is not an end in itself but rather is necessary at one level or another to accomplish most natural language processing tasksit is obviously essential for language understanding applications such as message understanding and manmachine communication it is at least helpful and in some instances required for applications whose aim is not language understanding analysis is to analyze the distribution of predefined categories of wordsie words indicative of a given concept idea theme etcacross a textthe need for sense disambiguation in such analysis in order to include only those instances of a word in its proper sense has long been recognized and is masculine in the former sense feminine in the latter to properly tag it as a masculine nounsense disambiguation is also necessary for certain syntactic analyses such as prepositional phrase attachment and in general restricts the space of competing parses the problem of word sense disambiguation has been described as quotaicompletequot that is a problem which can be solved only by first resolving all the difficult problems in artificial intelligence such as the representation of common sense and encyclopedic knowledgethe inherent difficulty of sense disambiguation was a central point in barhillel wellknown treatise on machine translation where he asserted that he saw no means by which the sense of the word pen in the sentence the box is in the pen could be determined automaticallybarhillel argument laid the groundwork for the alpac report which is generally regarded as the direct because for the abandonment of most research on machine translation in the early 1960sat about the same time considerable progress was being made in the area of knowledge representation especially the emergence of semantic networks which were immediately applied to sense disambiguationwork on word sense disambiguation continued throughout the next two decades in the framework of aibased natural language understanding research as well as in the fields of content analysis stylistic and literary analysis and information retrievalin the past ten years attempts to automatically disambiguate word senses have multiplied due like much other similar activity in the field of computational linguistics to the availability of large amounts of machinereadable text and the corresponding development of statistical methods to identify and apply information about regularities in this datanow that other problems amenable to these methods such as partofspeech disambiguation and alignment of parallel translations have been fairly thoroughly addressed the problem of word sense disambiguation has taken center stage and it is frequently cited as one of the most important problems in natural language processing research todaygiven the progress that has been recently made in wsd research and the rapid development of methods for solving the problem it is appropriate at this time to stand back and assess the state of the field and to consider the next steps that need to be takento this end this paper surveys the major wellknown approaches to word sense disambiguation and considers the open problems and directions of future researchin general terms word sense disambiguation involves the association of a given word in 
a text or discourse with a definition or meaning which is distinguishable from other meanings potentially attributable to that wordthe task therefore necessarily involves two steps the determination of all the different senses for every word relevant to the text or discourse under consideration and a means to assign each occurrence of a word to the appropriate sensemuch recent work on wsd relies on predefined senses for step including the precise definition of a sense is however a matter of considerable debate within the communitythe variety of approaches to defining senses has raised concern about the comparability of much wsd work and given the difficulty of the problem of sense definition no definitive solution is likely to be found soon however since the earliest days of wsd work there has been general agreement that the problems of morphosyntactic disambiguation and sense disambiguation can be disentangled that is for homographs with different parts of speech morphosyntactic disambiguation accomplishes sense disambiguation and therefore wsd work has focused largely on distinguishing senses among homographs belonging to the same syntactic categorystep the assignment of words to senses is accomplished by reliance on two major sources of information all disambiguation work involves matching the context of the instance of the word to be disambiguated with either information from an external knowledge source or information about the contexts of previously disambiguated instances of the word derived from corpora any of a variety of association methods is used to determine the best match between the current context and one of these sources of information in order to assign a sense to each word occurrencethe following sections survey the approaches applied to datethe first attempts at automated sense disambiguation were made in the context of machine translation in his famous memorandum weaver discusses the need for wsd in machine translation and outlines the basis of an approach to wsd that underlies all subsequent work on the topic if one examines the words in a book one at a time as through an opaque mask with a hole in it one word wide then it is obviously impossible to determine one at a time the meaning of the wordsbut if one lengthens the slit in the opaque mask until one can see not only the central word in question but also say n words on either side then if n is large enough one can unambiguously decide the meaning of the central wordthe practical question is quotwhat minimum value of n will at least in a tolerable fraction of cases lead to the correct choice of meaning for the central wordquot a wellknown early experiment by kaplan attempted to answer this question at least in part by presenting ambiguous words in their original context and in a variant context providing one or two words on either side to seven translatorskaplan observed that sense resolution given two words on either side of the word was not significantly better or worse than when given the entire sentencethe same phenomenon has been reported by several researchers since kaplan work appeared eg masterman koutsoudas and korthage on russian and gougenheim and michea and choueka and lusignan on frenchreifler quotsemantic coincidencesquot between a word and its context quickly became the determining factor in wsdthe complexity of the context and in particular the role of syntactic relations was also recognized for example reifler says grammatical structure can also help disambiguate as for instance the word keep which 
can be disambiguated by determining whether its object is gerund adjectival phrase or noun phrase the goal of mt was initially modest focused primarily on the translation of technical texts and in all cases dealing with texts from particular domainsweaver discusses the role of the domain in sense disambiguation making a point that was reiterated several decades later by gale church and yarowsky in mathematics to take what is probably the easiest example one can very nearly say that each word within the general context of a mathematical article has one and only one meaning following directly from this observation much effort in the early days of machine translation was devoted to the development of specialized dictionaries or quotmicroglossariesquot such microglossaries contain only the meaning of a given word relevant for texts in a particular domain of discourse eg a microglossary for the domain of mathematics would contain only the relevant definition of triangle and not the definition of triangle as a musical instrumentthe need for knowledge representation for wsd was also acknowledged from the outset weaver concludes by noting the quottremendous amount of work needed in the logical structure of languagesquot several researchers attempted to devise ide and veronis introduction an quotinterlinguaquot based on logical and mathematical principles that would solve the disambiguation problem by mapping words in any language to a common semanticconceptual representationamong these efforts those of richens and masterman eventually led to the notion of the quotsemantic networkquot following on this the first machineimplemented knowledge base was constructed from roget thesaurus masterman applied this knowledge base to the problem of wsd in an attempt to translate virgil georgics by machine she looked up for each latin word stem the translation in a latinenglish dictionary and then looked up this word in the wordtohead index of rogetin this way each latin word stem was associated with a list of roget head numbers associated with its english equivalentsthe numbers for words appearing in the same sentence were then examined for overlapsfinally english words appearing under the multiplyoccurring head categories were chosen for the translationmasterman methodology is strikingly similar to that underlying much of the knowledgebased wsd accomplished recently it is interesting to note that weaver text also outlined the statistical approach to language analysis prevalent now nearly fifty years later this approach brings into the foreground an aspect of the matter that probably is absolutely basicnamely the statistical character of the problem and it is one of the chief purposes of this memorandum to emphasize that statistical semantic studies should be undertaken as a necessary primary step several authors followed this approach in the early days of machine translation estimations of the degree of polysemy in texts and dictionaries were made harper working on russian texts determined the number of polysemous words in an article on physics to be approximately 30 and 43 in another sample of scientific writing he also found that callaham russianenglish dictionary provides on average 86 english equivalents for each russian word of which 56 are quasisynonyms thus yielding approximately three distinct english equivalents for each russian wordbelkaja reports that in the first computerized russian dictionary 500 out of 2000 words are polysemouspimsleur introduced the notion of levels of depth for a translation 
level 1 uses the most frequent equivalent producing a text where 80 of the words are correctly translated level 2 distinguishes additional meanings producing a translation which is 90 correct etcalthough the terminology is different this is very similar to the notion of baseline tagging used in modern work a convincing implementation of many of these ideas was made several years later paradoxically at the moment when mt began its declinemadhu and lytle working from the observation that domain constrains sense calculated sense frequency for texts in different domains and applied a bayesian formula to determine the probability of each sense in a given contexta technique similar to that applied in much later work and which yielded a similar 90 correct disambiguation result the striking fact about this early work on wsd is the degree to which the fundamental problems and approaches to the problem were foreseen and developed at that timehowever without largescale resources most of these ideas remained untested and to a large extent forgotten until several decades lateral methods began to flourish in the early 1960s and began to attack the problem of language understandingas a result wsd in al work was typically accomplished in the context of larger systems intended for full language understandingin the spirit of the times such systems were almost always grounded in some theory of human language understanding that they attempted to model and often involved the use of detailed knowledge about syntax and semantics to perform their task which was exploited for wsd in the late 1950s and were immediately applied to the problem of representing word meanings2 masterman working in the area of machine translation used a semantic network to derive the representation of sentences in an interlingua comprised of fundamental language concepts sense distinctions are implicitly made by choosing representations that reflect groups of closely related nodes in the networkshe developed a set of 100 primitive concept types in terms of which her group built a 15000entry concept dictionary where concept types are organized in a lattice with inheritance of properties from superconcepts to subconceptsbuilding on this and on work on semantic networks by richens quillian built a network that includes links among words and concepts in which links are labeled with various semantic relations or simply indicate associations between wordsthe network is created starting from dictionary definitions but is enhanced by human knowledge that is handencodedwhen two words are presented to the network quillian program simulates the gradual activation of concept nodes along a path of links originating from each input word by means of marker passing disambiguation is accomplished because only one concept node associated with a given input word is likely to be involved in the most direct path found between the two input wordsquillian work informed later dictionarybased approaches to wsd subsequent aibased approaches exploited the use of frames containing information about words and their roles and relations to other words in individual sentencesfor example hayes uses a combination of a semantic network and case framesthe network consists of nodes representing noun senses and links represented by verb senses case frames impose isa and partof relations on the networkas in quillian system the network is traversed to find chains of connections between wordshayes work shows that homonyms can be fairly accurately disambiguated using this 
approach but it is less successful for other kinds of polysemyhirst also uses a network of frames and again following quillian marker passing to find minimumlength paths of association between frames for senses of words in context in order to choose among themhe introduces quotpolaroid wordsquot a mechanism which progressively eliminates inappropriate senses based on syntactic evidence provided by the parser together with semantic relations found in the frame networkeventually only one sense remains however hirst reports that in cases where some word in the sentence is used metaphorically metonymically or in an unknown sense the polaroids often end by eliminating all possible senses and failwilks preference semantics which uses masterman primitives is essentially a casebased approach to natural language understanding and one of the first specifically designed to deal with the problem of sense disambiguationpreference semantics specifies selectional restrictions for combinations of lexical items in a sentence that can be relaxed when a word with the preferred restrictions does not appear thus enabling especially the handling of metaphor boguraev shows that preference semantics is inadequate to deal with polysemous verbs and attempts to improve on wilks method by using a combination of evidence including selectional restrictions preferences case frames etche integrates semantic disambiguation with structural disambiguation to enable judgments about the semantic coherence of a given sense assignmentlike many other systems of the era these systems are sentencebased and do not account for phenomena at other levels of discourse such as topical and domain informationthe result is that some kinds of disambiguation are difficult or impossible to accomplisha rather different approach to language understanding which contains a substantial sense discrimination component is the word expert parser the approach derives from the somewhat unconventional theory that human knowledge about language is organized primarily as knowledge about words rather than rulestheir system models what its authors feel is the human language understanding process a coordination of information exchange among word experts about syntax and semantics as each determines its involvement in the environment under questioneach expert contains a discrimination net for all senses of the word which is traversed on the basis of information supplied by the context and other word experts ultimately arriving at a unique sense which is then added to a semantic representation of the sentencethe wellknown drawback of the system is that the word experts need to be extremely large and complex to accomplish the goal which is admittedly greater than sense disambiguationdahlgren language understanding system includes a sense disambiguation component that uses a variety of types of information fixed phrases syntactic information and commonsense reasoningthe reasoning module because it is computationally intensive is invoked only in cases where the other two methods fail to yield a resultalthough her original assumption was that much disambiguation could be accomplished based on paragraph topic she found that half of the disambiguation was actually accomplished using fixed phrase and syntactic information while the other half was accomplished using commonsense reasoningreasoning often involves traversing an ontology to find common ancestors for words in context her work anticipates resnik results by determining that ontological similarity involving a 
common ancestor in the ontology is a powerful disambiguator. She also notices that verb selectional restrictions are an important source of disambiguation information. Work in psycholinguistics established that semantic priming, a process in which the introduction of a certain concept will influence and facilitate the processing of subsequently introduced concepts that are semantically related, plays a role in disambiguation by humans. This idea is realized in spreading activation models, where concepts in a semantic network are activated upon use and activation spreads to connected nodes. Activation is weakened as it spreads, but certain nodes may receive activation from several sources and be progressively reinforced. McClelland and Rumelhart added to the model by introducing the notion of inhibition among nodes, where the activation of a node might suppress, rather than activate, certain of its neighbors. Applied to lexical disambiguation, this approach assumes that activating a node corresponding to, say, the concept throw will activate the "physical object" sense of ball, whose activation would in turn inhibit the activation of other senses of ball, such as "social event". Quillian's semantic network, described above, is the earliest implementation of a spreading activation network used for word sense disambiguation. A similar model is implemented by Cottrell and Small (see also Cottrell). In both of these models, each node in the network represents a specific word or concept. Waltz and Pollack and Bookman hand-encode sets of semantic "microfeatures", corresponding to fundamental semantic distinctions, characteristic durations of events, locations, and other similar distinctions, in their networks. In Waltz and Pollack, sets of microfeatures have to be manually primed by a user to activate a context for disambiguating a subsequent input word, but Bookman describes a dynamic process in which the microfeatures are automatically activated by the preceding text, thus acting as a short-term context memory. In addition to these local models, distributed models have also been proposed; however, whereas local models can be constructed a priori, distributed models require a learning phase using disambiguated examples, which limits their practicality. The difficulty of handcrafting the knowledge sources required for AI-based systems restricted them to "toy" implementations handling only a tiny fraction of the language. Consequently, disambiguation procedures embedded in such systems are most usually tested on only a very small test set in a limited context, making it impossible to determine their effectiveness on real texts. For less obvious reasons, many of the AI-based disambiguation results involve highly ambiguous words, fine sense distinctions, and unlikely test sentences, which make the results even less easy to evaluate in the light of the now-known difficulties of discriminating even gross sense distinctions. The AI-based work of the 1970s and 1980s was theoretically interesting but not at all practical for language understanding in any but extremely limited domains. A significant roadblock to generalizing WSD work was the difficulty and cost of handcrafting the enormous amounts of knowledge required for WSD: the so-called "knowledge acquisition bottleneck". Work on WSD reached a turning point in the 1980s, when large-scale lexical resources such as dictionaries, thesauri, and corpora became widely available. Attempts were made to automatically extract knowledge from these sources and, more recently, to construct large-scale knowledge bases by hand. A corresponding shift away from methods based in linguistic
theories and towards empirical methods also occurred at this time as well as a decrease in emphasis on doall systems in favor of quotintermediatequot tasks such as wsdheidon 1985 markowitz ahlswede and evens 1986 byrd et al 1987 nakamura and nagao 1988 klavans chodorow and wacholder 1990 wilks et al1990this work contributed significantly to lexical semantic studies but it appears that the initial goalthe automatic extraction of large knowledge baseswas not fully achieved the only currently widely available largescale lexical knowledge base was created by handwe have elsewhere demonstrated the difficulties of automatically extracting relations as simple as hyperonymy in large part due to the inconsistencies in dictionaries themselves as well as the fact that dictionaries are created for human use and not for machine exploitationdespite its shortcomings the machinereadable dictionary provides a readymade source of information about word senses and therefore rapidly became a staple of wsd researchthe methods employed attempt to avoid the problems cited above by using the text of dictionary definitions directly together with methods sufficiently robust to reduce or eliminate the effects of a given dictionary inconsistenciesall of these methods rely on the notion that the most plausible sense to assign to multiple cooccurring words is the one that maximizes the relatedness among the chosen senseslesk created a knowledge base that associated with each sense in a dictionary a quotsignaturequot6 composed of the list of words appearing in the definition of that sensedisambiguation was accomplished by selecting the sense of the target word whose signature contained the greatest number of overlaps with the signatures of neighboring words in its contextthe method achieved 5070 correct disambiguation using a relatively fine set of sense distinctions such as those found in a typical learner dictionarylesk method is very sensitive to the exact wording of each definition the presence or absence of a given word can radically alter the resultshowever lesk method has served as the basis for most subsequent mrdbased disambiguation workwilks et al attempted to improve the knowledge associated with each sense by calculating the frequency of cooccurrence for the words in definition texts from which they derive several measures of the degree of relatedness among wordsthis metric is then used with the help of a vector method that relates each word and its contextin experiments on a single word the method achieved 45 accuracy on sense identification and 90 accuracy on homograph identificationlesk method has been extended by creating a neural network from definition texts in the collins english dictionary in which each word is linked to its senses which are themselves linked to the words in their definitions which are in turn linked to their senses etc7 experiments on 23 ambiguous words each in six contexts produced correct disambiguation using the relatively fine sense distinctions in the ced in 717 of the cases in later experiments improving the parameters and only distinguishing homographs enabled a rate of 85 applied to the task of mapping the senses of the ced and oald for the same 23 words this method obtained a correct correspondence in 90 of the cases at the sense level and 97 at the level of homographs sutcliffe and slater replicated this method on full text and found similar results several authors have attempted to improve results by using supplementary fields of information in the electronic version of 
the longman dictionary of contemporary english in particular the box codes and subject codes provided for each sensebox codes include primitives such as abstract animate human etc and encode type restrictions on nouns and adjectives and on the arguments of verbssubject codes use another set of primitives to classify senses of words by subject guthrie et al demonstrate a typical use of this information in addition to using the leskbased method of counting overlaps between definitions and contexts they impose a correspondence of subject codes in an iterative processno quantitative evaluation of this method is available but cowie guthrie and guthrie improve the method using simulated annealing and report results of 47 for sense distinctions and 72 for homographsthe use of ldoce box codes however is problematic the codes are not systematic in later work bradenharder showed that simply matching box or subject codes is not sufficient for disambiguationfor example in i tipped the driver the codes for several senses of the words in the sentence satisfy the necessary constraints 7 note that the assumptions underlying this method are very similar to quillian thus one may think of a full concept analogically as consisting of all the information one would have if he looked up what will be called the quotpatriarchquot word in a dictionary then looked up every word in each of its definitions then looked up every word found in each of these and so on continually branching outward however quillian network also keeps track of semantic relationships among the words encountered along the path between two words which are encoded in his semantic network the neural network avoids the overhead of creating the semantic network but loses this relational informationide and wronis introduction in many ways the supplementary information in the ldoce and in particular the subject codes is similar to that in a thesaurus which however is more systematically structuredinconsistencies in dictionaries noted earlier are not the only and perhaps not the major source of their limitations for wsdwhile dictionaries provide detailed information at the lexical level they lack pragmatic information that enters into sense determination for example the link between ash and tobacco cigarette or tray in a network such as quillian is very indirect whereas in the brown corpus the word ash cooccurs frequently with one of these wordsit is therefore not surprising that corpora have become a primary source of information for wsd this development is outlined below in section 23232 thesaurithesauri provide information about relationships among words most notably synonymyroget international thesaurus which was put into machinetractable form in the 1950s and has been used in a variety of applications including machine translation information retrieval and content analysis also supplies an explicit concept hierarchy consisting of up to eight increasingly refined levels8 typically each occurrence of the same word under different categories of the thesaurus represents different senses of that word ie the categories correspond roughly to word senses a set of words in the same category are semantically relatedthe earliest known use of roget for wsd is the work of masterman described above in section 21several years later patrick used roget to discriminate among verb senses by examining semantic clusters formed by quotechainsquot derived from the thesaurus he uses quotwordstrong neighborhoodsquot comprising word groups in lowlevel semicolon groups 
which are the most closely related semantically in the thesaurus and words connected to the group via chainshe is able he claims to discriminate the correct sense of verbs such as inspire and question with high reliabilitybryan earlier work had already demonstrated that homographs can be distinguished by applying a metric based on relationships defined by his chains similar work is described in sedelow and mooney yarowsky derives classes of words by starting with words in common categories in roget a 100word context of each word in the category is extracted from a corpus and a mutualinformationlike statistic is used to identify words most likely to cooccur with the category membersthe resulting classes are used to disambiguate new occurrences of a polysemous word the 100word context of the polysemous occurrence is examined for words in various classes and bayes rule is applied to determine the class most likely to be that of the polysemous wordsince class is assumed by yarowsky to represent a particular sense of a word assignment to a class identifies the sensehe reports 92 accuracy on a mean threeway sense distinctionyarowsky notes that his method is best for extracting topical information which is in turn most successful for disambiguating nouns he uses the broad category distinctions supplied by roget although he points out that the lowerlevel information may provide rich information for disambiguationpatrick much earlier study on the other hand exploits the lower levels of the concept hierarchy in which words are more closely related semantically as well as connections among words within the thesaurus itself however despite its promise this work has not been built upon sincelike machinereadable dictionaries a thesaurus is a resource created for humans and is therefore not a source of perfect information about word relationsit is widely recognized that the upper levels of its concept hierarchy are open to disagreement and that they are so broad as to be of little use in establishing meaningful semantic categoriesnonetheless thesauri provide a rich network of word associations and a set of semantic categories potentially valuable for languageprocessing work however roget and other thesauri have not been used extensively for wsd9 wordnet combines the features of many of the other resources commonly exploited in disambiguation work it includes definitions for individual senses of words within it as in a dictionary it defines quotsynsetsquot of synonymous words representing a single lexical concept and organizes them into a conceptual hierarchy1 like a thesaurus and it includes other links among words according to several semantic relations including hyponymyhyperonymy antonymy and meronymyas such it currently provides the broadest set of lexical information in a single resourceanother possibly more compelling reason for wordnet widespread use is that it is the first broadcoverage lexical resource that is freely and widely available as a result whatever its limitations wordnet sense divisions and lexical relations are likely to impact the field for several years to come11 some of the earliest attempts to exploit wordnet for sense disambiguation are in the field of information retrievalusing the hyponomy links for nouns in wordnet voorhees defines a construct called a hood in order to represent sense categories much as roget categories are used in the methods outlined abovea hood for a given word w is defined as the largest connected subgraph that contains w for each content ide and veronis 
introduction word in a document collection voorhees computes the number of times each synset appears above that word in the wordnet noun hierarchy which gives a measure of the expected activity she then performs the same computation for words occurring in a particular document or query the sense corresponding to the hood root for which the difference between the global and local counts is the greatest is chosen for that wordher results however indicate that her technique is not a reliable method for distinguishing wordnet finegrained sense distinctionsin a similar study richardson and smeaton create a knowledge base from wordnet hierarchy and apply a semantic similarity function to accomplish disambiguation also for the purposes of information retrievalthey provide no formal evaluation but indicate that their results are quotpromisingquot sussna computes a semantic distance metric for each of a set of input text terms in order to disambiguate themhe assigns weights based on the relation type to wordnet links and defines a metric that takes account of the number of arcs of the same type leaving a node and the depth of a given edge in the overall quottreequot this metric is applied to arcs in the shortest path between nodes to compute semantic distancethe hypothesis is that for a given set of terms occurring near each other in a text choosing the senses that minimize the distance among them selects the correct sensessussna disambiguation results are demonstrated to be significantly better than chancehis work is particularly interesting because it is one of the few to date that utilizes not only wordnet isa hierarchy but other relational links as wellresnik draws on his body of earlier work on wordnet in which he explores a measure of semantic similarity for words in the wordnet hierarchy he computes the shared information content of words which is a measure of the specificity of the concept that subsumes the words in the wordnet isa hierarchythe more specific the concept that subsumes two or more words the more semantically related they are assumed to beresnik contrasts his method of computing similarity to those which compute path length arguing that the links in the wordnet taxonomy do not represent uniform distances resnik method applied using wordnet finegrained sense distinctions and measured against the performance of human judges approaches human accuracylike the other studies cited here his work considers only nounswordnet is not a perfect resource for word sense disambiguationthe most frequently cited problem is the finegrainedness of wordnet sense distinctions which are often well beyond what may be needed in many languageprocessing applications voorhees hood construct is an attempt to access sense distinctions that are less finegrained than wordnet synsets and less coarsegrained than the 10 wordnet noun hierarchies resnik method allows for detecting sense distinctions at any level of the wordnet hierarchyhowever it is not clear what the desired level of sense distinction should be for wsd or if this level is even captured in wordnet hierarchydiscussion within the languageprocessing community is beginning to address these issues including the most difficult one of defining what we mean by quotsensequot as outlined in buitelaar sense disambiguation in the generative context starts with a semantic tagging that points to a complex knowledge representation reflecting all of a word systematically related senses after which semantic processing may derive a discoursedependent 
interpretation containing more precise sense information about the occurrencebuitelaar describes the use of corelex for underspecified semantic tagging viegas mahesh and nirenburg describe a similar approach to wsd undertaken in the context of their work on machine translation they access a large syntactic and semantic lexicon that provides detailed information about constraints such as selectional restrictions for words in a sentence and then search a richly connected ontology to determine which senses of the target word best satisfy these constraintsthey report a success rate of 97like corelex both the lexicon and the ontology are manually constructed and therefore still limited although much larger than the resources used in earlier workhowever buitelaar describes means to automatically generate corelex entries from corpora in order to create domainspecific semantic lexicons thus demonstrating the potential to access largerscale resources of this kindthe nineteenth century the manual analysis of corpora has enabled the study of words and graphemes and the extraction of lists of words and collocations for the study of language acquisition or language teaching corpora have been used in linguistics since the first half of the twentieth century some of this work concerns word senses and it is often strikingly modern for example palmer studied collocations in english lorge computed sense frequency information for the 570 most common english words eaton compared the frequency of senses in four languages and thorndike and zipf determined that there is a positive correlation between the frequency and the number of synonyms of a word the latter of which is an indication of semantic richness a corpus provides a bank of samples that enable the development of numerical language models and thus the use of corpora goes handinhand with empirical methodsalthough quantitativestatistical methods were embraced in early mt work in the mid1960s interest in statistical treatment of language waned among linguists due to the trend toward the discovery of formal linguistic rules sparked by the theories of zellig harris and bolstered most notably by the transformational theories of noam chomsky 12 instead attention turned toward full linguistic analysis and hence toward sentences rather than texts and toward contrived examples and artificially limited domains instead of general languageduring the following 10 to it would be difficult indeed in the face of today activity not to acknowledge the triumph of the theoretical approach more precisely of formal rules as the preferred successor of lexical and syntactic search algorithms in linguistic descriptionat the same time common sense should remind us that hypothesismaking is not the whole of science and that discipline will be needed if the victory is to contribute more than a haven from the rigors of experimentation ide and voronis introduction 15 years only a handful of linguists continued to work with corpora most often for pedagogical or lexicographic ends despite this several important corpora were developed during this period including the brown corpus the tresor de la lan gue francaise and the lancasteroslobergen corpus in the area of natural language processing the alpac report recommended intensification of corpusbased research for the creation of broadcoverage grammars and lexicons but because of the shift away from empiricism little work was done in this area until the 1980suntil then the use of statistics for language analysis was almost the exclusive 
property of researchers in the fields of literary and humanities computing information retrieval and the social scienceswithin these fields work on wsd continued most notably in the harvard quotdisambiguation projectquot for content analysis and also in the work of iker choueka and dreizin and choueka and goldberg in the context of the shift away from the use of corpora and empirical methods the work of weiss and kelley and stone on the automatic extraction of knowledge for word sense disambiguation seems especially innovativeweiss demonstrated that disambiguation rules can be learned from a manually sensetagged corpusdespite the small size of his study weiss results are encouraging kelley and stone work which grew out of the harvard quotdisambiguation projectquot for content analysis is on a much larger scale they extract kwic concordances for 1800 ambiguous words from a corpus of a halfmillion wordsthe concordances serve as a basis for the manual creation of disambiguation rules for each sense of the 1800 wordsthe testsalso very sophisticated for the timeexamine the target word context for clues on the basis of collocational information syntactic relations with context words and membership in common semantic categoriestheir rules perform even better than weiss achieving 92 accuracy for gross homographic sense distinctionsin the 1980s interest in corpus linguistics was revived advances in technology enabled the creation and storage of corpora larger than had been previously possible enabling the development of new models most often utilizing statistical methodsthese methods were rediscovered first in speech processing and were immediately applied to written language analysis for a discussion see ide and walker in the area of word sense disambiguation black developed a model based on decision trees using a corpus of 22 million tokens after manually sensetagging approximately 2000 concordance lines for five test wordssince then supervised learning from sensetagged corpora has since been used by several researchers zernik hearst leacock towell and voorhees gale church and yarowsky bruce and wiebe miller et al niwa and nitta lehman among othershowever despite the availability of increasingly large corpora two major obstacles impede the acquisition of lexical knowledge from corpora the difficulties of manually sensetagging a training corpus and data sparseness distributes a corpus of approximately 200000 sentences from the brown corpus and the wall street journal in which all occurrences of 191 words are handtagged with their wordnet senses also the cognitive science laboratory at princeton has undertaken the handtagging of 1000 words from the brown corpus with wordnet senses and handtagging of 25 verbs in a small segment of the wall street journal is also underway however these corpora are far smaller than those typically used with statistical methodsseveral efforts have been made to automatically sensetag a training corpus via bootstrapping methodshearst proposed an algorithm that includes a training phase during which each occurrence of a set of nouns to be disambiguated is manually sensetagged in several occurrencesstatistical information extracted from the context of these occurrences is then used to disambiguate other occurrencesif another occurrence can be disambiguated with certitude the system automatically acquires additional statistical information from these newly disambiguated occurrences thus improving its knowledge incrementallyhearst indicates that an initial set of at least 10 
occurrences is necessary for the procedure and that 20 or 30 occurrences are necessary for high precisionthis overall strategy is more or less that of most subsequent work on bootstrappingrecently a classbased bootstrapping method for semantic tagging in specific domains has been proposed schiitze proposes a method that avoids tagging each occurrence in the training corpususing letter fourgrams within a 1001character window his method building on the vectorspace model from information retrieval automatically clusters the words in the text a sense is then assigned manually to each cluster rather than to each occurrenceassigning a sense demands examining 10 to 20 members of each cluster and each sense may be represented by several clustersthis method reduces the amount of manual intervention but still requires the examination of a hundred or so occurrences for each ambiguous worda more serious issue for this method is that it is not clear what the senses derived from the clusters correspond to moreover the senses are not directly usable by other systems since they are derived from the corpus itselfbrown et al and gale church and yarowsky propose the use of bilingual corpora to avoid handtagging of training datatheir premise is that different senses of a given word often translate differently in another language by using a parallel aligned corpus the translation of each occurrence of a word such as pen can be used to automatically determine its sensethis method has some limitations since many ambiguities are preserved in the target language furthermore the few available largescale parallel corpora are very specialized which skews the sense representationdagan itai and schwall and dagan and itai propose a similar method but instead of a parallel corpus use two monolingual corpora and a bilingual dictionarythis solves in part the problems of availability and specificity of domain that plague the parallel corpus approach since monolingual corpora including corpora from diverse domains and genres are much easier to obtain than parallel corporaide and veronis introduction other methods attempt to avoid entirely the need for a tagged corpus such as many of those cited in the section below however it is likely that as noted for grammatical tagging even a minimal phase of supervised learning improves radically on the results of unsupervised methodsresearch into means to facilitate and optimize tagging is ongoing for example an optimization technique called committeebased sample selection has recently been proposed which based on the observation that a substantial portion of manually tagged examples contribute little to performance enables avoiding the tagging of examples that carry more or less the same informationsuch methods are promising although to our knowledge they have not been applied to the problem of lexical disambiguation for much corpusbased work is especially severe for work in wsdfirst enormous amounts of text are required to ensure that all senses of a polysemous word are represented given the vast disparity in frequency among sensesfor example in the brown corpus the relatively common word ash occurs only eight times and only once in its sense as treethe sense ashes remains of cremated body although common enough to be included in learner dictionaries such as the ldoce and the oald does not appear and it would be nearly impossible to find the dozen or so senses in many everyday dictionaries such as the cedin addition the many possible cooccurrences for a given polysemous word are 
unlikely to be found in even a very large corpus or they occur too infrequently to be significantsmoothing is used to get around the problem of infrequently occurring events and in particular to ensure that nonobserved events are not assumed to have a probability of zerothe bestknown smoothing methods are that of turinggood which hypothesizes a binomial distribution of events and that of jelinek and mercer which combines estimated parameters on distinct subparts of the training corpushowever these methods do not enable distinguishing between events with the same frequency such as the ashcigarette and ashroom example given in footnote 15church and gale have proposed a means to improve methods for the estimation of bigrams which could be extended to cooccurrences they take into account the frequency of the individual words that compose the bigram and make the hypothesis that each word appears independently of the othershowever this hypothesis contradicts hypotheses of disambiguation based on cooccurrence which rightly assume that some associations are more probable than othersclassbased models attempt to obtain the best estimates by combining observations of classes of words considered to belong to a common categorybrown et al pereira and tishby and pereira tishby and lee propose methods that derive classes from the distributional properties of the corpus itself while other authors use external information sources to define classes resnik uses the taxonomy of wordnet yarowsky uses the categories of roget thesaurus slator and liddy and paik use the subject codes in the ldoce luk uses conceptual sets built from the ldoce definitionsclassbased methods answer in part the problem of data sparseness and eliminate the need for pretagged 15 for example in a window of five words to each side of the word ash in the brown corpus commonly associated words such as fire cigar volcano etc do not appearthe words cigarette and tobacco cooccur with ash only once with the same frequency as words such as room bubble and house16 see the survey of methods in chen and goodman datahowever there is some information loss with these methods because the hypothesis that all words in the same class behave in a similar fashion is too strongfor example residue is a hypernym of ash in wordnet its hyponyms form the class ash cotton cake dottle obviously the members of this set of words behave very differently in context volcano is strongly related to ash but has little or no relation to the other words in the setsimilaritybased methods dagan marcus and markovitch 1993 dagan pereira and lee 1994 and grishman and sterling 1993 exploit the same idea of grouping observations for similar words but without regrouping them into fixed classeseach word has a potentially different set of similar wordslike many classbased methods such as brown et al similaritybased methods exploit a similarity metric between patterns of cooccurrencedagan marcus and markovitch give the following example the pair does not appear in their corpus however chapter is similar to book introduction and section which are paired with describes in the corpuson the other hand the words similar to book are books documentation and manuals dagan marcus and markovitch evaluation seems to show that similaritybased methods perform better than classbased methodskarov and edelman propose an extension to similaritybased methods by means of an iterative process at the learning stage which gives results that are 92 accurate on four test wordsapproximately the same as the best 
results cited in the literature to datethese results are particularly impressive given that the training corpus contains only a handful of examples for each word rather than the hundreds of examples required by most methodswe have already noted various problems faced in current wsd research related to specific methodologieshere we discuss issues and problems that all approaches to wsd must face and suggest some directions for further workcontext is the only means to identify the meaning of a polysemous wordtherefore all work on sense disambiguation relies on the context of the target word to provide information to be used for its disambiguationfor datadriven methods context also provides the prior knowledge with which current context is compared to achieve disambiguationbroadly speaking context is used in two ways information from microcontext topical context and domain contributes to sense selection but the relative roles and importance of information from the different contexts and their interrelations are not well understoodvery few studies have used ide and veronis introduction information of all three types and the focus in much recent work is on microcontext alonethis is another area where systematic study is needed for wsd311 microcontextmost disambiguation work uses the local context of a word occurrence as a primary information source for wsdlocal or quotmicroquot context is generally considered to be some small window of words surrounding a word occurrence in a text or discourse from a few words of context to the entire sentence in which the target word appearscontext is very often regarded as all words or characters falling within some window of the target with no regard for distance syntactic structure or other relationsearly corpusbased work such as that of weiss used this approach spreading activation and dictionarybased approaches also do not usually differentiate context input on any basis other than occurrence in a windowschtitze vector space method is a recent example of an approach that ignores adjacency informationoverall the bagofwords approach has been shown to work better for nouns than for verbs and to be in general less effective than methods that take other relations into considerationhowever as demonstrated in yarowsky work the approach is cheaper than those requiring more complex processing and can achieve sufficient disambiguation for some applicationswe examine below some of the other parametersdistanceit is obvious from the quotation in section 21 from weaver memorandum that the notion of examining a context of a few words around the target to disambiguate has been fundamental to wsd work since its beginnings it has been the basis of wsd work in mt content analysis aibased disambiguation and dictionarybased wsd as well as the more recent statistical neural network and symbolic machine learning approacheshowever following kaplan early experiments there have been few systematic attempts to answer weaver question concerning the optimal value of n a notable exception is the study of choueka and lusignan who verified kaplan finding that 2contexts are highly reliable for disambiguation and even 1contexts are reliable in 8 out of 10 caseshowever despite these findings the value of n has continued to vary over the course of wsd work more or less arbitrarilyyarowsky examines different windows of microcontext including 1contexts kcontexts and words pairs at offsets 1 and 2 1 and 1 and 1 and 2 and sorts them using a loglikelihood ratio to find the most reliable evidence 
for disambiguationyarowsky makes the observation that the optimal value of k varies with the kind of ambiguity he suggests that local ambiguities need only a window of k 3 or 4 while semantic or topicbased ambiguities require a larger window of 2050 words no single best measure is reported suggesting that for different ambiguous words different distance relations are more efficientfurthermore because yarowsky also uses other information it is difficult to isolate the impact of windowsize aloneleacock chodorow and miller use a local window of 3 openclass words arguing that this number showed best performance in previous testscollocationthe term quotcollocationquot has been used variously in wsd workthe term was popularized by j r firth in his 1951 paper quotmodes of meaningquot quotone of the meanings of ass is its habitual collocation with an immediately preceding you sillyquot he emphasizes that collocation is not simple cooccurrence but is quothabitualquot or quotusualquot17 halliday definition of collocation as quotthe syntagmatic association of lexical items quantifiable textually as the probability that there will occur at n removes from an item x the items a b c quot is more workable in computational termsbased on this definition a significant collocation can be defined as a syntagmatic association among lexical items where the probability of item x cooccurring with items a b c is greater than chance it is in this sense that most wsd work uses the termthere is some psychological evidence that collocations are treated differently from other cooccurrencesfor example kintsch and mross show that priming words that enter frequent collocations with test words activate these test words in lexical decision tasksconversely priming words that are in the thematic context do not facilitate the subjects lexical decisions yarowsky explicitly addresses the use of collocations in wsd work but admittedly adapts the definition to his purpose as quotthe cooccurrence of two words in some defined relationquot as noted above he examines a variety of distance relations but also considers adjacency by part of speech he determines that in cases of binary ambiguity there exists one sense per collocation that is in a given collocation a word is used with only one sense with 9099 probabilitysyntactic relationsearl used syntax exclusively for disambiguation in machine translationin most wsd work to date syntactic information is used in conjunction with other informationthe use of selectional restrictions weighs heavily in aibased work that relies on full parsing frames semantic networks the application of selectional preferences etcin other work syntax is combined with frequent collocation information kelley and stone dahlgren and atkins combine collocation information with rules for determining for example the presence or absence of determiners pronouns noun complements as well as prepositions subjectverb and verbobject relationsmore recently researchers have avoided complex processing by using shallow or partial parsingin her disambiguation work on nouns hearst segments text into noun phrases prepositional phrases and verb groups and discards all other syntactic informationshe examines items that are within 3 phrase segments from the target and combines syntactic evidence with other kinds of evidence such as capitalizationyarowsky determines various behaviors based on syntactic category for example that verbs derive more disambiguating information from their objects than from their subjects adjectives derive 
almost all disambiguating information from the nouns they modify and nouns are best disambiguated by directly adjacent adjectives or nounsin recent work syntactic information most often is simply part of speech used invariably in conjunction with other kinds of information evidence suggests that different kinds of disambiguation procedures are needed depending on the syntactic category and other characteristics of the target word an idea reminiscent of the word expert approachhowever to date there has been little systematic study ide and wronis introduction of the contribution of different information types for different types of target wordsit is likely that this is a next necessary step in wsd work a given sense of a word usually within a window of several sentencesunlike microcontext which has played a role in disambiguation work since the early 1950s topical context has been less consistently usedmethods relying on topical context exploit redundancy in a textthat is the repeated use of words that are semantically related throughout a text on a given topicthus base is ambiguous but its appearance in a document containing words such as pitcher and ball is likely to isolate a given sense for that word work involving topical context typically uses the bagofwords approach in which words in the context are regarded as an unordered setthe use of topical context has been discussed in the field of information retrieval for several years recent wsd work has exploited topical context yarowsky uses a 100word window both to derive classes of related words and as context surrounding the polysemous target in his experiments using roget thesaurus voorhees leacock and towell experiment with several statistical methods using a twosentence window leacock towell and voorhees have similarly explored topical context for wsdgale church and yarowsky looking at a context of 50 words indicate that while words closest to the target contribute most to disambiguation they improved their results from 86 to 90 by expanding context from 6 to 50 words around the targetin a related study they make a claim that for a given discourse ambiguous words are used in a single sense with high probability leacock chodorow and miller challenge this claim in their work combining topical and local context which shows that both topical and local context are required to achieve consistent results across polysemous words in a text yarowsky study indicates that while information within a large window can be used to disambiguate nouns for verbs and adjectives the size of the usable window drops off dramatically with distance from the target wordthis supports the claim that both local and topical context are required for disambiguation and points to the increasingly accepted notion that different disambiguation methods are appropriate for different kinds of wordsmethods utilizing topical context can be ameliorated by dividing the text under analysis into subtopicsthe most obvious way to divide a text is by sections but this is only a gross division subtopics evolve inside sections often in unified groups of several paragraphsautomatic segmentation of texts into such units would obviously be helpful for wsd methods that use topical contextit has been noted that the repetition of words within successive segments or sentences is a strong indicator of the structure of discourse methods exploiting this observation to segment a text into subtopics are beginning to emerge in this volume leacock chodorow and miller consider the role of 
microcontext vs topical context and attempt to assess the contribution of eachtheir results indicate that for a statistical classifier microcontext is superior to topical context as an indicator of sensehowever although a distinction is made between microcontext and topical context in current wsd work it is not clear that this distinction is meaningfulit may be more useful to regard the two as lying along a continuum and to consider the role 313 domainthe use of domain for wsd is first evident in the microglossaries developed in early mt work the notion of disambiguating senses based on domain is implicit in various aibased approaches such as schank script approach to natural language processing which matched words to senses based on the context or quotscriptquot activated by the general topic of the discoursethis approach which activates only the sense of a word relevant to the current discourse domain demonstrates its limitations of this approach when used in isolation in the famous example the lawyer stopped at the bar for a drink the incorrect sense of bar will be assumed if one relies only on the information in a script concerned with law18 gale church and yarowsky claim for one sense per discourse is disputabledahlgren observes that domain does not eliminate ambiguity for some words she remarks that the noun hand has 16 senses and retains 10 of them in almost any textthe influence of domain likely depends on factors such as the type of text the relation among the senses of the target word for example in the french encyclopaedia universalis the word interet appears 62 times in the article on interestfinance in all cases in its financial sense the word appears 139 times in the article interestphilosophy and humanities in its common nonfinancial sensehowever in the article third world the word interet appears two times in each of these senses321 the bank modelmost researchers in wsd are currently relying on the sense distinctions provided by established lexical resources such as machinereadable dictionaries or wordnet because they are widely availablethe dominant model in these studies is the quotbankquot model which attempts to extend the clear delineation between bankmoney and bankriverside to all sense distinctionshowever it is clear that this convenient delineation is by no means applicable to all or even most other wordsalthough there is some psychological validity to the notion of senses lexicographers themselves are well aware of the lack of agreement on senses and sense divisions the problem of sense division has been an object of discussion since antiquity aristotle devoted a section of his topics to this subject in 350 bcsince then philosophers and linguists have continued to discuss the topic at length but the lack of resolution over 2000 years is striking322 granularityone of the foremost problems for wsd is to determine the appropriate degree of sense granularityseveral authors have remarked that the sense divisions one finds in dictionaries are often too fine for the purposes of nlp workoverly fine sense distinctions create practical difficulide and veronis introduction ties for automated wsd they introduce significant combinatorial effects they require making sense choices that are extremely difficult even for expert lexicographers and they increase the amount of data required for supervised methods to unrealistic proportionsin addition the sense distinctions made in many dictionaries are sometimes beyond those which human readers themselves are capable of makingin a 
wellknown study kilgarriff shows that it is impossible for human readers to assign many words to a unique sense in ldoce recognizing this dolan proposes a method for quotambiguatingquot dictionary senses by combining them to create grosser sense distinctionsothers have used the grosser sense divisions of thesauri such as roget however it is often difficult to assign a unique sense or even find an appropriate one among the options chen and chang propose an algorithm that combines senses in a dictionary and links them to the categories of a thesaurus combining dictionary senses does not solve the problemfirst of all the degree of granularity required is task dependentonly homograph distinction is necessary for tasks such as speech synthesis or restoration of accents in text while tasks such as machine translation require fine sense distinctionsin some cases finer than what monolingual dictionaries provide for example the english word river is translated as fleuve in french when the river flows into the ocean and otherwise as rivierethere is not however a strict correspondence between a given task and the degree of granularity requiredfor example as noted earlier the word mouse although it has two distinct senses translates into french in both cases to sourison the other hand for information retrieval the distinction between these two senses of mouse is important whereas it is difficult to imagine a reason to distinguish river river second and more generally it is unclear when senses should be combined or spliteven lexicographers do not agree fillmore and atkins identify three senses of the word risk but find that most dictionaries fail to list at least one of themin many cases meaning is best considered as a continuum along which shades of meaning fall and the points at which senses are combined or split can vary dramatically323 senses or usagesthe aristotelian idea that words correspond to specific objects and concepts was displaced in the twentieth century by the ideas of saussure and others for antoine meillet for example the sense of a word is defined only by the average of its linguistic useswittgenstein takes a similar position in his philosophische utersuchungen in asserting that there are no senses but only usages quotfor a large class of casesthough not for allin which we employ the word meaning it can be defined thus the meaning of a word is its use in the languagequot similar views are apparent in more recent theories of meaning for example bloomfield and harris for whom meaning is a function of distribution and in barwise and perry situation semantics where the sense or senses of a word are seen as an abstraction of the role that it plays systematically in the discoursethe cobuild project adopts this view of meaning by attempting to anchor dictionary senses in current usage by creating sense divisions on the basis of clusters of citations in a corpusatkins and kilgarriff also implicitly adopt the view of harris according to which each sense distinction is reflected in a distinct contexta similar view underlies the classbased methods cited in section 243 in this volume schiitze continues in this vein and proposes a technique that avoids the problem of sense distinction altogether he creates sense clusters from a corpus rather than relying on a preestablished sense list324 enumeration or generationthe development of generative lexicons provides a view of word senses that is very different from that of almost all wsd work to datethe enumerative approach assumes an a priori 
established set of senses that exist independent of contextfundamentally the aristotelian viewthe generative approach develops a discoursedependent representation of sense assuming only underspecified sense assignments until context is taken into account and bears closer relation to distributional and situational views of meaningconsidering the difficulties of determining an adequate and appropriate set of senses for wsd it is surprising that little attention has been paid to the potential of the generative view in wsd researchas larger and more complete generative lexicons become available there is merit to exploring this approach to sense assignmentgiven the variety in the studies cited throughout the previous survey it is obvious that it is very difficult to compare one set of results and consequently one method with anotherthe lack of comparability results from substantial differences in test conditions from study to studyfor instance different types of texts are involved including both highly technical or domainspecific texts where sense use is limited and general texts where sense use may be more variableit has been noted that in a commonly used corpus such as the wall street journal certain senses of typical test words such as line are absent entirelywhen different corpora containing different sense inventories and very different levels of frequency for a given word andor sense are used it becomes futile to attempt to compare resultstest words themselves differ from study to study including not only words whose assignment to clearly distinguishable senses varies considerably or which exhibit very different degrees of ambiguity but also words across different parts of speech and words that tend to appear more frequently in metaphoric metonymic and other nonliteral usages more seriously the criteria for evaluating the correctness of sense assignment varydifferent studies employ different degrees of sense granularity ranging from identification of homographs to fine sense distinctionsin addition the means by which correct sense assignment is finally judged are typically unclearhuman judges must ultimately decide but the lack of agreement among human judges is well documented amsler and white indicate that while there is reasonable consistency in sense assignment for a given expert on successive sense assignments agreement is significantly lower among expertsahlswede reports between 633 and 902 agreement among judges on his ambiguity questionnaire when faced with online sense assignment in a large corpus agreement among judges is far less and in some cases worse than chance jorgensen found the level of agreement in her experiment using data from the brown corpus to be about 68the difficulty of comparing results in wsd research has recently become a concern within the community and efforts are underway to develop strategies for evaluation of wsdgale church and yarowsky attempt to establish lower and upper bounds for evaluating the performance of wsd systems their proposal for overcoming the problem of agreement among human judges in order to establish an upper bound provides a starting point but it has not been widely discussed or implementeda recent discussion at a workshop sponsored by the acl special interest group on the lexicon on quotevaluating automatic semantic taggersquot has sparked the formation of an evaluation effort for wsd in the spirit of previous evaluation efforts such as the arpasponsored message understanding conferences and text retrieval conferences senseval will 
see its first results at a subsequent siglex workshop to be held at herstmonceux castle england in september 1998as noted above wsd is not an end in itself but rather an quotintermediate taskquot that contributes to an overall task such as information retrieval or machine translationthis opens the possibility of two types of evaluation for wsd work in vitro evaluation where wsd systems are tested independent of a given application using specially constructed benchmarks and evaluation in vivo where rather than being evaluated in isolation results are evaluated in terms of their contribution to the overall performance of a system designed for a particular application such as machine translation331 evaluation in vitroin vitro evaluation despite its artificiality enables close examination of the problems plaguing a given taskin its most basic form this type of evaluation involves comparison of the output of a system for a given input using measures such as precision and recallsenseval currently envisages this type of evaluation for wsd resultsalternatively in vitro evaluation can focus on study of the behavior and performance of systems on a series of test suites representing the range of linguistic problems likely to arise in attempting wsd considerably deeper understanding of the factors involved in the disambiguation task is required before appropriate test suites for typological evaluation of wsd results can be devisedbasic questions such as the role of part of speech in wsd the treatment of metaphor metonymy and the like in evaluation and how to deal with words of differing degrees and types of polysemy must first be resolvedsenseval will likely take us a step closer to this understanding at the least it will force consideration of what can be meaningfully regarded as an isolatable sense distinction and provide some measure of the distance between the performance of current systems and a predefined standardthe in vitro evaluation envisaged for senseval demands the creation of a manually sensetagged reference corpus containing an agreedupon set of sense distinctionsthe difficulties of attaining sense agreement even among experts have already been outlinedresnik and yarowsky have proposed that for wsd evaluation it may be practical to retain only those sense distinctions that are lexicalized crosslinguisticallythis proposal has the merit of being immediately usable but in view of the types of problems cited in the previous section systematic study of interlanguage relations will be required to determine its viability and generalityat present the apparent best source of sense distinctions is assumed to be online resources such as ldoce or wordnet although the problems of utilizing such resources are well known and their use does not address issues of more complex semantic tagging that goes beyond the typical distinctions made in dictionaries and thesauriresnik and yarowsky also point out that a binary evaluation for wsd is not sufficient and propose that errors be penalized according to a distance matrix among senses based on a hierarchical organizationfor example failure to identify homographs of bank would be penalized more severely than failure to distinguish bank as an institution from bank as a building however despite the obvious appeal of this approach it runs up against the same problem of the lack of an established agreedupon hierarchy of sensesaware of this problem resnik and yarowsky suggest creating the sense distance matrix based on results in experimental psychology such as 
miller and charles or resnik even ignoring the cost of creating such a matrix the psycholinguistic literature has made clear that these results are highly influenced by experimental conditions and the task imposed on the subjects in addition it is not clear that psycholinguistic data can be of help in wsd aimed toward practical use in nlp systemsin general wsd evaluation confronts difficulties of criteria that are similar to but orders of magnitude greater than those facing other tasks such as partofspeech tagging due to the elusive nature of semantic distinctionsit may be that at best we can hope to find practical solutions that will serve particular needs this is considered more fully in the next section332 evaluation in vivoanother approach to evaluation is to consider results insofar as they contribute to the overall performance in a particular application such as machine translation information retrieval or speech recognitionthis approach although it does not assure the general applicability of a method nor contribute to a detailed understanding of problems does not demand agreement on sense distinctions or the establishment of a pretagged corpusonly the final result is taken into consideration subjected to evaluation appropriate to the task at handmethods for wsd have evolved largely independently of particular applications especially in the recent pastit is interesting to note that few if any systems for machine translation have incorporated recent methods developed for wsd despite the importance of wsd for mt noted by weaver almost 50 years agothe most obvious effort to incorporate wsd methods into larger applications is in the field of information retrieval and the results are ambiguous krovetz and croft report only a slight improvement in retrieval using wsd methods voorhees and sanderson indicate that retrieval degrades if disambiguation is not sufficiently precisesparckjones questions the utility of any nlp technique for document retrievalon the other hand schtitze and pedersen show a marked improvement in retrieval using a method that combines searchbyword and searchbysenseit remains to be seen to what extent wsd can improve results in particular applicationshowever if meaning is largely a function of use it may be that the only relevant evaluation of wsd results is achievable in the context of specific taskswork on automatic wsd has a history as long as automated language processing generallylooking back it is striking to note that most of the problems and the basic approaches to solving them were recognized at the outsetsince so much of the early work on wsd is reported in relatively obscure books and articles across several fields and disciplines it is not surprising that recent authors are often unaware of itwhat is surprising is that in the broad sense relatively little progress seems to have been made in nearly 50 yearseven though much recent work cites results at the 90 level or better these studies typically involve very few words most often only nouns and frequently concern only broad sense distinctionsin a sense wsd work has come full circle returning most recently to empirical methods and corpusbased analyses that characterize some of the earliest attempts to solve the problemwith sufficiently greater resources and enhanced statistical methods at their disposal researchers in the 1990s have obviously improved on earlier results but it appears that we may nearly have reached the limit of what can be achieved in the current frameworkfor this reason it is especially 
timely to assess the state of wsd and consider in the context of its entire history the next directions of researchthis paper is an attempt to provide that context at least in part by bringing wsd into the perspective of the past 50 years of work on the topicwhile we are aware that much more could be added to what is presented here we have made an attempt to cover at least the major areas of work and sketch the broad lines of development in the fieldquot of course wsd is problematic in part because of the inherent difficulty of determining or even defining word sense and this is not an issue that is likely to be solved in the near futurenonetheless it seems clear that current wsd research could benefit from a more comprehensive consideration of theories of meaning and work in the area of lexical semanticsone of the obvious stumbling blocks in much recent wsd work is the rather narrow view of sense that comes handinhand with the attempt to use sense distinctions in everyday dictionaries which cannot and are not intended to represent meaning in contexta different sort of view one more consistent with current linguistic theory is required here we see the recent work using generative lexicons as providing at least a point of departureanother goal of this paper is to provide a starting point for the growing number of researchers working in various areas of computational linguistics who want to learn about wsdthere is renewed interest in wsd as it contributes to various applications such as machine translation and document retrievalwsd as quotintermediate taskquot while interesting in its own right is difficult and perhaps ultimately impossible to assess in the abstract incorporation of wsd methods into larger applications will therefore hopefully inform and enhance future workfinally if a lesson is to be learned from a review of the history of wsd it is that research can be very myopic and as a result tends to revisit many of the same issues over timethis is especially true when work on a problem has been crossdisciplinarythere is some movement toward more merging of research from various areas at least as far as language processing is con cerned spurred by the practical problems of information access that we are facing as a result of rapid technological developmenthopefully this will contribute to further progress on wsd
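As an illustration of the distance-based scoring proposed by Resnik and Yarowsky for WSD evaluation (discussed in the evaluation section above), the following sketch gives partial credit for confusions between closely related senses and full penalty for homograph-level confusions. The sense inventory, the distance values, and the scoring function are invented for illustration; they are not taken from any particular evaluation exercise.

```python
# Hypothetical sketch of distance-weighted WSD scoring: errors between
# closely related senses are penalized less than homograph confusions.
# The sense labels and distances below are invented for illustration.

SENSE_DISTANCE = {
    # bank: financial institution vs. the building it occupies vs. river bank
    ("bank/institution", "bank/building"): 0.3,   # related senses: mild penalty
    ("bank/institution", "bank/river"): 1.0,      # homograph-level: full penalty
    ("bank/building", "bank/river"): 1.0,
}

def distance(s1: str, s2: str) -> float:
    if s1 == s2:
        return 0.0
    return SENSE_DISTANCE.get((s1, s2), SENSE_DISTANCE.get((s2, s1), 1.0))

def weighted_score(gold: list[str], predicted: list[str]) -> float:
    """Average credit per item, where credit = 1 - distance(gold, predicted)."""
    credits = [1.0 - distance(g, p) for g, p in zip(gold, predicted)]
    return sum(credits) / len(credits)

if __name__ == "__main__":
    gold = ["bank/institution", "bank/river", "bank/institution"]
    pred = ["bank/building", "bank/institution", "bank/institution"]
    # Exact-match accuracy would be 1/3; the weighted score gives partial
    # credit for the institution/building confusion but none for the homograph.
    print(weighted_score(gold, pred))   # (0.7 + 0.0 + 1.0) / 3 ~= 0.567
```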
J98-1001
Introduction to the Special Issue on Word Sense Disambiguation: The State of the Art. We present a very concise survey of the history of ideas used in word sense disambiguation. In general, the various WSD approaches of the past can be divided into two types, i.e., data-based and knowledge-based approaches. We argue that word sense ambiguity is a central problem for many established HLT applications.
Using corpus statistics and WordNet relations for sense identification. Corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck. We show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora. We describe a statistical classifier that combines topical context with local cues to identify a word sense. The classifier is used to disambiguate a noun, a verb, and an adjective. A knowledge base in the form of WordNet lexical relations is used to automatically locate training examples in a general text corpus. Test results are compared with those from manually tagged training examples. An impressive array of statistical methods have been developed for word sense identification. They range from dictionary-based approaches that rely on definitions to corpus-based approaches that use only word cooccurrence frequencies extracted from large textual corpora. We have drawn on these two traditions, using corpus-based cooccurrence and the lexical knowledge base that is embodied in the WordNet lexicon. The two traditions complement each other. Corpus-based approaches have the advantage of being generally applicable to new texts, domains, and corpora without needing costly and perhaps error-prone parsing or semantic analysis. They require only training corpora in which the sense distinctions have been marked, but therein lies their weakness: obtaining training materials for statistical methods is costly and time-consuming. It is a "knowledge acquisition bottleneck." To open this bottleneck, we use WordNet lexical relations to locate unsupervised training examples. Section 2 describes a statistical classifier, TLC, that uses topical context, local context, or a combination of the two. The results of combining the two types of context to disambiguate a noun, a verb, and an adjective are presented. The following questions are discussed: When is topical context superior to local context? Is their combination superior to either type alone? Do the answers to these questions depend on the size of the training set? Do they depend on the syntactic category of the target? Manually tagged training materials were used in the development of TLC and the experiments in Section 2. The Cognitive Science Laboratory at Princeton University, with support from NSF/ARPA, is producing textual corpora that can be used in developing and evaluating automatic methods for disambiguation. Examples of the different meanings of one thousand common polysemous open-class English words are being manually tagged. The results of this effort will be a useful resource for training statistical classifiers, but what about the next thousand polysemous words, and the next? In order to identify senses of these words it will be necessary to learn how to harvest training examples automatically. Section 3 describes WordNet lexical relations and the role that monosemous
quotrelativesquot of polysemous words can play in creating unsupervised training materialstlc is trained with automatically extracted examples its performance is compared with that obtained from manually tagged training materialswork on automatic sense identification from the 1950s onward has been well summarized by hirst and dagan and itai the discussion below is limited to work that is closely related to our researchhearst represents local context with a shallow syntactic parse in which the context is segmented into prepositional phrases noun phrases and verb groupsthe target noun is coded for the word it modifies the word that modifies it and the prepositions that precede and follow itopenclass items within 3 phrase segments of the target are coded in terms of their relation to the target or their role in a construct that is adjacent to the targetevidence is combined in a manner similar to that used by the local classifier component of tlcwith supervised training of up to 70 sentences per sense performance on three homographs was quite good with fewer training examples and semantically related senses performance on two additional words was less satisfactory gale church and yarowsky developed a topical classifier based on bayesian decision theorythe only information the classifier uses is an unordered list of words that cooccur with the target in training examplesno other cues such as partofspeech tags or word order are usedleacock towel and voorhees compared this bayesian classifier with a content vector classifier as used in information retrieval and a neural network with backpropagationthe classifiers were compared using different numbers of senses and different amounts of training material on the sixsense task the classifiers averaged 74 correct answersleacock towel and voorhees found that the response patterns of the three classifiers converged suggesting that each of the classifiers was extracting as much data as is available in purely topical approaches that look only at word counts from training examplesif this is the case any technique that uses only topical information will not be significantly more accurate than the three classifiers testedleacock towell and voorhees showed that performance of the content vector topical classifier could be improved with the addition of local templates specific word patterns that were recognized as being indicative of a particular sense in an extension of an idea initially suggested by weiss although the templates proved to be highly reliable when they occurred all too often none were foundyarowsky also found that templatelike structures are very powerful indicators of sensehe located collocations by looking at adjacent words or at the first word to the left or right in a given part of speech and found that with binary ambiguity a word has only one sense in a given collocation with a probability of 90991 however he had an average of only 29 recall when local information occurred it was highly reliable but all too often it did not occurbruce and wiebe have developed a classifier that represents local context by morphology the syntactic category of words within a window of 2 words from the target and collocationspecific items found in the sentencethe collocationspecific items are those determined to be the most informative where an item is considered informative if the model for independence between it and a sense tag provided a poor fit to the training datathe relative probabilities of senses available from the training corpus are used in the 
decision process as prior probabilitiesfor each test example the evidence in its local context is combined in a bayesiantype model of the probability of each sense and the most probable sense is selectedperformance ranges from 7784 correct on the test words where a lower bound for performance based on always selecting the most frequent sense for the same words would yield 5380 correctyarowsky building on his earlier work designed a classifier that looks at words within k positions from the target lemma forms are obtained through morphological analysis and a coarse partofspeech assignment is performed by dictionary lookupcontext is represented by collocations based on words or parts of speech at specific positions within the window or less specifically in any positionalso coded are some special classes of words such as weekday that might serve to distinguish among word sensesfor each type of localcontext evidence found in the corpus a loglikelihood ratio is constructed indicating the strength of the evidence for one form of the homograph versus the otherthese ratios are then arranged in a sorted decision list with the largest values firsta decision is made for a test sentence by scanning down the decision list until a match is foundthus only the single best piece of evidence is usedthe classifier was tested on disambiguating the homographs that result from accent removal in spanish and french in tests with the number of training examples ranging from a few hundred to several thousand overall accuracy was high above 90clearly sense identification is an active area of research and considerable ingenuity is apparentbut despite the promising results reported in this literature the reality is that there still are no largescale operational systems for tagging the senses of words in textthe statistical classifier tlc uses topical context local context or a combination of the two for word sense identificationtlc flexibility in using both forms is an important asset for our investigationsa noun a verb and an adjective were tested in this studytable 1 provides a synonym or brief gloss for each of the senses usedtraining corpora and testing corpora were collected as follows wall street journal corpus and from the american printing house for the blind corpusexamples for hard were taken from the ldc san jose mercury news corpuseach consisted of the sentence containing the target and one sentence preceding itthe resulting strings had an average length of 49 items2examples where the target was the head of an unambiguous collocation were removed from the filesbeing unambiguous they do not need to be disambiguatedthese collocations for example product line and hard candy were found using wordnetin section 3 we consider how they can be used for unsupervised trainingexamples where the target was part of a proper noun were also removed for example japan air lines was not taken as an example of line first 25 50 100 and 200 examples of the least frequent sense and examples from the other senses in numbers that reflected their relative frequencies in the corpusas an illustration in the smallest training set for hard there were 25 examples of the least frequent sense 37 examples of the second most frequent sense and 256 examples of the most frequent sensethe test sets were of fixed size each contained 150 of the least frequent sense and examples of the other senses in numbers that reflected their relative frequenciesthe operation of tlc consists of preprocessing training and testingduring preprocessing examples 
are tagged with a partofspeech tagger special tags are inserted at sentence breaks and each openclass word found in wordnet is replaced with its base formthis step normalizes across morphological variants without resorting to the more drastic measure of stemmingmorphological information is not lost since the partofspeech tag remains unchangedtraining consists of counting the frequencies of various contextual cues for each sensetesting consists of taking a new example of the polysemous word and computing the most probable sense based on the cues present in the context of the new itema comparison is made to the sense assigned by a human judge and the classifier decision is scored as correct or incorrecttlc uses a bayesian approach to find the sense s that is the most probable given the cues ci contained in a context window of k positions around the polysemous target wordfor each s the probability is computed with bayes rule as golding points out the term p is difficult to estimate because of the sparse data problem but if we assume as is often done that the occurrence of each cue is independent of the others then this term can be replaced with in tlc we have made this assumption and have estimated p from the trainingof course the sparse data problem affects these probabilities too and so tlc uses the goodturing formula to smooth the values of p including providing probabilities for cues that did not occur in the trainingtlc actually uses the mean of the goodturing value and the trainingderived value for pwhen cues do not appear in training it uses the mean of the goodturing value and the global probability of the cue p obtained from a large text corpusthis approach to smoothing has yielded consistently better performance than relying on the goodturing values alone tuationfor this cue type p is the probability that item cl appears precisely at location j for sense sipositions j 2 1 12 are usedthe global probabilities for example p are based on counts of closedclass items found at these positions relative to the nouns in a large text corpusthe local window width of 2 was selected after pilot testing on the semantically tagged brown corpusas in above the local window does not extend beyond a sentence boundary4partofspeech tags in the positions j 2 10 12 are also used as cuesthe probabilities for these tags are computed for specific positions in a manner similar to that described in abovewhen tlc is configured to use only topical information cue type is employedwhen it is configured for local information cue types and are usedfinally in combined mode the set of cues contains all four types23 results figures 1 to 3 show the accuracy of the classifier as a function of the size of the training set when using local context topical context and a combination of the two averaged across three runs for each training setto the extent that the words used are representative some clear differences appear as a function of syntactic categorywith the verb serve local context was more reliable than topical context at all levels of training the combination of local and topical context showed improvement over either form alone with the adjective hard local context was much more reliable as an indicator of sense than topical context for all training sizes and the combined classifier performance was the same as for local in the case of the noun line topical was slightly better than local at all set sizes but with 200 training examples their combination yielded 84 accuracy greater than either topical or local alone 
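As a concrete illustration of the Bayesian cue combination described above, here is a minimal sketch restricted to topical (open-class word) cues. It is not the TLC implementation: add-one smoothing stands in for the paper's Good-Turing scheme, local and part-of-speech cues are omitted, and the toy training data are invented.

```python
import math
from collections import Counter, defaultdict

class TinyTLC:
    """Toy naive Bayes sense classifier over topical (open-class word) cues.
    Add-one smoothing stands in for the Good-Turing scheme used in the paper."""

    def fit(self, examples):  # examples: list of (list_of_context_words, sense)
        self.sense_counts = Counter(sense for _, sense in examples)
        self.cue_counts = defaultdict(Counter)
        self.vocab = set()
        for words, sense in examples:
            self.cue_counts[sense].update(words)
            self.vocab.update(words)
        self.total = sum(self.sense_counts.values())
        return self

    def predict(self, words):
        def log_prob(sense):
            lp = math.log(self.sense_counts[sense] / self.total)   # prior P(s)
            denom = sum(self.cue_counts[sense].values()) + len(self.vocab)
            for w in words:                                         # P(c_i | s)
                lp += math.log((self.cue_counts[sense][w] + 1) / denom)
            return lp
        return max(self.sense_counts, key=log_prob)

# Invented toy training contexts for three senses of the noun "line".
train = [
    (["queue", "wait", "people", "formation"], "line/formation"),
    (["products", "company", "new", "clothing"], "line/product"),
    (["read", "text", "poem", "written"], "line/text"),
]
clf = TinyTLC().fit(train)
print(clf.predict(["the", "company", "launched", "a", "new", "clothing", "range"]))
```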
to summarize local context was more reliable than topical context as an indicator of sense for this verb and this adjective but slightly less reliable for this nounthe combination of local and topical context showed improved or equal performance for all three wordsperformance for all of the classifiers improved with increased training sizeall classifiers performed best with at least 200 training examples per sense but the learning curve tended to level off beyond a minimum 100 training examplesthese results are consistent with those of yarowsky based on his experiments with pseudowords homophones and homonyms he observed that performance for verbs and adjectives dropped sharply as the window increased while distant context remained useful for nounsthus one is tempted to conclude that nouns depend more on topic than do verbs and adjectivesbut such a conclusion is probably an overgeneralization inasmuch as some noun senses are clearly nontopicalthus leacock towell and voorhees found that some senses of the noun line are not susceptible to disambiguation with topical contextfor example the textual sense of line can appear with any topic whereas the product sense of line cannotwhen it happens that a nontopical sense accounts for a large proportion of occurrences then adding topical context to local will have little benefit and may even reduce accuracyone should not conclude from these results that the topical classifiers and tlc are inferior to the classifiers reviewed in section 2in our experiments monosemous collocations in wordnet that contain the target word were systematically removed from the training and testing materialsthis was done on the assumption that these words are not ambiguousremoving them undoubtedly made the task more difficult than it would normally behow much more difficultan estimate is possiblewe classifier performance on four senses of the verb servepercentage accounted for by most frequent sense 41 searched through 7000 sentences containing line and found 1470 sentences contained line as the head of a monosemous collocation in wordnet ie line could be correctly disambiguated in some 21 of those 7000 sentences simply on the basis of the wordnet entries in which it occurredin other words if these sentences had been included in the experimentand had been identified by automatic lookupoverall accuracy would have increased from 83 to 87using topical context alone tlc performs no worse than other topical classifiersleacock towell and voorhees report that the three topical classifiers tested averaged 74 accuracy on six senses of the noun linewith these same training and testing data tlc performed at 73 accuracysimilarly when the content vector and neural network classifiers were run on manually tagged training and testing examples of the verb serve they averaged 74 accuracyas did tlc using only topical contextwhen local context is combined with topical tlc is superior to the topical classifiers compared in the leacock towel and voorhees studyjust how useful is a sense classifier whose accuracy is 85 or lessprobably not very useful if it is part of a fully automated nlp application but its performance might be adequate in an interactive application in fact when recall does not have to be 100 the precision of the classifier can be improved considerablythe classifier described above always selects the sense that has the highest probabilitywe have observed that when classifier performance on three senses of the adjective hardpercentage accounted for by most frequent sense 80 the 
difference between the probability of this sense and that of the second highest is relatively small the classifier choice is often incorrectone way to improve the precision of the classifier though at the price of reduced recall is to identify these situations and allow it to respond do not know rather than forcing a decisionwhat is needed is a measure of the difference in the probabilities of the two sensesfollowing the approach of dagan and itai we use the log of the ratio of the probabilities in for this purposebased on this value a threshold e can be set to control when the classifier selects the most probable sensefor example if e 2 then ln must be 2 or greater for a decision to be madedagan and itai also describe a way to make the threshold dynamic so that it adjusts for the amount of evidence used to estimate pi and p2the basic idea is to create a onetailed confidence interval so that we can state with probability 1 a that the true value of the difference measure is greater than owhen the amount of evidence is small the value of the measure must be larger in order to insure that e is indeed exceededtable 2 shows precision and recall values for serve hard and line at eight different settings of 0 using a 60 confidence intervaltlc was first trained on 100 examples of each sense and it was then tested on separate 100example setsin all cases precision was positively correlated with the square root of 0 and recall was negatively correlated with the square root of 0 as crossvalidation the equations of the lines that fit the precision and recall results on the test sample were used to predict the precision and recall at the various values of 0 on a second test samplethey provided a good fit to the new data accounting for an average of 93 of the variancethe standard errors of estimate for hard serve and line were 028 030 and 029 for precision and 053 068 and 041 for recallthis demonstrates that it is possible to produce accurate predictions of precision and recall as a function of for new test setswhen the threshold is set to a large value precision approaches 100the criterion thus provides a way to locate those cases that can be identified automatically with very high accuracywhen tlc uses a high criterion for assigning senses it can be used to augment the training examples by automatically collecting new examples from the test corpusin summary the results obtained with tlc support the following preliminary conclusions improvement with training levels off after about 100 training examples for the least frequent sense the high predictive power of local context for the verb and adjective indicate that the local parameters effectively capture syntactically mediated relations eg the subject and object or complement of verbs or the noun that an adjective modifies nouns may be more quottopicalquot than verbs and adjectives and therefore benefit more from the combination of topical and local context the precision of tlc can be considerably improved at the price of recall a tradeoff that may be desirable in some interactive nlp applicationsa final observation we can make is that when topical and local information is combined what we have called quotnontopical sensesquot can reduce overall accuracyfor example the textual sense of line is relatively topicindependentthe results of the line experiment were not affected too adversely because the nontopical sense of line accounted for only 10 of the training examplesthe effects of nontopical senses will be more serious when most senses are nontopical as 
in the case of many adjectives and verbsthe generality of these conclusions must of course be tested with additional words which brings us to the problem of obtaining training and testing corporaon one hand it is surprising that a purely statistical classifier can quotlearnquot how to identify a sense of a polysemous word with as few as 100 example contextson the other hand anyone who has manually built such sets knows that even collecting 100 examples of each sense is a long and tedious processthe next section presents one way in which the lexical knowledge in wordnet can be used to extract training examples automaticallycorpusbased word sense identifiers are data hungryit takes them mere seconds to digest all of the information contained in training materials that take months to prepare manuallyso although statistical classifiers are undeniably effective they are not feasible until we can obtain reliable unsupervised training datain the gale church and yarowsky study training and testing materials were automatically acquired using an aligned frenchenglish bilingual corpus by searching for english words that have two different french translationsfor example english tokens of sentence were translated as either peine or phrasethey collected contexts of sentence translated as peine to build a corpus for the judicial sense and collected contexts of sentence translated as phrase to build a corpus for the grammatical senseone problem with relying on bilingual corpora for data collection is that bilingual corpora are rare and aligned bilingual corpora are even rareranother is that since french and english are so closely related different senses of polysemous english words often translate to the same french wordfor example line is equally polysemous in french and englishand most senses of line translate into french as ligneseveral artificial techniques have been used so that classifiers can be developed and tested without having to invest in manually tagging the data yarowsky and schtitze have acquired training and testing materials by creating pseudowords from existing nonhomographic formsfor example a pseudoword was created by combining abusedescortedexamples containing the string escorted were collected to train on one sense of the pseudoword and examples containing the string abused were collected to train on the other sensein addition yarowsky used homophones and yarowsky created homographs by stripping accents from french and spanish wordsalthough these latter techniques are useful in their own right the resulting materials do not generalize to the acquisition of tagged training for real polysemous or even homographic wordsthe results of disambiguation strategies reported for pseudowords and the like are consistently above 95 overall accuracy far higher than those reported for disambiguating three or more senses of polysemous words yarowsky used a thesaurus to collect training materialshe tested the unsupervised training materials on 12 nouns with almost perfect results on homonyms 72 accuracy for four senses of interest and 77 on three senses of conethe training was collected in the following mannertake a roget categoryhis examples were tool and animaland collect sentences from a corpus using the words in each categoryconsider the noun crane which appears in both the roget categories tool and animalto represent the tool category yarowsky extracted contexts from groier encyclopediafor example contexts with the words adz shovel crane sickle and so onsimilarly he collected sentences with 
names of animals from the animal categoryin these samples crane and drill appeared under both categoriesyarowsky points out that the resulting noise will be a problem only when one of the spurious senses is salient dominating the training set and he uses frequencybased weights to minimize these effectswe propose to minimize spurious training by using monosemous words and collocationson the assumption that if a word has only one sense in wordnet it is monosemousschtitze developed a statistical topical approach to word sense identification that provides its own automatically extracted training examplesfor each occurrence t of a polysemous word in a corpus a context vector is constructed by summing all the vectors that represent the cooccurrence patterns of the openclass words in t context these context vectors are clustered and the centroid of each cluster is used to represent a quotsensequot when given a new occurrence of the word a vector of the words in its context is constructed and this vector is compared to the sense representations to find the closest matchschulze has used the method to disambiguate pseudowords homographs and polysemous wordsperformance varies depending in part on the number of clusters that are created to represent senses and on the degree to which the distinctions correspond to different topicsthis approach performs very well especially with pseudowords and homographshowever there is no automatic means to map the sense representations derived from the system onto the more conventional word senses found in dictionariesconsequently it does not provide disambiguated examples that can be used by other systemsyarowsky has proposed automatically augmenting a small set of experimentersupplied seed collocations into a much larger set of training materialshe resolved the problem of the sparseness of his collocations by iteratively bootstrapping acquisition of training materials from a few seed collocations for each sense of a homographhe locates examples containing the seeds in the corpus and analyzes these to find new predictive patterns in these sentences and retrieves examples containing these patternshe repeats this step iterativelyresults for the 12 pairs of homographs reported are almost perfectin his paper yarowsky suggests wordnet as a source for the seed collocationsa suggestion that we pursue in the next sectionwordnet is particularly well suited to the task of locating senserelevant context because each word sense is represented as a node in a rich semantic lexical network with synonymy hyponymy and meronymy links to other words some of them polysemous and others monosemousthese lexical quotrelativesquot provide a key to finding relevant training sentences in a corpusfor example the noun suit is polysemous but one sense of it has business suit as a monosemous daughter and another has legal proceeding as a hypernymby collecting sentences containing the unambiguous nouns business suit and legal proceeding we can build two corpora of contexts for the respective senses of the polysemous wordall the systems described in section 21 could benefit from the additional training materials that monosemous relatives can providethe wordnet online lexical database has been developed at princeton university over the past 10 yearslike a standard dictionary wordnet contains the definitions of wordsit differs from a standard dictionary in that instead of being organized alphabetically wordnet is organized conceptuallythe basic unit in wordnet is a synonym set or synset which 
represents a lexicalized conceptfor example wordnet version 15 distinguishes between two senses of the noun shot with the synsets shot snapshot and shot injectionin the context quotthe photographer took a shot of maryquot the word snapshot can be substituted for one sense of shotin the context quotthe nurse gave mary a flu shotquot the word injection can be substituted for another sense of shotnouns verbs adjectives and adverbs are each organized differently in wordnetall are organized in synsets but the semantic relations among the synsets differ depending on the grammatical category as can be seen in table 3nouns are organized in a hierarchical tree structure based on hypernymyhyponymythe hyponym of a noun is its subordinate and the relation between a hyponym and its hypernym is an is a kind of relationfor example maple is a hyponym of tree which is to say that a maple is a kind of treehypernymy and its inverse hyponymy are transitive semantic relations between synsetsmeronymy and its inverse holortymy are complex semantic relations that distinguish component parts substantive parts and member partsthe verbal hierarchy is based on troponymy the is a manner of relationfor example stroll is a troponym of walk which is to say that strolling is a manner of walkingentailment relations between verbs are also coded in wordnetthe organization of attributive adjectives is based on the antonymy relationwhere direct antonyms exist adjective synsets point to antonym synsetsa head adjective is one that has a direct antonym many adjectives like sultry have no direct antonymswhen an adjective has no direct antonym its synset points to a head that is semantically similar to itthus sultry and torrid are similar in meaning to hot which has the direct antonym of coldso although sultry has no direct antonym it has cold as its indirect antonymrelational adjectives do not have antonyms instead they point to nounsconsider the difference between a nervous disorder and a nervous studentin the former nervous pertains to a noun as in nervous system whereas the latter is defined by its relation to other adjectivesits synonyms and antonyms adverbs have synonymy and antonymy relationswhen the adverb is morphologically related to an adjective and semantically related to the adjective as well the adverb points to the adjectivewe have had some success in exploiting wordnet semantic relations for word sense identificationsince the main problem with classifiers that use local context is the sparseness of the training data leacock and chodorow used a proximity measure on the hypernym relation to replace the subject and complement of the verb serve in the testing examples with the subject and complement from training examples that were quotclosestquot to them in the noun hierarchyfor example one of the test sentences was quotsauerbraten is usually served with dumplingsquot where neither sauerbraten nor dumpling appeared in any training sentencethe similarity measures on wordnet found that sauerbraten was most similar to dinner in the training and dumpling to baconthese nouns were substituted for the novel ones in the test setsthus the sentence quotdinner is usually served with baconquot was substituted for the original sentenceaugmentation of the local context classifier with wordnet similarity measures showed a small but consistent improvement in the classifier performancethe improvement was greater with the smaller training setsresnik uses an informationbased measure the most informative class on the wordnet taxonomya 
class consists of the synonyms found at a node and the synonyms at all the nodes that it dominates based on verbobject pairs collected from a corpus resnik found for example that the objects for the verb open fall into two classes receptacle and oral communicationconversely the class of a verb object could be used to determine the appropriate sense of that verbthe experiments in the next section depend on a subset of the wordnet lexical relations those involving monosemous relatives so we were interested in determining just what proportion of word senses have such relativeswe examined 8500 polysemous nouns that appeared in a moderatesize 25millionword corpusin all these 8500 nouns have more than 24000 wordnet sensesrestricting the relations to synonyms immediate hyponyms and immediate hypernyms we found that about 64 have monosemous relatives attested in the corpuswith larger corpora and more lexical relations this percentage can be expected to increasethe approach we have used is related to that of yarowsky in that training materials are collected using a knowledge base but it differs in other respects notably in the selection of training and testing materials the choice of a knowledge base and use of both topical and local classifiersyarowsky collects his training and testing materials from a specialized corpus grolier encyclopediait remains to be seen whether a statistical classifier trained on a topically organized corpus such as an encyclopedia will perform in the same way when tested on general unrestricted text such as newspapers periodicals and booksone of our goals is to determine whether automatic extraction of training examples is feasible using general corporain his experiment yarowsky uses an updated online version of roget thesaurus that is not generally available to the research communitythe only generally available version of roget is the 1912 edition which contains many lexical gapswe are using wordnet which can be obtained via anonymous ftpyarowsky classifier is purely topical but we also examine local contextfinally we hope to avoid inclusion of spurious senses by using monosemous relativesin this experiment we collected monosemous relatives of senses of 14 nounstraining sets are created in the following mannera program called autotrain retrieves from wordnet all of the monosemous relatives of a polysemous word sense samples and retrieves example sentences containing these monosemous relatives from a 30millionword corpus of the san jose mercury news and formats them for tlcthe sampling process retrieves the quotclosestquot relatives firstfor example suppose that the system is asked to retrieve 100 examples for each sense of the noun courtthe system first looks for the strongest or toplevel relatives for monosemous synonyms of the sense and for daughter collocations that contain the target word as the head and tallies the number of examples in the corpus for eachif the corpus has 100 or more examples for these toplevel relatives it retrieves a sampling of them and formats them for tlcif there are not enough toplevel examples the remainder of the target monosemous relatives are inspected in the order all other daughters hyponym collocations that contain the target all other hyponyms hypernyms and finally sistersautotrain takes as broad a sampling as possible across the corpus and never takes more than one example from an articlethe number of examples for each relative is based on the relative proportion of its occurrences in the corpustable 4 shows the monosemous relatives 
that were used to train five senses of the noun linethe monosemous relatives of the sixth sense in the original study line as an abstract division are not attested in the sim corpusthe purpose of the experiment was to see how well tlc performed using unsupervised training and when possible to compare this with its performance when training on the manually tagged materials being produced at princeton cognitive science laboratorywhen a sufficient number of examples for two or more senses were available 100 examples of each sense were set aside to use in trainingthe remainder were used for testingonly the topical and local openclass cues were used since preliminary tests showed that performance declined when using local closedclass and partofspeech cues obtained from the monosemous relativesthis is not surprising as many of the relatives are collocations whose local syntax is quite different from that of the polysemous word in its typical usagefor example the formation sense of line is often followed by an ofphrase as in a line of children but its relative picket line is notprior probabilities for the sense were taken from the manually tagged materialstable 5 shows the results when tlc was trained on monosemous relatives and on manually tagged training materialsbaseline performance is when the classifier always chooses the most frequent senseeight additional words had a sufficient number of manually tagged examples for testing but not for training tlcthese are shown in table 6for four of the examples in table 5 training with relatives produced results within 1 or 2 of manually tagged trainingline and work however showed a substantial decrease in performancein the case of line this might be due to overly specific training contextsalmost half of the training examples for the formation sense of line come from one relative picket linein fact all of the monosemous relatives except for rivet line and trap line are human formationsthis may have skewed training so that the classifier performs poorly on other uses of line as formationin order to compare our results with those reported in yarowsky we trained and tested on the same two senses of the noun duty that yarowsky had tested he reported that his thesaurusbased approach yielded 96 precision with 100 recalltlc used training examples based on monosemous wordnet relatives and correctly identified the senses with 935 precision at 100 recalltable 6 shows tlc performance on the other eight words after training with monosemous relatives and testing on manually tagged examplesperformance is about the same as or only slightly better than the highest prior probabilityin part this is due to the rather high probability of the most frequent sense for this setthe values in the table are based on decisions made on all test examplesif a threshold is set for tlc precision of the classifier can be increased substantially at the expense of recalltable 7 shows recall levels when tlc is trained on monosemous relatives and the value of e is set for 95 precisionoperating in this mode the classifier can gather new training materials automatically and with high precisionthis is a particularly good way to find clear cases of the most frequent sensethe results also show that not all words are well suited to this kind of operationlittle can be gained for a word like work where the two senses activity and product are closely related and therefore difficult for the classifier to distinguish due to a high degree of overlap in the training contextsproblems of this sort can be 
detected even before testing by computing correlations between the vectors of openclass words for the different sensesthe cosine correlation between the activity and product senses of work is are 49 indicating a high degree of overlapthe mean correlation between pairs of senses for the other words in table 7 is are 31our evidence indicates that local context is superior to topical context as an indicator of word sense when using a statistical classifierthe benefits of adding topical to local context alone depend on syntactic category as well as on the characteristics of the individual wordthe three words studied yielded three different patterns a substantial benefit for the noun line slightly less for the verb serve and none for the adjective hardsome word senses are simply not limited to specific topics and appear freely in many different domains of discoursethe existence of nontopical senses also limits the applicability of the quotone sense per discoursequot generalization of gale church and yarowsky who observed that within a document a repeated word is almost always used in the same sensefuture work should be directed toward developing methods for determining when a word has a nontopical senseone approach to this problem is to look for a word that appears in many more topical domains than its total number of sensesbecause the supply of manually tagged training data will always be limited we propose a method to obtain training data automatically using commonly available materials exploiting wordnet lexical relations to harvest training examples from ldc corpora or even the world wide webwe found this method to be effective although not as effective as using manually tagged trainingwe have presented the components of a system for acquiring unsupervised training materials that can be used with any statistical classifierthe components can be fit together in the following mannerfor a polysemous word locate the monosemous relatives for each of its senses in wordnet and extract examples containing these relatives from a large corpussenses whose contexts greatly overlap can be identified with a simple cosine correlationoften correlations are high between senses of a word that are systematically related as we saw for the activity and product senses of workin some cases the contexts for the two closely related senses may be combinedsince the frequencies of the monosemous relatives do not correlate with the frequencies of the senses prior probabilities must be estimated for classifiers that use themin the experiments of section 32 these were estimated from the testing materialsthey can also be estimated from a small manually tagged sample such as the parts of the brown corpus that have been tagged with senses in wordnetwhen the threshold is set to maximize precision the results are highly reliable and can be used to support an interactive application such as machineassisted translation with the goal of reducing the amount of interactionalthough we have looked at only a few examples it is clear that given wordnet and a large enough corpus the methods outlined for training on monosemous relatives can be generalized to build training materials for thousands of polysemous wordswe are indebted to the other members of the wordnet group who have provided advice and technical support christiane fellbaum shari landes and randee tengiwe are also grateful to paul bagyenda ben johnsonlaird and joshua schecterwe thank scott wayland tim allison and jill hollifield for tagging the serve and hard corporafinally 
we are grateful to the three anonymous CL reviewers for their comments and advice. This material is based upon work supported in part by the National Science Foundation under NSF award No. IRI-9528983 and by the Defense Advanced Research Projects Agency grant No. N00014-91-1634.
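The monosemous-relatives procedure described above can be sketched with a present-day WordNet interface. The sketch below assumes NLTK's WordNet reader as a stand-in for the WordNet 1.5 tools and the AutoTrain program; it only gathers candidate relatives (synonyms, immediate hyponyms, immediate hypernyms) and does not perform the corpus sampling or frequency-based weighting described in the paper.

```python
# Sketch: gather monosemous relatives of each sense of a polysemous noun,
# using NLTK's WordNet as a stand-in for the WordNet 1.5 tools in the paper.
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def is_monosemous(lemma_name: str, pos) -> bool:
    return len(wn.synsets(lemma_name, pos=pos)) == 1

def monosemous_relatives(word: str, pos=wn.NOUN):
    relatives = {}
    for synset in wn.synsets(word, pos=pos):
        candidates = set()
        # synonyms in the same synset, then immediate hyponyms and hypernyms
        for related in [synset] + synset.hyponyms() + synset.hypernyms():
            for lemma in related.lemmas():
                name = lemma.name()
                if name.lower() != word and is_monosemous(name, pos):
                    candidates.add(name.replace("_", " "))
        relatives[synset.name()] = sorted(candidates)
    return relatives

if __name__ == "__main__":
    for sense, rels in monosemous_relatives("line").items():
        print(sense, "->", rels[:5])
```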
J98-1006
Using corpus statistics and WordNet relations for sense identification. Corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck. We show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora. We describe a statistical classifier that combines topical context with local cues to identify a word sense. The classifier is used to disambiguate a noun, a verb, and an adjective. A knowledge base in the form of WordNet lexical relations is used to automatically locate training examples in a general text corpus. Test results are compared with those from manually tagged training examples. We present a method to obtain sense-tagged examples using monosemous relatives.
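The precision/recall trade-off controlled by the threshold on the log probability ratio of the top two senses (following Dagan and Itai, as described in the paper above) can be illustrated with a small sketch. The posteriors below are invented, and the dynamic, confidence-interval-adjusted threshold is not reproduced.

```python
import math

def threshold_decision(sense_probs: dict[str, float], theta: float):
    """Return the top sense if ln(p1/p2) >= theta, otherwise None ('do not know')."""
    ranked = sorted(sense_probs.items(), key=lambda kv: kv[1], reverse=True)
    (s1, p1), (_, p2) = ranked[0], ranked[1]
    return s1 if math.log(p1 / p2) >= theta else None

def precision_recall(decisions, gold):
    answered = [(d, g) for d, g in zip(decisions, gold) if d is not None]
    correct = sum(1 for d, g in answered if d == g)
    precision = correct / len(answered) if answered else 0.0
    recall = len(answered) / len(gold)   # recall here = proportion of items answered
    return precision, recall

# Invented toy posteriors: raising theta withholds the uncertain second case,
# trading recall for precision.
posteriors = [{"hard/difficult": 0.80, "hard/firm": 0.20},
              {"hard/difficult": 0.55, "hard/firm": 0.45}]
gold = ["hard/difficult", "hard/firm"]
for theta in (0.0, 1.0):
    decisions = [threshold_decision(p, theta) for p in posteriors]
    print(theta, precision_recall(decisions, gold))
```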
A corpus-based investigation of definite description use. We present the results of a study of the use of definite descriptions in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. The most interesting result of this study, from a corpus annotation perspective, was the rather low agreement (.63) that we obtained using versions of Hawkins's and Prince's classification schemes; better results (.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-new definites in our corpus and the presence of definites that did not seem to require a complete disambiguation. The work presented in this paper was inspired by the growing realization in the field of computational linguistics of the need for experimental evaluation of linguistic theories, semantic theories in our case. The evaluation we are considering typically takes the form of experiments in which human subjects are asked to annotate texts from a corpus according to a given classification scheme, and the agreement among their annotations is measured. These attempts at evaluation are in part motivated by the desire to put these theories on a more "scientific" footing by ensuring that the semantic judgments on which they are based reflect the intuitions of a large number of speakers, but experimental evaluation is also seen as a
necessary precondition for the kind of system evaluation done for example in the message understanding initiative where the performance of a system is evaluated by comparing its output on a collection of texts with a standardized annotation of those texts produced by humans clearly a mucstyle evaluation presupposes an annotation scheme on which all participants agreeour own concern are semantic judgments concerning the interpretation of noun phrases with the definite article the that we will call definite descriptions following 2 these noun phrases are one of the most common constructs in english and have been extensively studied by linguists philosophers psychologists and computational linguists theories of definite descriptions such as identify two subtasks involved in the interpretation of a definite description deciding whether the definite description is related to an antecedent in the texewhich in turn may involve recognizing fairly finegrained distinctionsand if so identifying this antecedentsome of these theories have been cast in the form of classification schemes and have been used for corpus analysis 5 yet we are aware of no attempt at verifying whether subjects not trained in linguistics are capable of recognizing the proposed distinctions which is a precondition for using these schemes for the kind of largescale text annotation exercises necessary to evaluate a system performance as done in mucin the past two or three years this kind of verification has been attempted for other aspects of semantic interpretation by passonneau and litman for segmentation and by kowtko isard and doherty and carletta et al for dialogue act annotationour intention was to do the same for definite descriptionswe ran two experiments to determine how good naive subjects are at doing the form of linguistic analysis presupposed by current schemes for classifying definite descriptionsour subjects were asked to classify the definite descriptions found in a corpus of natural language texts according to classification schemes that we developed starting from the taxonomies proposed by hawkins and prince but which took into account our intention of having naive speakers perform the classificationour experiments were also designed to assess the feasibility of a system to process definite descriptions on unrestricted text and to collect data that could be used for this implementationfor both of these reasons the classification schemes that we tried differ in several respects from those adopted in prior corpusbased studies such as prince and fraurud our study is also different from these previous ones in that measuring the agreement among annotators became an issue for the experiments we used a set of randomly selected articles from the wall street journal contained in the acldci cdrom rather than a corpus of transcripts of spoken language corpora such as the hcrc maptask corpus or the trains corpus the main reason for this choice was to avoid dealing with deictic uses of definite descriptions and with phenomena such as reference failure and repaira second reason was that we intended to use computer simulations of the classification task to supplement the results of our experiments and we needed a parsed corpus for this purpose the articles we chose were all part of the penn treebank in the remainder of the paper we review two existing classification schemes in section 2 and then discuss our two classification experiments in sections 3 and 4when looking for an annotation scheme for definite descriptions one is 
faced with a wide range of optionsat one end of the spectrum there are mostly descriptive lists of definite description uses such as those in christophersen and hawkins whose only goal is to assign a classification to all uses of definite descriptionsat the other end there are highly developed formal analyses such as russell heim lobner kadmon neale barker and kamp and reyle in which the compositional contribution of definite descriptions to the meaning of an utterance as well as their truthconditional properties are spelled out in detailthese more formal analyses are concerned with questions such as the quantificational or nonquantificational status of definite descriptions and the proper treatment of presuppositions but tend to concentrate on a subset of the full range of definite description useamong the more developed semantic analyses some identify uniqueness as the defining property of definite descriptions whereas others take familiarity as the basis for the analysis we will say more about some of these analyses belowour choice of a classification scheme was dictated in part by the intended use of the annotation in part by methodological considerationsan annotation used to evaluate the performance of a system ought to identify the anaphoric connections between discourse entities this makes familiaritybased analyses more attractivefrom a methodological point of view it was important to choose an annotation scheme that would make the classification task doable by subjects not trained in linguistics and had already been applied to the task of corpus analysiswe felt that we could ask naive subjects to assign each definite description to one of a few classes and to identify its antecedent when appropriate we also wanted an annotation scheme that would characterize the whole range of definite description use so that we would not need to worry about eliminating definite descriptions from our texts because they were unclassifiablefor these reasons we chose hawkins list of definite description uses and prince taxonomy as our starting point and developed from there two slightly different annotation schemes which allowed us to see whether it was better to describe the classes to our annotators in a surfaceoriented or a semantic fashion and to evaluate the seriousness of the problems with these schemes identified in the literature the wide range of uses of definite descriptions was already highlighted in christophersen in the third chapter of his book hawkins further develops and extends christophersen listhe identifies the following classes or uses of definite descriptions anaphoric usethese are definite descriptions that cospecify with a discourse entity already introduced in the discourse6 the definite description may use the same descriptive predicate as its antecedent or any other capable of indicating the same antecedent immediate situation usesthe next two uses of definite descriptions identified by hawkins are occurrences used to refer to an object in the situation of utterancethe referent may be visible or its presence may be inferredthe visible situation use occurs when the object referred to is visible to both speaker and hearer as in the following examples hawkins classifies as immediate situation uses those definite descriptions whose referent is a constituent of the immediate situation in which the use of the definite description is located without necessarily being visible standard terminology we will use the term referent to indicate the object in the world that is contributed 
to the meaning of an utterance by a definite descriptioneg we will say that bill clinton is the referent of a referential use of the definite description the president of the usa in 1997we will then say following sidner terminology that a definite description cospecifies with its antecedent in a text when such antecedent exists if the definite description and its antecedent denote the same objectthis is probably the most precise way of referring to the relation between an anaphoric expression and its antecedent note that two discourse entities can cospecify without referring to any object in the worldeg in the king of france is baldhe has a double chin as well he cospecifies with the king of france but this latter expression does not refer to anythinghowever since we will mostly be concerned with referential discourse entities we will often use the term corefer instead of cospecifyapart from this we have tried to avoid more complex issues of reference insofar as possible larger situation useshawkins lists two uses of definite descriptions characteristic of situations in which the speaker appeals to the hearer knowledge of entities that exist in the nonimmediate or larger situation of utteranceknowledge they share by being members of the same community for instancea definite description may rely on specific knowledge about the larger situation this is the case in which both the speaker and the hearer know about the existence of the referent as in the example below in which it is assumed that speaker and hearer are both inhabitants of halifax a town which has a gibbet at the top of gibbet street the gibbet no longer standsspecific knowledge is not however a necessary part of the meaning of larger situation uses of definite descriptionswhile some hearers may have specific knowledge about the actual individuals referred to by a definite description others may notgeneral knowledge about the existence of certain types of objects in certain types of situations is sufficienthawkins classifies those definite descriptions that depend on this knowledge as instances of general knowledge in the larger situation usean example is the following utterance in the context of a wedding such a firstmention of the bridesmaids is possible on the basis of the knowledge that weddings typically have bridesmaidsin the same way a firstmention of the bride the church service or the best man would be possibleassociative anaphoric usespeaker and hearer may have knowledge of the relations between certain objects and their components or attributes associative anaphoric uses of definite descriptions exploit this knowledgewhereas in larger situation uses the trigger is the situation itself in the associative anaphoric use the trigger is an np introduced in the discoursesome of the classes in the christophersenhawkins classification are specified in a semantic fashion other classes are defined in purely syntactic termsit is natural to ask what these uses of definite descriptions have in common from a semantic point of view for example is there a connection between the unfamiliar and unexplanatory uses of definite descriptions and the other usesmany authors including hawkins himself have attempted to go beyond the purely descriptive list just discussedone group of authors have identified uniqueness as the defining property of definite descriptionsthis idea goes back to russell and is motivated by larger situation definite descriptions such as the pope and by some cases of unexplanatory modifier use such as the first person to 
sail to americathe hypothesis was developed in recent years to address the problem of uniqueness within small situations 7 another line of research is based on the observation that many of the uses of definite descriptions listed by hawkins have one property in common the speaker is making some assumptions about what the hearer already knowsspeaking very loosely we might say that the speaker assumes that the hearer is able to quotidentifyquot the referent of the definite descriptionthis is also true of some of the uses hawkins classified as unfamiliar such as his nominal modifiers and associative clause classesattempts at making this intuition more precise include christophersen familiarity theory strawson presuppositional theory of definite descriptions hawkins location theory and its revision clark and marshall theory of definite reference and mutual knowledge as well as more formal proposals such as heim neither the uniqueness nor the familiarity approach have yet succeeded in providing a satisfactory account of all uses of definite descriptions however the theories based on familiarity address more directly the main concern of nlp system designers which is to identify the connections between discourse entitiesfurthermore the prior corpusbased studies of definite descriptions use that we are aware of are based on theories of this typefor both of these reasons we adopted semantic notions introduced in familiaritystyle accounts in designing our experimentsin particular distinctions introduced in prince taxonomyprince studied in detail the connection between a speaker writer assumptions about the hearer reader and the linguistic realization of noun phrases she criticizes as too simplistic the binary distinction between given and new discourse entities that is at the basis of most previous work on familiarity and proposes a much more detailed taxonomy of quotgivennessquotor as she calls it assumed familiaritymeant to address this problemalso prince analysis of noun phrases is closer than the christophersenhawkins taxonomy to a classification of definite descriptions on purely semantic terms for example she relates unfamiliar definites based on referentestablishing relative clauses with hawkins associative clause and associative anaphoric useshearernewheareroldone factor affecting the choice of a noun phrase according to prince is whether a discourse entity is old or new with respect to the hearer knowledgea speaker will use a proper name or a definite description when he or she assumes that the addressee already knows the entity whom the speaker is referring to as in and nine hundred people attended the instituteon the other hand if the speaker believes that the addressee does not know of sandy thompson an indefinite will be used i am waiting for it to be noon so i can call someone in californiadiscoursenewdiscourseoldin addition discourse entities can be new or old with respect to the discourse model an np may refer to an entity that has already been evoked in the current discourse or it may evoke an entity that has not been previously mentioneddiscourse novelty is distinct from hearer novelty both sandy thompson in and the someone in california mentioned in may well be discoursenew even if only the second one will be hearernewon the other hand for an entity being discourseold entails being heareroldin other words in prince theory the notion of familiarity is split in two familiarity with respect to the discourse and familiarity with respect to the hearereither type of familiarity can 
license the use of definites hawkins anaphoric uses of definite descriptions are cases of noun phrases referring to discourseold discourse entities whereas his larger situation and immediate situation uses are cases of noun phrases referring to discoursenew hearerold entitiesinferrablesthe uses of definite descriptions that hawkins called associative anaphoric such as a book the author are not discourseold or even hearerold but they are not entirely new either as hawkins pointed out the hearer is assumed to be capable of inferring their existenceprince called these discourse entities inferrablescontaining inferrablesfinally prince proposes a category for noun phrases that are like inferrables but whose connection with previous hearer knowledge is specified as part of the noun phrase itselfher example is the door of the bastille in the following example the door of the bastille was painted purpleat least three of the unfamiliar uses of hawkinsnp complements referentestablishing relative clauses and associative clausesfall into this categoryperhaps the most important question concerning a classification scheme is its coveragethe two taxonomies we have just seen are largely satisfactory in this respect but a couple of issues are worth mentioningfirst of all prince taxonomy does not give us a complete account of the licensing conditions for definite descriptionsof the uses mentioned by hawkins the unfamiliar definites with unexplanatory modifiers and np complements need not satisfy any of the conditions that license the use of definites according to prince these definites are not necessarily discourseold hearerold inferrables or containing inferrablesthese uses fall outside of clark and marshall classification as wellsecondly none of the classification schemes just discussed nor any of the alternatives proposed in the literature consider socalled generic uses of definite descriptions such as the use of the tiger in the generic sentence the tiger is a fierce animal that lives in the junglethe problem with these uses is that the very question of whether the quotreferentquot is familiar or not seems misplacedthese uses are not quotreferentialquot a problem related to the one just mentioned is that certain uses of definite descriptions are ambiguous between a referential and an attributive interpretation the sentence the first person to sail to america was an icelander for example can have two interpretations the writer may either refer to a specific person whose identity may be mutually known to both writer and reader or he or she may be simply expressing a property that is true of the first person to sail to america whoever that person happened to bethis ambiguity does not seem to be possible with all uses of definite descriptions eg pass me the salt seems only to have a referential useagain the schemes we have presented do not consider this issuethe question of how to annotate generic uses of definite descriptions or uses that are ambiguous between a referential and an attributive use will not be addressed in this papera second problem with the classification schemes we have discussed was raised by fraurud in her study of definite nps in a corpus of swedish text fraurud introduced a drastically simplified classification scheme based on two classes only subsequentmention corresponding to hawkins anaphoric definite descriptions and prince discourseold and firstmention including all other definite descriptionsfraurud simplified matters in this way because she was primarily interested in verifying 
the empirical basis for the claim that familiarity is the defining property of definite descriptions she also observed however that some of the distinctions introduced by hawkins and prince led to ambiguities of classificationfor example she observed that the reader of a swedish newspaper can equally well interpret the definite description the king in an article about sweden by reference to the larger situation or to the content of the articlewe took into account fraurud observations in designing our experiments and we will compare our results to hers belowfor our first experiment evaluating subjects performance at the classification task we developed a taxonomy of definite description uses based on the schemes discussed in the previous section preliminarily tested the taxonomy by annotating the corpus ourselves and then asked two annotators to do the same taskthis first experiment is described in the rest of this sectionwe explain first the classification we developed for this experiment then the experimental conditions and finally discuss the resultsthe annotation schemes for noun phrases proposed in the literature fall into one of two categorieson the one hand we have what we might call labeling schemes most typically used by corpus linguists which involve assigning to each noun phrase a class such as those discussed in the previous section the schemes used by fraurud and prince fall into this categoryon the other hand there are what we might call linking schemes concerned with identifying the links between the discourse entity or entities introduced by a noun phrase and other entities in the discourse the scheme used in muc6 is of this typein our experiments we tried both a pure labeling scheme and a mixed labeling and linking schemewe also tried two slightly different taxonomies of definite descriptions and we varied the way membership in a class was defined to the subjectsboth taxonomies were based on the schemes proposed by hawkins and prince but we introduced some changes in order first to find a scheme that would be easily understood by individuals without previous linguistic training and would lead to maximum agreement among the classifiers and second to make the classification more useful for our goal of feeding the results into an implementationin the first experiment we used a labeling scheme and the classes were introduced to the subjects with reference to the surface characteristics of the definite descriptionsthe taxonomy we used in this experiment is a simplification of hawkins scheme to which we made three main changesfirst of all we separated those anaphoric descriptions whose antecedents have the same descriptive content as their antecedent from other cases of anaphoric descriptions in which the association is based on more complex forms of lexical or commonsense knowledge we grouped these latter definite descriptions with hawkins associative descriptions in a class that we called associativethis was done in order to see how much need there is for complex lexical inferences in resolving anaphoric definite descriptions as opposed to simple head matchingsecondly we grouped together all the definite descriptions that introduce a novel discourse entity not associated to some previously established object in the text ie that were discoursenew in prince sensethis class that we will call larger situationunfamiliar includes both definite descriptions that exploit situational information and discoursenew definite descriptions introduced together with their links or referents 
This was done because of Fraurud's observation that distinguishing the two classes is generally difficult. Third, we did not include a class for immediate situation uses, since we assumed they would be rare in written text. (This was indeed the case, but we did observe a few instances of an interesting kind of immediate situation use: in these cases the text is describing the immediate situation in which the writer is, and the writer apparently expects the reader to reconstruct this situation.) We also introduced a separate class of idioms, including indirect references, idiomatic expressions, and metaphorical uses, and we allowed our subjects to mark definite descriptions as doubts. To summarize, the classes used in this experiment were as follows.

I. Anaphoric (same head). An example from the corpus:

... Caspar, Wyo., to drill the Bilbrey well, a 15,000-foot, $1 million-plus natural gas well. The rig was built around 1980 but has drilled only two wells, the last in 1982.

II. Associative. We assigned to this class those definite descriptions that stand in an anaphoric or associative anaphoric relation with an antecedent explicitly mentioned in the text, but that are not identified by the same head noun as their antecedent. This class includes Hawkins's associative anaphoric definite descriptions and Prince's inferrables, as well as some definite descriptions that would be classified as anaphoric by Hawkins and as textually evoked by Prince; recognizing the antecedent of these definite descriptions involves at least knowledge of lexical associations, and possibly general commonsense knowledge.

a. With all this, even the most wary oil men agree something has changed. "It does not appear to be getting worse. That in itself has got to cause people to feel a little more optimistic," says Glenn Cox, the president of Phillips Petroleum Co. Though modest, the change reaches beyond the oil patch, too.

b. Toni Johnson pulls a tape measure across the front of what was once a stately Victorian home. A deep trench now runs along its north wall, exposed when the house lurched two feet off its foundation during last week's earthquake.

c. Once inside, she spends nearly four hours measuring and diagramming each room in the 80-year-old house, gathering enough information to estimate what it would cost to rebuild it. While she works inside, a tenant returns with several friends to collect furniture and clothing. One of the friends sweeps broken dishes and shattered glass from a countertop and starts to pack what can be salvaged from the kitchen.

III. Larger situation/unfamiliar. This class includes Hawkins's larger situation uses of definite descriptions, based on specific and general knowledge, as well as his unfamiliar uses.

a. Out here on the Querecho Plains of New Mexico, however, the mood is more upbeat. Trucks rumble along the dusty roads, and burly men in hard hats sweat and swear through the afternoon sun.

First of all, we classified the definite descriptions included in 20 randomly chosen articles from the Wall Street Journal contained in the subset of the Penn Treebank corpus included in the ACL/DCI CD-ROM. All together, these articles contain 1,040 instances of definite description use; the results of our analysis are summarized in Table 1. Next, we asked two subjects to perform the same task. Our two subjects in this first experiment were graduate students in linguistics. The two subjects were given the instructions in Appendix A. They had to assign each definite description to one of the classes described in Section 3.1: (i) anaphoric, (ii) associative, (iii) larger situation/unfamiliar, and (iv) idiom. The subjects could also express (v) doubt about the classification of the
definite description. Since the classes I-III are not mutually exclusive, we instructed the subjects to resolve conflicts according to a preference ranking, i.e., to choose a class with higher preference when two classes seemed equally applicable. The ranking was (1) anaphoric, (2) larger situation/unfamiliar, and (3) associative. The annotators were given one text to familiarize themselves with the task before starting with the annotation proper. The results of the first annotator are shown in Table 2, and those of the second annotator in Table 3. As the tables indicate, the annotators assigned approximately the same percentage of definite descriptions to each of the five classes as we did; however, the classes do not always include the same elements. This can be gathered from the confusion matrix in Table 4, where an entry m_xy indicates the number of definite descriptions assigned to class x by subject A and to class y by subject B. In order to measure the agreement in a more precise way, we used the kappa statistic, recently proposed by Carletta as a measure of agreement for discourse analysis; we also used a measure of per-class agreement that we introduced ourselves. We discuss these results below, after reviewing briefly how K is computed.

3.3.2 The kappa statistic. Kappa is a test suitable for cases when the subjects have to assign items to one of a set of non-ordered classes. The test computes a coefficient K of agreement among coders which takes into account the possibility of chance agreement; it is dependent on the number of coders, the number of items being classified, and the number of choices of classes to be ascribed to items. The kappa coefficient of agreement between c annotators is defined as

K = (P(A) - P(E)) / (1 - P(E)),

where P(A) is the proportion of times the annotators agree and P(E) is the proportion of times that we would expect the annotators to agree by chance. When there is complete agreement among the raters, K = 1; if there is no agreement other than that expected by chance, K = 0. According to Carletta, in the field of content analysis, where the kappa statistic originated, K >= .8 is generally taken to indicate good reliability, whereas .68 <= K < .8 allows tentative conclusions to be drawn. We will illustrate the method for computing K proposed in Siegel and Castellan by means of an example from one of our texts, shown in Table 5. The first column in Table 5 shows the definite description being classified. The columns ASH, ASS, and LSU stand for the classification options presented to the subjects: anaphoric (same head), associative, and larger situation/unfamiliar, respectively. The numbers in each n_ij entry of the matrix indicate the number of classifiers that assigned the description in row i to the class in column j. The final column represents the percentage agreement for each definite description; we explain below how this percentage agreement is calculated. The last row in the table shows the total number of descriptions, the total number of descriptions assigned to each class, and finally the total percentage agreement for all descriptions. The equations for computing S_i, P(E), P(A), and K are shown in Table 6. In these formulas, c is the number of coders, S_i the percentage agreement for description i, m the number of categories, T the total number of classification judgments, P(E) the percentage agreement expected by chance, P(A) the total agreement, and K the kappa coefficient.

3.3.3 Value of K for the first experiment. For the first experiment, K = .68 if we count idioms as a class, and K = .73 if we take them out. The overall coefficient of agreement between the two annotators and our own analysis is K = .68 if we count idioms, K = .72 if we ignore them.

3.3.4 Per-class agreement. K gives a global measure of agreement. We also wanted to measure the agreement per class, i.e., to understand, in addition to computing the K coefficient of agreement, where annotators agreed the most and where they disagreed the most. The confusion matrix does this to some extent, but it only works for two annotators, and therefore, for example, we could not use it to measure agreement on classes between the two annotators and ourselves. We computed what we called per-class percentage of agreement for three coders by taking the proportion of pairwise agreements relative to the number of pairwise comparisons, as follows: whenever all three coders ascribe a description to the same class, we count six pairwise agreements out of six pairwise comparisons for that class; if two coders ascribe a description to class 1 and the other coder to class 2, we count two agreements in four comparisons for class 1 and no agreement (out of two comparisons) for class 2.
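To make these two measures concrete, the sketch below shows one way to compute the Siegel and Castellan style K for several coders, together with the per-class pairwise measure just described. This is our own illustration in Python; the function names and the toy data are assumptions for exposition, not part of the original study.

```python
from collections import Counter

def kappa(assignments, categories):
    """Siegel & Castellan-style K for c coders and m unordered categories.
    assignments: one list of labels (one per coder) for each item."""
    n_items, n_coders = len(assignments), len(assignments[0])
    counts = [Counter(labels) for labels in assignments]
    # S_i: proportion of agreeing (ordered) coder pairs for item i
    s = [sum(n * (n - 1) for n in c.values()) / (n_coders * (n_coders - 1))
         for c in counts]
    p_a = sum(s) / n_items                      # observed agreement P(A)
    totals = Counter()
    for c in counts:
        totals.update(c)
    p_e = sum((totals[cat] / (n_items * n_coders)) ** 2  # chance agreement P(E)
              for cat in categories)
    return (p_a - p_e) / (1 - p_e)

def per_class_agreement(assignments, categories):
    """Pairwise agreement per class: a class chosen by n of c coders on an
    item contributes n*(n-1) agreements out of n*(c-1) comparisons."""
    agree, compared = Counter(), Counter()
    n_coders = len(assignments[0])
    for labels in assignments:
        for cat, n in Counter(labels).items():
            agree[cat] += n * (n - 1)
            compared[cat] += n * (n_coders - 1)
    return {cat: agree[cat] / compared[cat]
            for cat in categories if compared[cat] > 0}

# toy data: three coders, classes as in Experiment 1
data = [["anaphoric", "anaphoric", "anaphoric"],
        ["associative", "larger situation/unfamiliar", "associative"],
        ["idiom", "idiom", "anaphoric"]]
classes = ["anaphoric", "associative", "larger situation/unfamiliar", "idiom"]
print(kappa(data, classes))
print(per_class_agreement(data, classes))
```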
The rates of agreement for each class thus obtained are presented in Table 7. The figures indicate better agreement on anaphoric (same head) and larger situation/unfamiliar definite descriptions, and worse agreement on the other classes.

3.4.1 Distribution. One of the most interesting results of this first experiment is that a large proportion of the definite descriptions in our corpus are not related to an antecedent previously introduced in the text. Surprising as it may seem, this finding is in fact just a confirmation of the results of other researchers: Fraurud reports that 60.9% of definite descriptions in her corpus of 11 Swedish texts are first-mention, i.e., do not corefer with an entity already evoked in the text; Gallaway found a distribution similar to ours in spoken child language.

3.4.2 Low agreement among annotators. The reason for this disagreement was not so much annotators' errors as the fact, already mentioned, that the classes are not mutually exclusive. The confusion matrix in Table 4 indicates that the major classes of disagreements were definite descriptions classified by annotator A as larger situation and by annotator B as associative, and vice versa. One such example is the government: this definite description could be classified as larger situation, because it refers to the government of Korea and presumably the fact that Korea has a government is shared knowledge; but it could also be classified as associative, on the predicate Koreans. We will analyze the reasons for the disagreement in more detail in relation to our second experiment, in which we also asked the annotators to indicate the antecedent of definite descriptions.

In this experiment we were able to confirm the correlation observed by Hawkins between the syntactic structure of certain definite descriptions and their classification as discourse-new. Factors that strongly suggest that a definite description is discourse-new include the presence of modifiers such as first or best, and of a complement for NPs of the form the fact that or the conclusion that. Postnominal modification of any type is also a strong indicator of discourse novelty, suggesting that most postnominal clauses serve to establish a referent in the sense discussed in the previous section. In addition, we observed a previously unreported correlation between discourse novelty and syntactic constructions such as appositions, copular constructions, and comparatives. The following examples from our corpus illustrate the correlations just mentioned. In addition, we observed a correlation between larger situation uses of definite descriptions and certain syntactic expressions and lexical items. For example, we noticed that a large number of uses of definite descriptions in the corpus used for this first experiment referred to temporal entities, such as the year or the month, or included proper names in place of the head noun or in premodifier position, as in the Querecho Plains of New Mexico and the Iran-Iraq war. Although these definite descriptions would have been classified by Hawkins as larger situation uses, in many cases they could not really be considered hearer-old or unused; what seems to be happening in these cases is that the writer assumed the reader would use information about the visual form of words, or perhaps lexical knowledge, to infer that an object of that name existed in the world. We evaluated the strength of these correlations by means of a computer simulation. The system attempts to classify the definite descriptions found in texts syntactically annotated according to the Penn Treebank format. The system classifies a definite description as unfamiliar using heuristics based on the syntactic and lexical correlations just observed, i.e., if either it includes an unexplanatory modifier, it occurs in an apposition or a copular construction, or it is modified by a relative clause or prepositional phrase. A definite description is classified as larger situation if its head noun is a temporal expression such as year or month, or if its head or premodifiers are proper names. The implementation revealed that some of the correlations are very strong: for example, the agreement between the system's classification and the annotators on definite descriptions with a nominal complement, such as the fact that, varied between 93% and 100% depending on the annotator, and on average 70% of temporal expressions such as the year were interpreted as larger situation by the annotators. All of this suggests that in using definite descriptions, writers may not just make assumptions about their readers' knowledge; they may also rely on their readers' ability to use lexical or syntactic cues to classify a definite description as discourse-new, even when these readers do not know about the particular object referred to already. This observation is consistent with Fraurud's hypothesis that interpreting definite descriptions involves two processes, deciding whether a definite description relates to some entity in the discourse or not and searching for the antecedent, and that the two processes are fairly independent. Our findings also suggest that the classification process may rely on more than just lexical cues, as Fraurud seems to assume.
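The correlations just listed translate naturally into a small rule-based filter. The sketch below is our own simplified rendering under assumed feature names and word lists; the actual system discussed in this article operates on Penn Treebank parses rather than on hand-built feature dictionaries.

```python
SPECIAL_MODIFIERS = {"first", "best"}            # assumed list; extend as needed
TEMPORAL_HEADS = {"year", "month"}               # assumed list; extend as needed
COMPLEMENT_TAKING_HEADS = {"fact", "conclusion"} # assumed list; extend as needed

def classify_discourse_new(dd):
    """dd: a dict of features extracted from a parsed definite description.
    Returns 'unfamiliar', 'larger situation', or None if no heuristic fires."""
    if (SPECIAL_MODIFIERS & set(dd["premodifiers"])
            or (dd["head"] in COMPLEMENT_TAKING_HEADS and dd["has_np_complement"])
            or dd["in_apposition"]
            or dd["in_copular_construction"]
            or dd["has_relative_clause"]
            or dd["has_pp_modifier"]):
        return "unfamiliar"
    if (dd["head"] in TEMPORAL_HEADS
            or dd["head_is_proper_name"]
            or dd["premodifier_is_proper_name"]):
        return "larger situation"
    return None

# example: "the fact that life started on Earth"
dd = {"head": "fact", "premodifiers": [], "has_np_complement": True,
      "in_apposition": False, "in_copular_construction": False,
      "has_relative_clause": False, "has_pp_modifier": False,
      "head_is_proper_name": False, "premodifier_is_proper_name": False}
print(classify_discourse_new(dd))   # -> 'unfamiliar'
```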
In order to address some of the questions raised by Experiment 1, we set up a second experiment. In this second experiment we modified both the classification scheme and what we asked the annotators to do. One concern we had in designing this second experiment was to understand better the reasons for the disagreement among annotators observed in the first experiment; in particular, we wanted to understand whether the classification disagreements reflected disagreements about the final semantic interpretation. Another difference between this new experiment and the first one is that we structured the task of deciding on a classification for a definite description around a series of questions forming a decision tree, rather than giving our subjects an explicit preference ranking. A third aspect of the first experiment we wanted to study more carefully was the distribution of definite descriptions, in particular the characteristics of the large number of definite descriptions in the larger situation/unfamiliar class. Finally, we chose truly
naive subjects to perform the classification taskin order to get a better idea of the extent of agreement among annotators about the semantic interpretation of definite descriptions we asked our subjects to indicate the antecedent in the text for the definite descriptions they classified as anaphoric or associativethis would also allow us to test how well subjects did with a linking type of classification like the one used in muc6we also replaced the anaphoric class we had in the first experiment with a broader coreferent class including all cases in which a definite description is coreferential with its antecedent whether or not the head noun was the same eg we asked the subjects to classify as coreferent a definite like the house referring back to an antecedent introduced as a victorian home which would not have counted as anaphoric in our first experimentthis resulted in a taxonomy that was at the same time more semantically oriented and closer to hawkins and prince classification schemes our broadened coreferent class coincides with hawkins anaphoric and prince textually evoked classes whereas the resulting narrower associative class coincides with hawkins associative anaphoric and prince class of inferrablesour intention was to see whether the distinctions proposed by hawkins and prince would result in a better agreement among annotators than the taxonomy used in our first experiment ie whether the subjects would be more in agreement about the semantic relation between a definite description and its antecedent than they were about the relation between the head noun of the definite description and the head noun of its antecedentthe larger situationunfamiliar class we had in the first experiment was split back into two classes as in hawkins and prince schemeswe did this to see whether indeed these two classes were difficult to distinguish we also wanted to get a clearer idea of the relative importance of the two kinds of definites that we had grouped together in the first annotationthe two classes were called larger situation and unfamiliarwe used three subjects for experiment 2our subjects were english native speakers graduate students of mathematics geography and mechanical engineering at the university of edinburgh we will refer to them as c d and e belowthey were asked to annotate 14 randomly selected wall street journal articles all but one of them different from those used in experiment 1 and containing 464 definite descriptions in tota116 unlike in our first experiment we did not suggest any relation between the classes and the syntactic form of the definite descriptions in the instructionsthe subjects were asked to indicate whether the entity referred to by a definite description had been mentioned previously in the text else if it was new but related to an entity already mentioned in the text else it was new but presumably known to the average reader or finally it was new in the text and presumably new to the average readerwhen the description was indicated as discourseold or related to some other entity the subjects were asked to locate the previous mention of the related entity in the textunlike the first experiment the subjects did not have the option of classifying a definite description as idiom we instructed them to make a choice and write down their doubtsthe written instructions and the script given to the subjects can be found in appendix bas in experiment 1 the subjects were given one text to practice before starting with the analysis of the corpusthey took on average 
eight hours to complete the taskthe distribution of definite descriptions in the four classes according to the three coders is shown in table 8we had 283 cases of complete agreement among annotators on the classification 164 cases of complete agreement on coreferential definite descriptions 7 cases of complete agreement on bridging 65 cases of complete agreement on larger situation and 47 cases of complete agreement on the unfamiliar classas in experiment 1 we measured the k coefficient of agreement among annotators the result for annotators c d and e is k 058 if we consider the definite descriptions marked as doubts k 063 if we leave them out we also measured the extent of agreement among subjects on the antecedents for coreferential and bridging definite descriptionsa total of 164 descriptions were classified as coreferential by all three coders of these 155 were taken by all coders to refer to the same entity there were only 7 definite descriptions classified by all three annotators as bridging references in 5 of these cases the three annotators also agreed on a textual antecedent 441 distribution into classesas shown in table 8 the distribution of definite descriptions among discoursenew on the one side and coreferential with bridging references one the other is roughly the same in experiment 2 as in experiment 1 and roughly the same among annotatorsthe average percentage of discoursenew descriptions is 46 against an average of 50 in the first experimenthaving split the discoursenew class in two in this experiment we got an indication of the relative importance of the hearerold and hearernew subclasses about half of the discoursenew uses fall in each of these classesbut only very approximate since the first two annotators classified the majority of these definite descriptions as larger situation whereas the last annotator classified the majority as unfamiliaras expected the broader definition of the coreferent class resulted in a larger percentage of definite descriptions being included in this class and a smaller percentage being included in the bridging reference classconsidering the difference between the relative importance of the samehead anaphora class in the first experiment and of the coreferent class in the second experiment we can estimate that approximately 15 of definite descriptions are coreferential and have a different head from their antecedents442 agreement among annotatorsthe agreement among annotators in experiment 2 was not very high 61 total agreement which gives k 058 or k 063 depending on whether we consider doubts as a classthis value is worse than the one we obtained in experiment 1 in fact this value of k goes below the level at which we can tentatively assume agreement among the annotatorsthere could be several reasons for the fact that agreement got worse in this second experimentperhaps the simplest explanation is that we were just using more classesin order to check whether this was the case we merged the classes larger situation and unfamiliar back into one class as we had in the experiment 1 that is we recomputed k after counting all definite descriptions classified as either larger situation or unfamiliar as members of the same classand indeed the agreement figures went up from k 063 to k 068 when we did so ie within the quottentativequot margins of agreement according to carletta the remaining difference between the level of agreement obtained in this experiment and that obtained in the first one might have to do with the annotators with the difficulty 
of the texts or with using a syntactic as opposed to a semantic notion of what counts as coreferential we are inclined to think that the last two explanations are more likelyfor one thing we found very few examples of true mistakes in the annotation as discussed belowsecondly we observed that the coefficient of agreement changes dramatically from text to text in this second experiment it varies from k 042 to k 092 depending on the text and if we do not count the three worst texts in the second experiment we get again k 073third going from a syntactic to a semantic definition of anaphoric definite description resulted in worse agreement both for coreferential and for bridging references looking at the perclass figures one can see that we went from a perclass agreement on anaphoric definite descriptions in experiment 1 of 88 to a perclass agreement on coreferential definites of 86 in experiment 2 and the perclass agreement for associative definite descriptions of 59 went down rather dramatically to a perclass agreement of 31 on bridging descriptionsthe good result obtained by reducing the number of classes led us to try to find a way of grouping definite descriptions into classes that would result in an even better agreementan obvious idea was to try with still fewer classes ie just twowe first tried the binary division suggested by fraurud all coreferential definite descriptions 7 on one side and all other definite descriptions on the other splitting things this way did result in an agreement of k 076 ie almost a good reliability although not quite as strong an agreement as we would have expectedthe alternative of putting in one class all discourserelated definite descriptionscoreferential and bridging referencesand putting larger situation and unfamiliar definite descriptions in a second class resulted in a worse agreement although not by much this suggests that our subjects did reasonably well at distinguishing firstmention from subsequentmention entities but not at drawing more complex distinctionsthey were particularly bad at distinguishing bridging references from other definite descriptions dividing the classifications into bridging definites on the one hand and all other definite descriptions on the other resulted in a very low agreement we obtained about the same results by computing the perclass percentage of agreement discussed in section 3the rates of agreement for each class thus obtained are presented in table 9again we find that the annotators found it easier to agree on coreferential definite descriptions harder to agree on bridging references the percentage agreement on the classes larger situation and unfamiliar taken individually is much lower than the agreement on the class larger situationunfamiliar taken as a wholethe results in table 9 confirm the indications obtained by computing agreement for a smaller number of classes our subjects agree pretty much on coreferential definite descriptions but bridging references are not a natural classwe discuss the cases of disagreement in more detail next among annotators about classification and about the identification of an antecedentthere were 29 cases of complete classification disagreement among annotators ie cases in which no two annotators classified a definite description in the same way and 144 cases of partial disagreementall four of the possible combinations of total disagreement were observed but the two most common combinations were bcu and blu all six combinations of partial disagreements were also observedas we do 
not have the space to discuss each case in detail we will concentrate on pointing out what we take to be the most interesting observations especially from the perspective of designing a corpus annotation scheme for anaphoric expressionswe found very few true mistakeswe had some problems due to the presence of idioms such as they had to pick up the slack or on the wholebut in general most of the disagreements were due to genuine problems in assigning a unique classification to definite descriptionsthe mistakes that our annotators did make were of the form exemplified by in this case all three annotators indicate the same antecedent for the definite description the rewards but whereas two of them classify the rewards as coreferential one of them classifies it as bridgingwhat seems to be happening here and in similar cases is that even though we asked the subjects to classify semantically they ended up using a notion of relatedness that is more like the notion of associative in experiment 1quotwhen we evaluated raising our bid the risks seemed substantial and persistent over the next five years and the rewards seemed a long way outquot a particularly interesting version of this problem appears in the following example when two annotators took the verb to refund as antecedent of the definite description the refund but one of them interpreted the definite as coreferential with the eventuality the other as bridgingthe refund was about 55 million more than previously ordered by the illinois commerce commission and trade groups said it may be the largest ever required of a state or local utilityas could be expected by the discussion of the k results above the most common disagreements were between the classes larger situation and unfamiliarone typical source of disagreement was the introductory use of definite descriptions common in newspapers thus for example some of our annotators would classify the illinois commerce commission as larger situation others as unfamiliarin many cases in which this form of ambiguity was encountered the definite description worked effectively as a proper name the worldwide supercomputer law the new us trade law or the face of personal computingrather surprisingly from a semantic perspective the second most common form of disagreement was between the coreferential and bridging classesin this case the problem typically was that different subjects would choose different antecedents for a certain definite descriptionthus in example the third annotator indicated 250 million as the antecedent for the refund and classified the definite description as coreferentiala similar example is in which two of the annotators classified the spinoff as bridging on spinoff cray computer corp whereas the third classified it as coreferential with the pending spinoff the survival of spinoff cray computer corp as a fledgling in the supercomputer business appears to depend heavily on the creativityand longevityof its chairman and chief designer seymour craydocuments filed with the securities and exchange commission on the pending spinoff disclosed that cray research inc will withdraw the almost 100 million in financing it is providing the new firm if mr cray leaves or if the productdesign project he heads is scrappedwhile many of the risks were anticipated when minneapolisbased cray research first announced the spinoff in may the strings it attached to the financing had not been made public until yesterdayan example of total disagreement is the following in this case we can see that all three 
interpretations are acceptable we may take the definite description the government of president carlos menem who took office july 8 either as a case of bridging reference on the previously mentioned argentina or as a larger situation use or as a case of unfamiliar definite description especially if we assume that this latter class coincides with prince containing inferrablesin conclusion our figures can be seen as an empirical verification of fraurud and prince hypothesis that the classification disagreements among annotators depend to a large extent on the task they are asked to do rather than reflecting true differences in semantic intuitions444 antecedent disagreementsinterestingly we also found cases of disagreement about the antecedent of a definite descriptionwe have already discussed the most common case of antecedent disagreement the case in which a definite description could equally well be taken as coreferential with one discourse entity or as bridging to anotherfor example in an article in which the writer starts discussing aetna life casualty and then goes on mentioning major insurers either discourse entity could then serve as antecedent for the subsequent definite description the insurer depending on whether the definite description is classified as coreferential or bridgingperhaps the most interesting cases of disagreement about the antecedent are examples such as one subject indicated parts of the factory as the antecedent another indicated the factory and the third indicated areas of the factory dusty where the crocidolite was usedworkers dumped large burlap sacks of the imported material into a huge bin poured in cotton and acetate fibers and mechanically mixed the dry fibers in a process used to make filtersworkers described quotclouds of blue dustquot that hung over parts of the factory even though exhaust fans ventilated the areawhat is interesting about this example is that the text does not provide us with enough information to decide about the correct interpretation it is as if the writer did not think it necessary for the reader to assign an unambiguous interpretation to the definite descriptionsimilar cases of underspecified definite descriptions have been observed before but no real account has been given of the conditions under which they are possible511 consequences for corpus annotationthis study raises the issue of how feasible it is to annotate corpora for anaphoric informationwe observed two problems about the task of classifying definite descriptions first neither of the more complex classification schemes we tested resulted in a very good agreement among annotators and second even the task of identifying the antecedent of discourserelated definite descriptions is problematicwe only obtained an acceptable agreement in the case of coreferential definite descriptions and it was difficult for our annotators to choose a single antecedent for a definite description when both bridging and coreference were allowedthese results indicate that annotating corpora for anaphoric information may be more difficult than expectedthe task of indicating a unique antecedent for bridging definite descriptions appears to be especially challenging for the reasons discussed above on the positive side we have two observations our subjects did reasonably well at distinguishing firtmention from subsequentmention antecedents and at identifying the antecedent of a subsequentmention definite descriptiona classification scheme based on this distinction that just asked subjects to 
indicate an antecedent for subsequentmention definite descriptions may have a chance of resulting in a standardized annotationeven in this case however the agreement we observed was not very high but better results may be obtained with more trainingthe possibility we are exploring is that these results might get better if annotators are given computer support in the form of a semiautomatic classifierie a system capable of suggesting to annotators a classification for definite descriptions including possibly an indication of how reliable the classification might bewe briefly discuss below our progress in this direction so far512 consequences for linguistic theoryour study confirms the findings of previous work that a great number of definite descriptions in texts are discoursenew in our second experiment we found an equal number of discoursenew and discourserelated definite descriptions although many of the definite descriptions classified as discoursenew could be seen as associative in a loose senseinterestingly this suggests that each of the competing hypotheses about the licensing conditions for definite descriptionsthe uniqueness and the familiarity theory accountsaccounts satisfactorily for about half of the dataof the existing theories of definite descriptions the one that comes closest to accounting for all of the uses of definite descriptions that we observed is lobner lobner proposes that the defining property of definite descriptions from a semantic point of view is that they indicate that the head noun complex denotes a functional concept ie a function which according to lobner can take one two or three argumentshe argues that some head noun complexes denote such a function on purely lexical semantic grounds this is the case for example of the head noun complexes in the father of mr smith the first man to sail to america and the fact that life started on earth he calls these definite descriptions semantic definitesin other cases such as the dog the head noun by itself would not denote a function but a sort in these cases according to lobner the use of a definite description is only felicitous if context indicates the function to be usedthis latter class of pragmatic definites includes the bestknown cases of familiar definitesanaphoric immediate and visible situation and larger situationas well as some cases classified by hawkins as unfamiliar and by prince as containing inferrableslobner does not discuss the conditions under which a writer can assume that the reader can recognize that context creates a functional concept out of a sortal one but his account could be supplemented by clark and marshall theory of what may count as a basis for a mutual knowledge induction schema is also consistent with fraurud hypothesis that these methods are not just used when no suitable antecedent can be found but more extensive investigations will be needed before we can conclude that this architecture significantly outperforms other onesthe presence of such a large number of discoursenew definite descriptions is also problematic for the idea that definite descriptions are interpreted with respect to the global focus a significant percentage of the larger situation definite descriptions encountered in our corpus cannot be said to be in the global focus in any significant sense as we observed above in many of these cases the writer seems to rely on the reader capability to add a new object such as the illinois commerce commission to her or his model of the world rather than expecting that object 
to be already present. As already mentioned, we are in the course of implementing a system capable of performing the classification task semi-automatically. This system would help the human classifiers by suggesting possible classifications, and possible antecedents in the case of discourse-related definite descriptions. Our system implements the dual-processing strategy discussed above: on the one hand, it attempts to resolve anaphoric same-head definite descriptions by maintaining a simple discourse model and searching back into this model to find all possible antecedents of a definite description; on the other, it uses heuristics to identify unfamiliar and larger situation definite descriptions, on the basis of syntactic information and very little lexical information about nouns that take complements. The current order of application of the resolution and classification steps has been determined by empirical testing, and has been compared with that suggested by decision-tree learning techniques. We trained a version of the system on the corpus used for the first experiment and then compared its classification of the corpus used for the second experiment with that of our three subjects. We developed two versions of the system: one that only attempts to classify subsequent-mention and discourse-new definite descriptions, and one that also attempts to classify bridging references. The first version of the system finds a classification for 318 definite descriptions out of the 464 in our test data. The agreement between the system and the three annotators on the two classes first-mention and subsequent-mention is K = .70 overall, if all definite descriptions to which the system cannot assign a classification are treated as first-mention; the coefficient of agreement is K = .78 if we do not count the definite descriptions that the system cannot classify. The version of the system that also attempts to recognize bridging references has a worse performance, which is not surprising given the problems our subjects had in classifying bridging descriptions. This version of the system finds a classification for 355 descriptions out of 464, and its agreement with the three annotators is K = .63 if the cases that the system cannot classify are not counted, K = .57 if we count the cases that the system does not classify as discourse-new, and K = .63 again if we count the cases that the system does not classify as bridging.
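As a rough illustration of the dual-processing strategy, the sketch below threads a simple discourse model through a text, attempting same-head resolution first and falling back on discourse-new heuristics (for instance, the classify_discourse_new sketch given earlier). The data representation is an assumption for illustration, not the actual implementation.

```python
def process_text(definite_descriptions, classify_discourse_new):
    """Dual-processing sketch: same-head resolution against a growing
    discourse model, then heuristic classification of the remainder.
    Returns a list of (dd, class, antecedent-or-None) triples."""
    discourse_model = []          # noun phrases seen so far, most recent last
    results = []
    for dd in definite_descriptions:
        # search backwards for an antecedent with the same head noun
        antecedent = next((np for np in reversed(discourse_model)
                           if np["head"] == dd["head"]), None)
        if antecedent is not None:
            results.append((dd, "subsequent-mention", antecedent))
        else:
            label = classify_discourse_new(dd) or "unclassified"
            results.append((dd, label, None))
        discourse_model.append(dd)   # every NP enters the discourse model
    return results
```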
We collected plenty of data about definite descriptions that we are still in the process of analyzing. One issue we are studying at the moment is what to do with bridging references: how to classify them, if at all, and how to process them. We also intend to study Löbner's hypothesis about the role played by the distinction between sortal and relational head nouns in determining the type of process involved in the resolution of a definite description, possibly by finding a way to ask our subjects to recognize these distinctions. We also plan to study the issue of generic definites. An obvious direction in which to extend this study is by looking at other kinds of anaphoric expressions, such as pronouns and demonstratives; we are doing preliminary studies in this direction. Finally, we would like to emphasize that although this study is the most extensive investigation of definite description use in a corpus that we know of, in practice we still got very little data on many of the uses of definite descriptions, so some caution is necessary in interpreting these results. The problem is that the kind of analysis we performed is extremely time consuming; it will be crucial in the future to find ways of performing this task that will allow us to analyze more data, possibly with the help of computer simulations.

You will receive a set of texts to read and annotate. From the texts, the system will extract and present you "the"-phrases and will ask you for a classification. You must choose one of the following classes: ... noun phrase which has a different noun for the interpretation of the given "the"-phrase; the antecedent for the "the"-phrase in this case may also ... For unfamiliar uses of "the"-phrases, the text does not provide an antecedent; the "the"-phrase refers to something new to the text. The help for the interpretation may be given together with the "the"-phrase, as in ...

Preference order for the classification: in spite of the fact that definites often fall in more than one class of use, the identification of a unique class is required. In order to make the choices uniform, priority is to be given to anaphoric situations. According to this ordering, cases like "the White House" or "the government" are anaphoric rather than larger situation when the phrase has already occurred once in the text. When a "the"-phrase seems to belong both to the larger situation/unfamiliar and associative classes, preference is given to larger situation/unfamiliar. Examples from the corpus were given, as in Section 3.1:

1. Anaphoric: there is an antecedent in the text which has the same descriptive noun as the "the"-phrase.
2. Associative: there is an antecedent in the text which has a different noun, but it is a synonym of or associate to the description.
3. Larger situation/unfamiliar: when the referent for the description is known or new.
4. Idiom: the "the"-phrase is an idiomatic expression.
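Read procedurally, the preference order in these instructions amounts to choosing the highest-ranked applicable class. A minimal sketch, assuming the annotator's judgments are available as a set of applicable classes (the handling of idiom and doubt is our own convention, not spelled out in the instructions):

```python
PREFERENCE = ["anaphoric", "larger situation/unfamiliar", "associative"]

def choose_class(applicable):
    """applicable: the set of classes judged applicable to a "the"-phrase.
    Returns the highest-priority applicable class; idiom and doubt are
    treated as fallbacks (assumed convention)."""
    for cls in PREFERENCE:
        if cls in applicable:
            return cls
    return "idiom" if "idiom" in applicable else "doubt"

print(choose_class({"associative", "larger situation/unfamiliar"}))
# -> 'larger situation/unfamiliar', as prescribed by the preference order
```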
This material provides you with instructions, examples, and some training for the text-annotation task. The task consists of reading newspaper articles and analyzing occurrences of definite descriptions, which are expressions starting with the definite article the. We will call these expressions DDs (or DD). DDs describe things, ideas, or entities which are talked about in the text; the things, ideas, or entities being described by DDs will be called entities. You should look at the text carefully in order to indicate whether the entity was mentioned before in the text and, if so, to indicate where. You will receive a set of texts and their corresponding tables to fill in. There are basically four cases to be considered:

1. "Mrs. Park is saving to buy an apartment. The housewife is saving harder than ever." The entity described by the DD "the housewife" was mentioned before, as "Mrs. Park".

2. If the entity itself was not mentioned before, but its interpretation is based on, dependent on, or related to some other idea or thing in the text, you should indicate it. For instance, in the sequence "The Parks wanted to buy an apartment, but the price was very high", the entity described by the DD the price is related to the idea expressed by an apartment in the text.

3. It may also be the case that the DD was not mentioned before and is not related to something in the text, but it refers to something which is part of the common knowledge of the writer and readers in general. Example: "During the past 15 years housing prices increased nearly fivefold." Here the entity described by the DD the past 15 years is known to the general reader of the Wall Street Journal and was not mentioned before in the text.

4. Or it may be the case that the DD is self-explanatory, or it is given together with its own identification. In these cases it becomes clear to the general reader what is being talked about even without previous mention in the text or without previous common knowledge of it. For instance: "The proposed legislation is aimed at rectifying some of the inequities in the current landownership system." The entity described here is new in the text and is not part of the knowledge of readers, but the DD the inequities in the current landownership system is self-explanatory.

The texts will be presented to you in the following format: on the left, the text with its DDs in evidence; on the right, the keys and the DD to be analyzed. The key is for internal control only, but it may help you to find DDs in the table you have to fill in.

Text 0: (1) Y. J. Park and her family scrimped for four years to buy a tiny apartment here, but found that the closer they got to saving the $40,000 they originally needed, the more the price rose. (3) Now the 33-year-old housewife, whose husband earns a modest salary as an assistant professor of economics, is saving harder than ever.

Each case is to be indicated on the table according to the following. Whenever you find a previous mention in the text of the DD, you should mark the column "link"; in the case of both 1 and 2, you should provide the sentence number where the previous/related mention is and write down the previous/related mention of it. If the entity was not previously mentioned in the text and it is not related to something mentioned before, then mark the column "no link". In case of doubt, just leave the line blank and comment at the back of the page, using the key number to identify the DD you are commenting on. Next we present some examples and further explanation for each one of the four cases that are being considered.

Case 1 (link): for case no. 1 you may find a previous mention that may be equal to or different from the DD; distances between previous mentions and DDs may also vary.

Case 2 (link): these are cases of DDs which are related to something that was present in the text. If you ask, for the examples below, "Which government/population/nation is that?" or "Which blame is that?", the answer is given by something previously mentioned in the text: (20) Housing prices increased nearly fivefold. The report laid the blame on speculators who, it said, had pushed land prices up ninefold.

Case 3 (no link): these cases of DDs are based on the common reader knowledge. The texts to be analyzed are Wall Street Journal articles; location and time, for instance, are usually known to the general reader from sources which are outside the text.

Case 4 (no link): these cases of DDs are self-explanatory or accompanied by their identification. For instance, if you ask "Which difficulty is that?", "Which fact is that?", "Which know-how is that?", etc. for the examples below, the answer is given by the DD itself; in the last example the DD is accompanied by its explanation.

In order to help you fill in the table, answer the yes/no questions below for each one of the DDs in the text. When the answer to a question is yes, you have an action to follow; if the answer is no, skip to the next question.

1. Was the entity mentioned before in the text? Y: mark the column "link" and tell where, by providing the sentence number and the words used in the previous mention. N: go to question no. 2.

2. Is the entity new but related to something mentioned before? If you ask "Which entity is that?", is the answer based on previous text? Y: mark "R" to indicate a related entity, and provide the sentence number and the previous mention on which the DD is based. N: go to question no. 3.
something mutually known by writer and general readers of the Wall Street Journal? Y: mark "K" to indicate general knowledge about the entity. N: go to question no. 4. 4. Is the entity new in the text? If it was not mentioned before and its interpretation is not based on the previous text, then is it self-explanatory or accompanied by its identification? Y: mark "D" to indicate that the description is enough to make readers identify the entity. N: leave the line blank and comment at the back of the page, using the key number to identify the DD. We wish to thank Jean Carletta for much help, both with designing the experiments and with the analysis of the results. We are also grateful to Ellen Bard, Robin Cooper, Kari Fraurud, Janet Hitzeman, Kjetil Strand, and our anonymous reviewers for many helpful comments. Massimo Poesio holds an Advanced Research Fellowship from EPSRC, UK; Renata Vieira is supported by a fellowship from CNPq, Brazil.
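The four-question procedure above amounts to a small decision tree over an annotator's answers. The sketch below is only an illustrative rendering of those instructions in code; the function name, argument names, and the dictionary output format are our own, and the original task was of course carried out by hand on paper tables.

```python
# Illustrative sketch of the annotators' yes/no decision procedure described above.
# Names ("classify_dd", the mark letters' keys) are ours; the marks "R", "K", "D"
# and the "link"/"no link" columns come from the instructions.

def classify_dd(mentioned_before, related_to_previous, general_knowledge, self_explanatory,
                antecedent_sentence=None, antecedent_words=None):
    """Return the column an annotator would mark for one definite description (DD)."""
    # Question 1: was the entity itself mentioned before in the text?
    if mentioned_before:
        return {"column": "link", "sentence": antecedent_sentence, "mention": antecedent_words}
    # Question 2: is the entity new but based on / related to something mentioned before?
    if related_to_previous:
        return {"column": "link", "mark": "R", "sentence": antecedent_sentence,
                "mention": antecedent_words}
    # Question 3: is the entity mutually known by writer and general WSJ readers?
    if general_knowledge:
        return {"column": "no link", "mark": "K"}
    # Question 4: is the DD self-explanatory or accompanied by its identification?
    if self_explanatory:
        return {"column": "no link", "mark": "D"}
    # Otherwise: leave the line blank and comment, identified by the DD's key number.
    return {"column": None, "comment": "leave blank; comment at back of page"}

# Example corresponding to "the housewife" in the Mrs. Park text (antecedent in sentence 1):
print(classify_dd(mentioned_before=True, related_to_previous=False,
                  general_knowledge=False, self_explanatory=False,
                  antecedent_sentence=1, antecedent_words="Mrs. Park"))
```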
J98-2001
A corpus-based investigation of definite description use. We present the results of a study of the use of definite descriptions in written texts, aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles containing a total of 1,412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. The most interesting result of this study from a corpus annotation perspective was the rather low agreement that we obtained using versions of Hawkins's and Prince's classification schemes; better results were obtained using the simplified scheme proposed by Fraurud, which includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-new definites in our corpus and the presence of definites that did not seem to require a complete disambiguation. We propose an annotation scheme that is the product of a corpus-based analysis of definite description use, showing that more than 50% of the DDs in the corpus are discourse-new or unfamiliar.
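The study reports agreement among annotators about the classes assigned to definite descriptions. The summary above does not restate how the agreement figures were computed, so the sketch below shows one common way such figures are obtained for two annotators: raw percentage agreement and the chance-corrected kappa coefficient. It is purely illustrative and is not claimed to be the exact measure used in the study.

```python
from collections import Counter

def percent_agreement(labels_a, labels_b):
    """Raw proportion of items on which two annotators chose the same class."""
    assert len(labels_a) == len(labels_b)
    agree = sum(1 for a, b in zip(labels_a, labels_b) if a == b)
    return agree / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators (Cohen's kappa)."""
    n = len(labels_a)
    p_o = percent_agreement(labels_a, labels_b)
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Expected agreement if both annotators labelled independently at their observed rates.
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e)

# Toy example with the two-class (first-mention / subsequent-mention) scheme:
a = ["first", "subsequent", "first", "first", "subsequent", "first"]
b = ["first", "subsequent", "subsequent", "first", "subsequent", "first"]
print(percent_agreement(a, b), cohens_kappa(a, b))
```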
generalizing case frames using a thesaurus and the mdl principle a new method for automatically acquiring case frame patterns from large corpora is proposed in particular the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words and a new generalization method based on the minimum description length principle is proposed in order to assist with efficiency the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as quotcutsquot in the thesaurus tree thus reducing the generalization problem to that of estimating a quottree cut modelquot of the thesaurus tree an efficient algorithm is given which provably obtains the optimal tree cut model for the given frequency data of a case slot in the sense of mdl case frame patterns obtained by the method were used to resolve ppattachment ambiguity experimental results indicate that the proposed method improves upon or is at least comparable with existing methods a new method for automatically acquiring case frame patterns from large corpora is proposedin particular the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words and a new generalization method based on the minimum description length principle is proposedin order to assist with efficiency the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as quotcutsquot in the thesaurus tree thus reducing the generalization problem to that of estimating a quottree cut modelquot of the thesaurus treean efficient algorithm is given which provably obtains the optimal tree cut model for the given frequency data of a case slot in the sense of mdlcase frame patterns obtained by the method were used to resolve ppattachment ambiguityexperimental results indicate that the proposed method improves upon or is at least comparable with existing methodswe address the problem of automatically acquiring case frame patterns from large corporaa satisfactory solution to this problem would have a great impact on various tasks in natural language processing including the structural disambiguation problem in parsingthe acquired knowledge would also be helpful for building a lexicon as it would provide lexicographers with word usage descriptionsin our view the problem of acquiring case frame patterns involves the following two issues acquiring patterns of individual case frame slots and learning dependencies that may exist between different slotsin this paper we confine ourselves to the former issue and refer the interested reader to li and abe which deals with the latter issuethe case frame pattern acquisition process consists of two phases extraction of case frame instances from corpus data and generalization of those instances to case frame patternsthe generalization step is needed in order to represent the input case frame instances more compactly as well as to judge the acceptability of unseen case frame instancesfor the extraction problem there have been various methods proposed to date which are quite adequate the generalization problem in contrast is a more challenging one and has not been solved completelya number of methods for generalizing values of a case frame slot for a verb have been proposedsome of these methods make use of prior knowledge in the form of an existing thesaurus 
while others do not rely on any prior knowledge in this paper we propose a new generalization method belonging to the first of these two categories which is both theoretically wellmotivated and computationally efficientspecifically we formalize the problem of generalizing values of a case frame slot for a given verb as that of estimating a conditional probability distribution over a partition of words and propose a new generalization method based on the minimum description length principle a principle of data compression and statistical estimation from information theoryin order to assist with efficiency our method makes use of an existing thesaurus and restricts its attention on those partitions that are present as quotcutsquot in the thesaurus tree thus reducing the generalization problem to that of estimating a quottree cut modelquot of the thesaurus treewe then give an efficient algorithm that provably obtains the optimal tree cut model for the given frequency data of a case slot in the sense of mdlin order to test the effectiveness of our method we conducted ppattachment disambiguation experiments using the case frame patterns obtained by our methodour experimental results indicate that the proposed method improves upon or is at least comparable to existing methodsthe remainder of this paper is organized as follows in section 2 we formalize the problem of generalizing values of a case frame slot as that of estimating a conditional distributionin section 3 we describe our mdlbased generalization methodin section 4 we present our experimental resultswe then give some concluding remarks in section 5suppose that the data available to us are of the type shown in table 1 which are slot values for a given verb automatically extracted from a corpus using existing techniquesby counting the frequency of occurrence of each noun at a given slot of a verb the frequency data shown in figure 1 can be obtainedwe will refer to this type of data as cooccurrence datathe problem of generalizing values of a case frame slot for a verb can be viewed as the problem of learning the underlying conditional probability distribution that gives rise to such cooccurrence datasuch a conditional distribution can be represented by a probability model that specifies the conditional probability p for each n in the set of nouns ar ni n2 nn v in the set of verbs v vi 02 vv and r in the set of slot names are r2 rri satisfying this type of probability model is often referred to as a wordbased modelsince the number of probability parameters in wordbased models is large accurate frequency data for the subject slot of verb fly estimation of a wordbased model is difficult with the data size that is available in practicea problem usually referred to as the data sparseness problemfor example suppose that we employ the maximumlikelihood estimation to estimate the probability parameters of a conditional probability distribution as described above given the cooccurrence data in figure 1in this case mle amounts to estimating the parameters by simply normalizing the frequencies so that they sum to one giving for example the estimated probabilities of 0 02 and 04 for swallow eagle and bird respectively since in general the number of parameters exceeds the size of data that is typically available mle will result in estimating most of the probability parameters to be zeroto address this problem grishman and sterling proposed a method of smoothing conditional probabilities using the probability values of similar words where the similarity 
between words is judged based on cooccurrence data more specifically conditional probabilities of words are smoothed by taking the weighted average of those of similar words using the similarity measure as the weightsthe advantage of this approach is that it does not rely on any prior knowledge but it appears difficult to find a smoothing method that is both efficient and theoretically soundas an alternative a number of authors have proposed the use of classbased models which assign probability values to classes of words rather than individual wordsan example of the classbased approach is resnik method of generalizing values of a case frame slot using a thesaurus and the socalled selectional association measure the selectional association denoted a is defined as follows where c is a class of nouns present in a given thesaurus v is a verb and r is a slot name as described earlierin generalizing a given noun n to a noun class this method selects the noun class c having the maximum a among all super classes of n in a given thesaurusthis method is based on an interesting intuition but its interpretation as a method of estimation is not clearwe propose a classbased generalization method whose performance as a method of estimation is guaranteed to be near optimalwe define the classbased model as a model that consists of a partition of the set fsf of nouns and a parameter associated with each member of the partitionhere a partition f of at is any collection of mutually disjoint subsets of n that exhaustively cover nthe parameters specify the conditional probability p for each class within a given class c it is assumed that each noun is generated with equal probability namely here we assume that a word belongs to a single classin practice however many words have sense ambiguity and a word can belong to several different classes eg bird is a member of both bird and meatthorough treatment of this problem is beyond the scope of the present paper we simply note that one can employ an existing wordsense disambiguation technique in preprocessing and use the disambiguated word senses as virtual words in the following casepattern acquisition processit is also possible to extend our model so that each word probabilistically belongs to several different classes which would allow us to resolve both structural and wordsense ambiguities at the time of disambiguation2 employing probabilistic membership however would make the estimation process significantly more computationally demandingwe therefore leave this issue as a future topic and employ a simple heuristic of equally distributing each word occurrence in the data to all of its potential word senses in our experimentssince our learning method based on mdl is robust against noise this should not significantly degrade performancesince the number of partitions for a given set of nouns is extremely large the problem of selecting the best model from among all possible classbased models is most likely intractablein this paper we reduce the number of possible partitions to consider by using a thesaurus as prior knowledge following a basic idea of resnik in particular we restrict our attention to those partitions that exist within the thesaurus in the form of a cutby thesaurus we mean a tree in which each leaf node stands for a noun while each internal node represents a noun class and domination stands for set inclusion a cut in a tree is any set of nodes in the tree that defines a partition of the leaf nodes viewing each node as representing the set of all leaf 
nodes it dominatesfor example in the thesaurus of figure 3 there are five cuts animal bird insect bird bug bee insect swallow crow eagle bird insect and swallow crow eagle bird bug bee insectthe class of tree cut models of a fixed thesaurus tree is then obtained by restricting the partition f in the definition of a classbased model to be those partitions that are present as a cut in that thesaurus treeformally a tree cut model m can be represented by a pair consisting of a tree cut f and a probability parameter vector 0 of the same length that is where f and 9 are where c1 c2 cki is a cut in the thesaurus tree and ek411p 1 is satisfiedfor simplicity we sometimes write p i 1 for pif we use mle for the parameter estimation we can obtain five tree cut models from the cooccurrence data in figure 1 figures 46 show three of thesefor example a tree cut model with bird bug bee insectj14 shown in figure 5 is one such tree cut modelrecall that ai defines a conditional probability distribution i ki as follows for any noun that is in the tree cut such as bee the probability is given as explicitly specified by the model ie p 02for any class in the tree cut the probability is distributed uniformly to all nouns dominated by itfor example since there are four nouns that fall under the class bird and swallow is one of them the probability of swallow is thus given by pa4 084 02note that the probabilities assigned to the nouns under bird are smoothed even if the nouns have different observed frequencieswe have thus formalized the problem of generalizing values of a case frame slot as that of estimating a model from the class of tree cut models for some fixed thesaurus tree namely selecting a model that best explains the data from among the class of tree cut modelsthe question now becomes what strategy we should employ to select the best treecut modelwe adopt the minimum description length principle which has various desirable properties as will be described later3 mdl is a principle of data compression and statistical estimation from information theory which states that the best probability model for given data is that which requires the least code length in bits for the encoding of the model itself and the given data observed through itthe former is the model description length and the latter the data description lengthin our current problem it tends to be the case in general that a model nearer the root of the thesaurus tree such as that in figure 6 is simpler but tends to have a poorer fit to the datain contrast a model nearer the leaves of the thesaurus tree such as that in figure 4 is more complex but tends to have a better fit to the datatable 2 shows the number of free parameters and the kl distance from the empirical distribution of the data shown in figure 2 for each of the five tree cut models3 in the table one can see that there is a tradeoff between the simplicity of a model and the goodness of fit to the datain the mdl framework the model description length is an indicator of model complexity while the data description length indicates goodness of fit to the datathe mdl principle stipulates that the model that minimizes the sum total of the description lengths should be the best model in the remainder of this section we will describe how we apply mdl to our current problemwe will then discuss the rationale behind using mdl in our present contextwe first show how the description length for a model is calculatedwe use s to denote a sample which is a multiset of examples each of which is an 
occurrence of a noun at a given slot r of a given verb v we let i si denote the size of s as a multiset and n e s indicate the inclusion of n in s as a multisetfor example the column labeled slot_value in table 1 represents a sample s for the subject slot of fly and in this case i si 10given a sample s and a tree cut f we employ mle to estimate the parameters of the corresponding tree cut model 14 where ô denotes the estimated parametersthe total description length l of the tree cut model kl and the sample s observed through m is computed as the sum of the model description length l parameter description length l and data description length l l is a subjective quantity which depends on the coding scheme employedhere we choose to assign the same code length to each cut and let where g denotes the set of all cuts in the thesaurus tree t this corresponds to assuming that each tree cut model is equally likely a priori in the bayesian interpretation of mdlthe parameter description length l is calculated by l x log isi where i si denotes the sample size and k denotes the number of free parameters in the tree cut model ie k equals the number of nodes in f minus oneit is known to be best to use this number of bits to describe probability parameters in order to minimize the expected total description length an intuitive explanation of this is that the standard deviation of the maximumlikelihood estimator of each parameter is of the orderv11s1 and hence describing each parameter using more than log 1 log is i bits would be wasteful for the estimation accuracy possible with the given sample sizefinally the data description length l is calculated by calculating the description length for the model of figure 5bird bug bee insect f 8 0 2 0 cl 4 1 1 1 p 08 00 02 00 p 02 00 02 00 where for simplicity we write p for pmrecall that p is obtained by mle namely by normalizing the frequencies where f denotes the total frequency of nouns in class c in the sample s and f is a tree cutwe note that in fact the maximumlikelihood estimate is one that minimizes the data description length lwith description length defined in the above manner we wish to select a model with the minimum description length and output it as the result of generalizationsince we assume here that every tree cut has an equal l technically we need only calculate and compare l l l as the description lengthfor simplicity we will sometimes write just l for l where r is the tree cut of when c4 and s are clear from contextthe description lengths for the data in figure 1 using various tree cut models of the thesaurus tree in figure 3 are shown in table 4these figures indicate that the model in figure 6 is the best model according to mdlthus given the data in table 1 as input the generalization result shown in table 5 is obtainedin generalizing values of a case frame slot using mdl we could in principle calculate the description length of every possible tree cut model and output a model with the minimum description length as the generalization result if computation time were of no concernbut since the number of cuts in a thesaurus tree is exponential in the size of the tree it is impractical to do sononetheless we were able to devise a description length of the five tree cut modelsl l v animal 0 2807 2807 bird insect 166 2639 2805 bird bug bee insect 498 2322 2820 swallow crow eagle bird insect 664 2239 2903 swallow crow eagle bird bug bee insect 997 1922 2919 table 5 generalization result verb slot _name slot _value probability fly arg1 bird 08 fly 
argl insect 02 here we let t denote a thesaurus tree root the root of the tree t initially t is set to the entire treealso input to the algorithm is a cooccurrence datathe algorithm findmdl simple and efficient algorithm based on dynamic programming which is guaranteed to find a model with the minimum description lengthour algorithm which we call findmdl recursively finds the optimal mdl model for each child subtree of a given tree and appends all the optimal models of these subtrees and returns the appended models unless collapsing all the lowerlevel optimal models into a model consisting of a single node reduces the total description length in which case it does sothe details of the algorithm are given in figure 7note that for simplicity we describe findmdl as outputting a tree cut rather than a complete tree cut modelnote in the above algorithm that the parameter description length is calculated as an example application of findmdl entire tree and when it is a proper subtreethis contrasts with the fact that the number of free parameters is k for the former while it is k 1 for the latterfor the purpose of finding a tree cut with the minimum description length however this distinction can be ignored figure 8 illustrates how the algorithm works in the recursive application of findmdl on the subtree rooted at airplane the ifclause on line 9 evaluates to true since l 3227 l 3272 and hence airplane is returnedthen in the call to findmdl on the subtree rooted at artifact the same ifclause evaluates to false since l 4097 l 4109 and hence vehicle airplane is returnedconcerning the above algorithm we show that the following proposition holds the algorithm findmdl terminates in time 0i where n denotes the number of leaf nodes in the input thesaurus tree t and is i denotes the input sample size and outputs a tree cut model of t with the minimum description length here we will give an intuitive explanation of why the proposition holds and give the formal proof in appendix athe mle of each node is obtained simply by dividing the frequency of nouns within that class by the total sample sizethus the parameter estimation for each subtree can be done independently from the estimation of the parameters outside the subtreethe data description length for a subtree thus depends solely on the tree cut within that subtree and its calculation can be performed independently for each subtreeas for the parameter description length for a subtree it depends only on the number of classes in the tree cut within that subtree and hence can be computed independently as wellthe formal proof proceeds by mathematical induction which verifies that the optimal model in any tree is either the model consisting of the root of the tree or the model obtained by appending the optimal submodels for its child subtrees7 when a discrete model is fixed and the estimation problem involves only the estimation of probability parameters the classic maximumlikelihood estimation is known to be satisfactoryin particular the estimation of a wordbased model is one such problem since the partition is fixed and the size of the partition equals 1v1furthermore for a fixed discrete model it is known that mle coincides with mdl given data s x1 i 1 ml mle estimates parameter p which maximizes the likelihood with respect to the data that is it is easy to see that p also satisfies arg e _ log p this is nothing but the mdl estimate in this case since log p is the data description lengthwhen the estimation problem involves model selection ie the choice of 
a tree cut in the present context mdl behavior significantly deviates from that of mlethis is because mdl insists on minimizing the sum total of the data description length and the model description length while mle is still equivalent to minimizing the data description length onlyso for our problem of estimating a tree cut model mdl tends to select a model that is reasonably simple yet fits the data quite well whereas the model selected by mle will be a wordbased model as it will always manage to fit the datain statistical terms the superiority of mdl as an estimation method is related to the fact we noted earlier that even though mle can provide the best fit to the given data the estimation accuracy of the parameters is poor when applied on a sample of modest size as there are too many parameters to estimatemle is likely to estimate most parameters to be zero and thus suffers from the data sparseness problemnote in table 4 that mdl avoids this problem by taking into account the model complexity as well as the fit to the datamdl stipulates that the model with the minimum description length should be selected both for data compression and estimationthis intimate connection between estimation and data compression can also be thought of as that between estimation and generalization since in order to compress information generalization is necessaryin our current problem this corresponds to the generalization of individual nouns present in case frame instances in the data as classes of nouns present in a given thesaurusfor example given the thesaurus in figure 3 and frequency data in figure 1 we would 7 the process of finding the mdl model tends to be computationally demanding and is often intractablewhen the model class under consideration is restricted to tree structures however dynamic programming is often applicable and the mdl model can be efficiently foundfor example rissanen has devised an algorithm for learning decision trees8 consider for example the case when the cooccurrence data is given as f 2f 2f 2f 2 for the problem in section 2 like our system to judge that the class bird and the noun bee can be the subject slot of the verb flythe problem of deciding whether to stop generalizing at bird and bee or generalizing further to animal has been addressed by a number of authors minimization of the total description length provides a disciplined criterion to do thisa remarkable fact about mdl is that theoretical findings have indeed verified that mdl as an estimation strategy is near optimal in terms of the rate of convergence of its estimated models to the true model as data size increaseswhen the true model is included in the class of models considered the models selected by mdl converge to the true model at the rate of 0 where k is the number of parameters in the true model and is i the data size which is near optimal thus in the current problem mdl provides a way of smoothing probability parameters to solve the data sparseness problem and at the same time a way of generalizing nouns in the data to noun classes of an appropriate level both as a corollary to the near optimal estimation of the distribution of the given datathere is a bayesian interpretation of mdl mdl is essentially equivalent to the quotposterior modequot in the bayesian terminology given data s and a number of models the bayesian estimator selects a model ci that maximizes the posterior probability where p denotes the prior probability of the model m and p the probability of observing the data s given m equivalently m 
satisfies this is equivalent to the mdl estimate if we take log p to be the model description lengthinterpreting log p as the model description length translates in the bayesian estimation to assigning larger prior probabilities on simpler models since it is equivalent to assuming that p 1 where l is the description length of m to all models m then becomes equivalent to giving the maximumlikelihood estimaterecall that in our definition of parameter description length we assign a shorter parameter description length to a model with a smaller number of parameters k which admits the above interpretationas for the model description length we assigned an equal code length to each tree cut which translates to placing no bias on any cutwe could have employed a different coding scheme assigning shorter code lengths to cuts nearer the rootwe chose not to do so partly because for sufficiently large sample sizes the parameter description length starts dominating the model description length anywayanother important property of the definition of description length is that it affects not only the effective prior probabilities on the models but also the procedure for computing the model minimizing the measureindeed our definition of model description length was chosen to be compatible with the dynamic programming technique namely its calculation is performable locally for each subtreefor a different choice of coding scheme it is possible that a simple and efficient mdl algorithm like findmdl may not existwe believe that our choice of model description length is derived from a natural encoding scheme with reasonable interpretation as bayesian prior and at the same time allows an efficient algorithm for finding a model with the minimum description lengththe uniform distribution assumption made in namely that all nouns belonging to a class contained in the tree cut model are assigned the same probability seems to be rather stringentif one were to insist that the model be exactly accurate then it would seem that the true model would be the wordbased model resulting from no generalization at allif we allow approximations however it is likely that some reasonable tree cut model with the uniform probability assumption will be a good approximation of the true distribution in fact a best model for a given data sizeas we remarked earlier as mdl balances between the fit to the data and the simplicity of the model one can expect that the model selected by mdl will be a reasonable compromisenonetheless it is still a shortcoming of our model that it contains an oversimplified assumption and the problem is especially pressing when rare words are involvedrare words may not be observed at a slot of interest in the data simply because they are rare and not because they are unfit for that particular slot9 to see how rare is too rare for our method consider the following examplesuppose that the class bird contains 10 words bird swallow crow eagle parrot waxwing etcconsider cooccurrence data having 8 occurrences of bird 2 occurrences of swallow 1 occurrence of crow 1 occurrence of eagle and 0 occurrence of all other words as part of say 100 data obtained for the subject slot of verb flyfor this data set our method would select the model that generalizes bird swallow etc to the class bird since the sum of the data and parameter description lengths for the bird subtree is 7657 332 7989 if generalized and 5373 3322 8695 if not generalizedfor comparison consider the data with 10 occurrences of bird 3 occurrences of swallow and 1 
occurrence of crow and 0 occurrence of all other words also as part of 100 data for the subject slot of flyin this case our method would select the model that stops generalizing at bird swallow eagle etc because the description length for the same subtree now is 8622 332 8954 if generalized and 5504 3322 8826 if not generalizedthese examples seem to indicate that our mdlbased method would choose to generalize even when there are relatively large differences in frequencies of words within a class but knows enough to stop generalizing when the discrepancy in frequencies is especially noticeable we applied our generalization method to large corpora and inspected the obtained tree cut models to see if they agreed with human intuitionin our experiments we extracted verbs and their case frame slots from the tagged texts of the wall street journal corpus consisting of 126084 sentences using existing techniques then example input data eat arg2 food 3 eat arg2 lobster 1 eat arg2 seed 1 eat arg2 heart 2 eat arg2 liver 1 eat arg2 plant 1 eat arg2 sandwich 2 eat arg2 crab 1 eat arg2 elephant 1 eat arg2 meal 2 eat arg2 rope 1 eat arg2 seafood 1 eat arg2 amount 2 eat arg2 horse 1 eat arg2 mushroom 1 eat arg2 night 2 eat arg2 bug 1 eat arg2 ketchup 1 eat arg2 lunch 2 eat arg2 bowl 1 eat arg2 sawdust 1 eat arg2 snack 2 eat arg2 month 1 eat arg2 egg 1 eat arg2 jam 2 eat arg2 effect 1 eat arg2 sprout 1 eat arg2 diet 1 eat arg2 debt 1 eat arg2 nail 1 eat arg2 pizza 1 eat arg2 oyster 1 applied our method to generalize the slot_valuestable 6 shows some example triple data for the direct object slot of the verb eatthere were some extraction errors present in the data but we chose not to remove them because in general there will always be extraction errors and realistic evaluation should leave them inwhen generalizing we used the noun taxonomy of wordnet as our thesaurusthe noun taxonomy of wordnet has a structure of directed acyclic graph and its nodes stand for a word sense and often contain several words having the same word sensewordnet thus deviates from our notion of thesaurusa tree in which each leaf node stands for a noun each internal node stands for the class of nouns below it and a noun is uniquely represented by a leaf nodeso we took a few measures to deal with thisfirst we modified our algorithm findmdl so that it can be applied to a dag now findmdl effectively copies each subgraph having multiple parents so that the dag is transformed to a tree structurenote that with this modification it is no longer guaranteed that the output model is optimalnext we dealt heuristically with the issue of wordsense ambiguity by equally dividing the observed frequency of a noun between all the nodes containing that nounfinally when an internal node contained nouns actually occurring in the data we assigned the frequencies of all the nodes below it to that internal node and excised the whole subtree below itthe last of these measures in effect defines the quotstarting cutquot of the thesaurus from which to begin generalizingsince nouns that occur in natural language tend to concentrate in the middle of a taxonomy the starting cut given by this method usually falls around the middle of the thesaurus1 figure 9 shows the starting cut and the resulting cut in wordnet for the direct object slot of eat with respect to the data in table 6 where denotes a node in wordnetthe starting cut consists of nodes etc which are the highest nodes containing values of the direct object slot of eatsince has significantly higher 
frequencies than its neighbors and the generalization stops there according to mdlin contrast the nodes under have relatively small differences in their frequencies and thus they are generalized to the node the same is true of the nodes under since has a much an example generalization result higher frequency than its neighbors and the generalization does not go up higherall of these results seem to agree with human intuition indicating that our method results in an appropriate level of generalizationtable 7 shows generalization results for the direct object slot of eat and some other arbitrarily selected verbs where classes are sorted in descending order of their probability valuestable 8 shows the computation time required to obtain the results shown in table 7even though the noun taxonomy of wordnet is a large thesaurus containing approximately 50000 nodes our method still manages to efficiently generalize case slots using itthe table also shows the average number of levels generalized for each slot namely the average number of links between a node in the starting cut and its ancestor node in the resulting cut is one in figure 9one can see that a significant amount of generalization is performed by our methodthe resulting tree cut is about 5 levels higher than the starting cut on the averagecase frame patterns obtained by our method can be used in various tasks in natural language processingin this paper we test its effectiveness in a structural disambiguation experimentdisambiguation methodsit has been empirically verified that the use of lexical semantic knowledge is effective in structural disambiguation such as the ppattachment problem there have been many probabilistic methods proposed in the literature to address the ppattachment problem using lexical semantic knowledge which in our view can be classified into three typesthe first approach takes doubles of the form and like those in table 9 as training data to acquire semantic knowledge and judges the attachment sites of the prepositional phrases in quadruples of the form eg based on the acquired knowledgehindle and rooth proposed the use of the lexical association measure calculated based on such doublesmore specifically they estimate p and p and calculate the socalled tscore which is a measure of the statistical significance of the difference between p and pif the tscore indicates that the former probability is significantly larger example input data as quadruples and labels see girl in park adv see man with telescope adv see girl with scarf adn then the prepositional phrase is attached to verb if the latter probability is significantly larger it is attached to nouni and otherwise no decision is madethe second approach takes triples and like those in table 10 as training data for acquiring semantic knowledge and performs ppattachment disambiguation on quadruplesfor example resnik proposes the use of the selectional association measure calculated based on such triples as described in section 2more specifically his method compares maxaass19noun2 a and maxclass13n0un2 a to make disambiguation decisionsthe third approach receives quadruples and labels indicating which way the ppattachment goes like those in table 11 and learns a disambiguation rule for resolving ppattachment ambiguitiesfor example brill and resnik propose a method they call transformationbased errordriven learning their method first learns ifthen type rules where the if parts represent conditions like and and the then parts represent transformations from to or vice 
versathe first rule is always a default decision and all the other rules indicate transformations subject to various if conditionswe note that for the disambiguation problem the first two approaches are basically unsupervised learning methods in the sense that the training data are merely positive examples for both types of attachments which could in principle be extracted from pure corpus data with no human interventionthe third approach on the other hand is a supervised learning method which requires labeled data prepared by a human beingthe generalization method we propose falls into the second category although it can also be used as a component in a combined scheme with many of the above methods we estimate p and p from training data consisting of triples and compare them if the former exceeds the latter we attach it to verb else if the latter exceeds the former we attach it to nouniin our experiments described below we compare the performance of our proposed method which we refer to as mdl against the methods proposed by hindle and rooth resnik and brill and resnik referred to respectively as la sa and teldata setwe used the bracketed corpus of the penn treebank as our datafirst we randomly selected one of the 26 directories of the wsj files as the test data and what remains as the training datawe repeated this process 10 times and obtained 10 sets of data consisting of different training data and test datawe used these 10 data sets to conduct crossvalidation as described belowfrom the test data in each data set we extracted quadruples using the extraction tool provided by the penn treebank called quottgrepquot at the same time we obtained the answer for the ppattachment site for each quadruplewe did not doublecheck if the answers provided in the penn treebank were actually correct or notthen from the training data of each data set we extracted and doubles and and triples using tools we developed ourselveswe also extracted quadruples from the training data as beforewe then applied 12 heuristic rules to further preprocess the data which include changing the inflected form of a word to its stem form replacing numerals with the word number replacing integers between 1900 and 2999 with the word year replacing co ltd etc with the words company limited etc11 after preprocessing there still remained some minor errors which we did not remove further due to the lack of a good method for doing so automaticallytable 12 shows the number of different types of data obtained by the above processexperimental procedurewe first compared the accuracy and coverage for each of the three disambiguation methods based on unsupervised learning mdl sa and la11 the experimental results obtained here are better than those obtained in our preliminary experiment in part because we only adopted rule in the pastaccuracycoverage curves for mdl sa and lafor mdl we generalized noun2 given and triples as training data for each data set using wordnet as the thesaurus in the same manner as in experiment 1when disambiguating we actually compared p and p where classi and class2 are classes in the output tree cut models dominating noun2 in place of p and p12 we found that doing so gives a slightly better resultfor sa we employed a somewhat simplified version in which noun2 is generalized given and triples using wordnet and maxaass3n0un2 a and maxclass3noun2 a are compared for disambiguation if the former exceeds the latter then the prepositional phrase is attached to verb and otherwise to nounifor la we estimated p and p 
from the training data of each data set and compared them for disambiguationwe then evaluated the results achieved by the three methods in terms of accuracy and coveragehere coverage refers to the proportion as a percentage of the test quadruples on which the disambiguation method could make a decision and accuracy refers to the proportion of correct decisions among themin figure 10 we plot the accuracycoverage curves for the three methodsin plotting these curves the attachment site is determined by simply seeing if the difference between the appropriate measures for the two alternatives be it probabilities or selectional association values exceeds a thresholdfor each method the threshold was set successively to 0 001 002 005 01 02 05 and 075when the difference between the two measures is less than a threshold we rule that no decision can be madethese curves were obtained by averaging over the 10 data setswe also implemented the exact method proposed by hindle and rooth which makes disambiguation judgement using the tscorefigure 10 shows the result as lat where the threshold for tscore is set to 128 from figure 10 we see that with respect to accuracycoverage curves mdl outperforms both sa and la throughout while sa is better than lanext we tested the method of applying a default rule after applying each methodthat is attaching to verb for the part of the test data for which no decision was made by the method in questionwe refer to these combined methods as mdldefault sadefault ladefault and latdefaulttable 13 shows the results again averaged over the 10 data setsfinally we used the transformationbased errordriven learning to acquire transformation rules for each data set and applied the obtained rules to disambiguate the test datathe average number of obtained rules for a data set was 27523table 13 shows the disambiguation result averaged over the 10 data setsfrom table 13 we see that tel performs the best edging over the second place mdldefault by a small margin and then followed by ladefault and sadefaultbelow we discuss further observations concerning these resultsmdl and saaccording to our experimental results the accuracy and coverage of mdl appear to be somewhat better than those of saas resnik pointed out the use of selectional association log p seems to be appropriate for cognitive modelingour experiments show however that the generalization method currently employed by resnik has a tendency to overfit the datatable 14 shows example generalization results for mdl and sanote that mdl tends to select a tree cut closer to the root of the thesaurus treethis is probably the key reason why mdl has a wider coverage than sa for the same degree of accuracyone may be concerned that mdl is quotovergeneralizingquot here14 but as shown in figure 10 its disambiguation accuracy does not seem to be degradedanother problem that must be dealt with concerning sa is how to remove noise from the generalization resultssince sa estimates the ratio between two probability values namely p the generalization result may be lead astray if one of the estimates of p and p is unreliablefor instance a high estimated value for at protect against shown in table 14 is rather odd and is because the estimate of p is unreliable this problem apparently costs sa a nonnegligible drop in disambiguation accuracyin contrast mdl does not suffer from this problem since a high estimated probability value is only possible with high frequency which cannot result just from extraction errorsconsider for example the occurrence of 
car in the data shown in figure 8 which has supposedly resulted from an erroneous extractionthe effect of this datum gets washed away as the estimated probability for vehicle to which car has been generalized is negligibleon the other hand sa has a merit not shared by mdl namely its use of the association ratio factors out the effect of absolute frequencies of words and focuses on their cooccurrence relationsince both mdl and sa have pros and cons it would be desirable to develop a methodology that combines the merits of the two methods mdl and lala makes its disambiguation decision completely ignoring noun2as resnik pointed out if we hope to improve disambiguation performance by increasing training data we need a richer model such as those used in mdl and sawe found that 88 of the quadruples in our entire test data were such that they shared the same verb prep nouni but had different noun2 and their ppattachment sites go both ways in the same data ie both to verb and to nouniclearly for these examples the ppattachment site cannot be reliably determined without knowing noun2table 15 shows some of these examplesmdl and telwe chose tel as an example of the quadruple approachthis method was designed specifically for the purpose of resolving ppattachment ambiguities and seems to perform slightly better than oursas we remarked earlier however the input data required by our method could be generated automatically from unparsed corpora making use of existing heuristic rules although for the experiments we report here we used a parsed corpusthus it would seem to be easier to obtain more data in the future for mdl and other methods based on unsupervised learningalso note that our method of generalizing values of a case slot can be used for purposes other than disambiguationwe proposed a new method of generalizing case framesour approach of applying mdl to estimate a tree cut model in an existing thesaurus is not limited to just the problem of generalizing values of a case frame slotit is potentially useful in other natural language processing tasks such as the problem of estimating ngram models or the problem of semantic tagging we believe that our method has the following merits it is theoretically sound it is computationally efficient it is robust against noiseour experimental results indicate that the performance of our method is better than or at least comparable to existing methodsone of the disadvantages of our method is that its performance depends on the structure of the particular thesaurus usedthis however is a problem commonly shared by any generalization method that uses a thesaurus as prior knowledgefor an arbitrary subtree t of a thesaurus tree t and an arbitrary tree cut model m 0 of t let mt denote the submodel of m that is contained in talso for any sample s and any subtree t of t let st denote the subsample of s contained in t then define in general for any submodel mt and subsample st ust i or to be the data description length of subsample st using submodel l to be the parameter description length for the submodel mr and l to be l lfirst note that for any tree t model mt contained in t and sample st contained in t and t child subtrees t i 1 k we have provided that ft is not a single node this follows from the mutual disjointness of the t and the independence of the parameters in the tiwe also have when t is a proper subtree of the thesaurus tree since the number of free parameters of a model in the entire thesaurus tree equals the number of nodes in the model minus one due to the 
stochastic condition when t equals the entire thesaurus tree theoretically the parameter description length for a tree cut model of t should be where is i is the size of the entire samplesince the second term log21s1 in is constant once the input sample s is fixed for the purpose of finding a model with the minimum description length it is irrelevantwe will thus use the identity both when t is the entire tree and when it is a proper subtreeit follows from and that the minimization of description length can be done essentially independently for each subtreenamely if we let l denote the minimum description length achievable for model mt on sample st contained in tree t ps the mle estimate for node n using the entire sample s and root the root node of tree t then we have min e st l l ps st the rest of the proof proceeds by inductionfirst when t is of a single leaf node the submodel consisting solely of the node and the mle of the generation probability for the class represented by t is returned which is clearly a submodel with minimum description length in the subtree t next inductively assume that findmdl correctly outputs a model with the minimum description length for any tree t of size less than n then given a tree t of size n whose root node has at least two children say t i 1 k for each t findmdl returns a model with the minimum description length by the inductive hypothesisthen since holds whichever way the ifclause on lines 8 9 of findmdl evaluates to what is returned on line 11 or line 13 will still be a model with the minimum description length completing the inductive stepit is easy to see that the running time of the algorithm is linear in both the number of leaf nodes of the input thesaurus tree and the input sample sizewe are grateful to k nakamura and t fujita of nec cc reslabs for their constant encouragementwe thank k yaminishi and j takeuchi of cc reslabs for their suggestions and commentswe thank t futagami of nis for his programming effortswe also express our special appreciation to the two anonymous reviewers who have provided many valuable commentswe acknowledge the acl for providing the acldci cdrom ldc of the university of pennsylvania for providing the penn treebank corpus data and princeton university for providing wordnet and e brill and p resnik for providing their ppattachment disambiguation program
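The find-MDL recursion described above can be stated very compactly in code. The sketch below is a minimal rendering of that dynamic program, assuming a thesaurus tree whose leaves carry raw noun frequencies for one (verb, slot) pair; the class names ("Node", "find_mdl") and the simplified handling of zero-frequency leaves are ours, and the paper's treatment of WordNet's DAG structure and word-sense ambiguity is omitted. The usage example reproduces the BIRD-subtree numbers quoted in the paper (76.57 + 3.32 vs. 53.73 + 33.22).

```python
import math

class Node:
    def __init__(self, name, children=(), freq=0):
        self.name, self.children, self.freq = name, list(children), freq

def n_leaves(t):
    return 1 if not t.children else sum(n_leaves(c) for c in t.children)

def subtree_freq(t):
    return t.freq if not t.children else sum(subtree_freq(c) for c in t.children)

def data_dl(cut, sample_size):
    """L(S|cut): -sum_n log2 P(n), where P(n) = (f(C)/|S|) / |C| for n in class C."""
    total = 0.0
    for c in cut:
        f = subtree_freq(c)
        if f > 0:
            total += -f * math.log2(f / (sample_size * n_leaves(c)))
    return total

def description_length(cut, sample_size):
    # Parameter description length (k/2) log2 |S| plus the data description length;
    # as in the paper, the "minus one" on the number of free parameters is ignored.
    return len(cut) / 2 * math.log2(sample_size) + data_dl(cut, sample_size)

def find_mdl(t, sample_size):
    """Return the tree cut of t with minimum description length (dynamic programming)."""
    if not t.children:
        return [t]
    cut = [node for child in t.children for node in find_mdl(child, sample_size)]
    if description_length([t], sample_size) < description_length(cut, sample_size):
        return [t]          # collapsing to a single class is cheaper
    return cut              # otherwise keep the appended optimal sub-cuts

# The BIRD-subtree example from the paper: 10 nouns under BIRD, with frequencies
# bird=8, swallow=2, crow=1, eagle=1 (others 0), as part of a 100-example sample.
names = ["bird", "swallow", "crow", "eagle"] + ["w%d" % i for i in range(6)]
freqs = {"bird": 8, "swallow": 2, "crow": 1, "eagle": 1}
bird = Node("BIRD", [Node(n, freq=freqs.get(n, 0)) for n in names])
print([n.name for n in find_mdl(bird, sample_size=100)])   # -> ['BIRD'] (generalize)
print(round(description_length([bird], 100), 2))           # ~79.89, i.e. 76.57 + 3.32
print(round(description_length(bird.children, 100), 2))    # ~86.95, i.e. 53.73 + 33.22
```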
J98-2002
Generalizing case frames using a thesaurus and the MDL principle. A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the minimum description length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as cuts in the thesaurus tree, thus reducing the generalization problem to that of estimating a tree cut model of the thesaurus tree. An efficient algorithm is given which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon, or is at least comparable with, existing methods. We use a minimum description length-based algorithm to find an optimal tree cut over WordNet for each classification problem, finding improvements over both lexical association and conceptual association, and equaling the transformation-based results. We propose a model in which the appropriate cut c is selected according to the minimum description length principle; this principle explicitly accounts for the tradeoff between generalisation and accuracy by minimising a sum of model description length and data description length.
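In the disambiguation experiments described above, an attachment decision for a quadruple (verb, noun1, prep, noun2) is made by comparing the probability of the class dominating noun2 in the tree cut model for the verb's prep slot against the corresponding probability in the model for noun1's prep slot, with no decision made when the difference falls below a threshold. The sketch below is a minimal rendering of that decision rule under the assumption that the learned tree cut models are available as dictionaries from class names to probabilities; the helper names and the toy probability values in the usage example are ours, not the paper's estimates.

```python
def class_of(noun, tree_cut_model, thesaurus_ancestors):
    """Return the class in the tree cut that dominates `noun` (or None)."""
    for cls in thesaurus_ancestors(noun):        # noun itself first, then its ancestors
        if cls in tree_cut_model:
            return cls
    return None

def pp_attach(verb, noun1, prep, noun2, verb_models, noun_models,
              thesaurus_ancestors, threshold=0.0):
    """Return 'verb', 'noun', or None (no decision) for (verb, noun1, prep, noun2)."""
    vm = verb_models.get((verb, prep), {})       # tree cut model for the verb's prep slot
    nm = noun_models.get((noun1, prep), {})      # tree cut model for noun1's prep slot
    p_v = vm.get(class_of(noun2, vm, thesaurus_ancestors), 0.0)
    p_n = nm.get(class_of(noun2, nm, thesaurus_ancestors), 0.0)
    if p_v - p_n > threshold:
        return "verb"
    if p_n - p_v > threshold:
        return "noun"
    return None                                  # difference below threshold: no decision

# Toy usage on the examples quoted in the paper ("see man with telescope" -> verb,
# "see girl with scarf" -> noun), with invented miniature models and class names:
ancestors = lambda n: {"telescope": ["telescope", "<instrument>", "<artifact>"],
                       "scarf": ["scarf", "<clothing>", "<artifact>"]}.get(n, [n])
verb_models = {("see", "with"): {"<instrument>": 0.4, "<artifact>": 0.1}}
noun_models = {("man", "with"): {}, ("girl", "with"): {"<clothing>": 0.5}}
print(pp_attach("see", "man", "with", "telescope", verb_models, noun_models, ancestors))  # verb
print(pp_attach("see", "girl", "with", "scarf", verb_models, noun_models, ancestors))     # noun
```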
new figures of merit for bestfirst probabilistic chart parsing bestfirst parsing methods for natural language try to parse efficiently by considering the most likely constituents first some figure of merit is needed by which to compare the likelihood of constituents and the choice of this figure has a substantial impact on the efficiency of the parser while several parsers described in the literature have used such techniques there is little published data on their efficacy much less attempts to judge their relative merits we propose and evaluate several figures of merit for bestfirst parsing and we identify an easily computable figure of merit that provides excellent performance on various measures and two different grammars bestfirst parsing methods for natural language try to parse efficiently by considering the most likely constituents firstsome figure of merit is needed by which to compare the likelihood of constituents and the choice of this figure has a substantial impact on the efficiency of the parserwhile several parsers described in the literature have used such techniques there is little published data on their efficacy much less attempts to judge their relative meritswe propose and evaluate several figures of merit for bestfirst parsing and we identify an easily computable figure of merit that provides excellent performance on various measures and two different grammarschart parsing is a commonly used algorithm for parsing natural language textsthe chart is a data structure that contains all of the constituents for which subtrees have been found that is constituents for which a derivation has been found and which may therefore appear in some complete parse of the sentencethe agenda is a structure that stores a list of constituents for which a derivation has been found but which have not yet been combined with other constituentsinitially the agenda contains the terminal symbols from the sentence to be parseda constituent is removed from the agenda and added to the chart and the system considers how this constituent can be used to extend its current structural hypothesis by combining with other constituents in the chart according to the grammar rules in general this can lead to the creation of new more encompassing constituents which themselves are then added to the agendawhen one constituent has been processed a new one is chosen to be removed from the agenda and so ontraditionally the agenda is represented as a stack so that the last item added to the agenda is the next one removedchart parsing is described extensively in the literature for one such discussion see ection 14 of charniak bestfirst probabilistic chart parsing is a variation of chart parsing that attempts to find the most likely parses first by adding constituents to the chart in order of the likelihood that they will appear in a correct parse rather than simply popping constituents off of a stacksome probabilistic figure of merit is assigned to the constituents on the agenda and the constituent maximizing this value is the next to be added to the chartin this paper we consider probabilities primarily based on probabilistic contextfree grammars though in principle other more complicated schemes could be usedthe purpose of this work is to compare how well several figures of merit select constituent nja in a sentence ton constituents to be moved from the agenda to the chartideally we would like to use as our figure of merit the conditional probability of that constituent given the entire sentence in order to choose 
a constituent that not only appears likely in isolation but is most likely given the sentence as a whole that is we would like to pick the constituent that maximizes the following quantity where to is the sequence of the n tags or parts of speech in the sentence and nipin our experiments we use only tag sequences for parsingmore accurate probability estimates should be attainable using lexical information in future experiments as more detail usually leads to better statistics but lexicalized figures of merit are beyond the scope of the research described herenote that our quotidealquot figure is simply a heuristic since there is no guarantee that a constituent that scores well on this measure will appear in the correct parse of a sentencefor example there may be a very large number of lowprobability derivations of n 1c which are combined here to give a high value but a parse of the 1 sentence can only include one of these derivations making it unlikely that 1141k appears in the most probable parse of the sentenceon the other hand there is no reason to believe that such cases are common in practicewe cannot calculate p since in order to do so we would need to completely parse the sentencein this paper we examine the performance of several proposed figures of merit that approximate it in one way or another using two different grammarswe identify a figure of merit that gives superior results on all of our performance measures and on both grammarssection 2 of this paper describes the method we used to determine the effectiveness of figures of merit that is to compare how well they choose constituents to be moved from the agenda to the chartsection 21 explains the experiment section 22 describes the measures we used to compare the performance of the figures of merit and section 23 describes a model we used to represent the performance of a traditional parser using a simple stack as an agendain section 3 we describe and compare three simple and easily computable figures of merit based on inside probabilitysections 31 through 33 describe each figure in detail and section 34 presents the results of an experiment comparing these three figuressections 4 and 5 have a similar structure to section 3 with section 4 evaluating two figures of merit using statistics on the leftside context of the constituent and section 5 evaluating three additional figures of merit using statistics on the context on both sides of the constituentsection 6 contains a table summarizing the results from sections 3 4 and 5in section 7 we use another grammar in the experiment to verify that our results are not an artifact of the grammar used for parsingsection 8 describes previous work in this area and section 9 presents our conclusions and recommendationsthere are also three appendices to this paperappendix a gives our method for computing inside probability estimates while maintaining parser speedappendix b explains how we obtained our boundary statistics used in section 5appendix c presents data comparing the parsing accuracy obtained by each of our parsers as the number of edges they create increaseswe used as our first grammar a probabilistic contextfree grammar learned from the brown corpus this grammar contains about 5000 rules using 32 terminal and nonterminal symbolswe parsed 500 sentences of length 3 to 30 from the penn treebank wall street journal corpus using a bestfirst parsing method and various estimates for p ton as the figure of meritfor each figure of merit we compared the performance of bestfirst parsing 
using that figure of merit to exhaustive parsingby exhaustive parsing we mean continuing to parse until there are no more constituents available to be added to the chartwe parse exhaustively to determine the total probability of a sentence that is the sum of the probabilities of all parses found for that sentencewe then computed several quantities for bestfirst parsing with each figure of merit at the point where the bestfirst parsing method has found parses contributing at least 95 of the probability mass of the sentencethe 95 figure is simply a convenience see appendix c for a discussion of speed versus accuracywe compared the figures of merit using the following measures includes only words within the constituentthe statistics converged to their final values quicklythe edgecount percentages were generally within 01 of their final values after processing only 200 sentences so the results were quite stable by the end of our 500sentence test corpuswe gathered statistics for each sentence length from 3 to 30sentence length was limited to a maximum of 30 because of the huge number of edges that are generated in doing a full parse of long sentences using this grammar sentences in this length range have produced up to 130000 edgesas a basis for comparison we measured the cpu time for a nonbestfirst version of the parser to completely parse all 500 sentencesthe cpu time needed by this version of the parser was 4882 secondsfor a bestfirst version of the parser to be useful it must be able to find the most probable parse in less than this amount of timehere for the bestfirst parsers we will use for convenience the time needed to get 95 of the sentence total probability massit seems reasonable to base a figure of merit on the inside probability 0 of the constituentinside probability is defined as the probability of the words or tags in the constituent given that the constituent is dominated by a particular nonterminal symbol see figure 2this seems to be a reasonable basis for comparing constituent probabilities and has the additional advantage that it is easy to compute during chart parsingappendix a gives details of our online computation of 0the inside probability of the constituent nip see figure 3in this equation we can see that a and p represent the influence of the surrounding wordsthus using 13 alone assumes that a and p can be ignoredwe will refer to this figure of merit as straight 0one side effect of omitting the a and p terms in the straight j3 figure above is that inside probability alone tends to prefer shorter constituents to longer ones as the inside probability of a longer constituent involves the product of more probabilitiesthis can result in a quotthrashingquot effect as noted in chitrao and grishman where the system parses short constituents even very lowprobability ones while avoiding combining them into longer constituentsto avoid thrashing some technique is used to normalize the inside probability for use as a figure of meritone approach is to take the geometric mean of the inside probability to obtain a perword inside probability term acts as a normalizing factorthe perword inside probability of the constituent nipwe will refer to this figure as normalized 13an alternative way to rewrite the quotidealquot figure of merit is as follows once again applying the usual independence assumption that given a nonterminal the tag sequence it generates depends only on that nonterminal we can rewrite the figure of merit as follows to derive an estimate of this quantity for practical 
use as a figure of merit we make some additional independence assumptionswe assume that p 19 that is that the probability of a nonterminal is independent of the tags before and after it in the sentencewe also use a trigram model for the tags themselves giving we can calculate 13 as usualthe p term is estimated from our pcfg and the training data from which the grammar was learnedwe estimate p term is just the probability of the tag sequence t1 tk_i according to a trigram modelour tritag probabilities p were learned from the training data used for the grammar using nonzerolength edges for 95 of the probability mass for the 0 estimates the deleted interpolation method for smoothingour figure of merit uses we refer to this figure of merit as the trigram estimatethe results for the three figures of merit introduced in the last section according to the measurements given in section 22 are shown in table 1 figure 4 expands the non0 e data to show the percent of nonzerolength edges needed to get 95 of the probability mass for each sentence lengthstraight 13 performs quite poorly on this measurein order to find 95 of the probability mass for a sentence a parser using this figure of merit typically needs to do over 90 of the workon the other hand normalized 13 and the trigram estimate both result in substantial savings of workhowever while these two models produce average cpu time for 95 of the probability mass for the 0 estimates nearequivalent performance for short sentences for longer sentences with length greater than about 15 words the trigram estimate gains a clear advantagein fact the performance of normalized 13 appears to level off in this range while the amount of work done using the trigram estimate shows a continuing downward trendfigure 5 shows the average cpu time to get 95 of the probability mass for each estimate and each sentence lengtheach estimate averaged below 1 second on sentences of fewer than 7 wordsnote that while straight 0 does perform better than the quotstackquot model in cpu time the two models approach equivalent performance as sentence length increases which is what would be expected from the edge count measuresthe other two models provide a real time savings over the quotstackquot model as can be seen from figure 5 and from the total cpu times given earlierthrough most of the length range the cpu time needed by the normalized 0 and the trigram estimate is quite close but at the upper end of the range we can see better performance by the trigram estimateearlier we showed that our ideal figure of merit can be written as however the a term representing outside probability cannot be calculated directly during a parse since we need the full parse of the sentence to compute itin some of our figures of merit we use the quantity p which is closely related to outside probabilitywe call this quantity the left outside probability and denote it al the following recursive formula can be used to compute allet elk be the set of all edges or rule expansions in which the nonterminal 11k appearsfor each edge e in e we compute the product of cti of the nonterminal appearing on the lefthand pc side of the rule the probability of the rule itself and 13 of each nonterminal ks appearing to the left of ni hk in the rulethen al is the sum of these products given a complete parse of the sentence the formula above gives an exact value for alduring parsing the set elk is not complete and so the formula gives an approximation of althis formula can be infinitely recursive depending on the 
properties of the grammara method for calculating al more efficiently can be derived from the calculations given in jelinek and lafferty a simple extension to the normalized 0 model allows us to estimate the perword probability of all tags in the sentence through the end of the constituent under considerationthis allows us to take advantage of information already obtained in a leftright parsewe calculate this quantity as follows 1val0we are again taking the geometric mean to avoid thrashing by compensating for the al13 quantity preference for shorter constituents as explained in the previous sectionwe refer to this figure of merit as normalized at13we also derived an estimate of the ideal figure of merit that takes advantage of statistics on the first j 1 tags of the sentence as well as tikthis estimate represents the probability of the constituent in the context of the preceding tagswe again make the independence assumption that pk i nja toi ti 13 additionally we assume that p and p are independent of p giving the denominator p is once again calculated from a tritag modelthe p term is just au defined above in the discussion of the normalized al13 modelthus this figure of merit can be written as we will refer to this as the prefix estimatethe results for the figures of merit introduced in the previous section according to the measurements given in section 22 are shown in table 2 the geometricmeanbased models with sentence length can be seen clearlysecond when we consider only the two conditionalprobability models we can see that the additional information obtained from context in the prefix estimate gives a substantial improvement in this measure as compared to the trigram estimatehowever the cpu time needed to compute the al term exceeds the time saved by processing fewer edgesnote that using this estimate the parser took over 26000 seconds to get 95 of the probability mass while the quotstackquot model can exhaustively parse the test data in less than 5000 secondsfigure 8 shows the average cpu time for each sentence lengthwhile chart parsing and calculations of 13 can be done in 0 time we have been unable to find an algorithm to compute the al terms faster than 0when a constituent is removed from the agenda it only affects the 1 values of its ancestors in the parse trees however al values are propagated to all of the constituent siblings to the right and all of its descendantsrecomputing the al terms when a constituent is removed from the agenda can be done in 0 time and since there are 0 possible constituents the total time needed to compute the at terms in this manner is 0although the albased models seem impractical the edgecount and constituentcount statistics show that contextual information is usefulwe can derive an estimate similar to the prefix estimate but containing a much simpler model of the context as follows once again applying the usual independence assumption that given a nonterminal the tag sequence it generates depends only on that nonterminal we can rewrite the figure of merit as follows as usual we use a trigram model for the tags giving p p rz p that is that the probability of a nonterminal is dependent on the tag immediately before it in the sentence then we have we can calculate 3 and the tritag probabilities as usualthe pwe will refer to this figure as the left boundary trigram estimatewe can derive a similar estimate using context on both sides of the constituent as follows once again applying the usual independence assumption that given a nonterminal the tag 
sequence it generates depends only on that nonterminal and also assuming that the probability of t depends only on the previous tags we can rewrite the figure of merit as follows now we add some new independence assumptionswe assume that the probability of the nonterminal depends only on the immediately preceding tag and that the probability of the tag immediately following the nonterminal depends only on the nonterminal giving we can calculate 0 and the tritag probabilities as usualthe p and p probabilities are estimated from our training data by parsing the training data and counting the occurrences of the nonterminal and the tag weighted by their probability in the parseagain see appendix b for details of how these estimates were obtainedwe will refer to this figure as the boundary trigram estimatewe also wished to examine whether contextual information by itself is sufficient as a figure of meritwe can derive an estimate based only on easily computable contextual information as follows most of the independence assumptions we make are the same as in the boundary trigram estimatewe assume that the probability of the nonterminal depends only on the previous tag that the probability of the immediately following tag depends only on the nonterminal and that the probability of the tags following that depend only on the previous tagshowever we make one independence assumption that differs from all of our previous estimatesrather than assuming that the probability of the tags within the constituent depends on the nonterminal giving an inside probability term we assume that the probability of these tags depends only on the previous tagsthen we have which is simply the product of the two boundary statistics described in the previous sectionwe refer to this estimate as boundary statistics only for which we use 1 then at run time we only use the trigram probabilities for tok nonzerolength edges for 95 of the probability mass for the boundary estimatesthe results for the figures of merit introduced in the previous section according to the measurements given in section 22 are shown in table 3figure 11 shows a graph of non0 e for each sentence length for the boundary models and the trigram and prefix estimatesthis graph shows that the contextual information gained from using old in the prefix estimate is almost completely included in just the previous tag as illustrated by the left boundary trigram estimateadding right contextual information in the boundary trigram estimate gives us the best performance on this measure of any of our figures of meritwe can consider the left boundary trigram estimate to be an approximation of the prefix estimate where the effect of the left context is approximated by the effect of the single tag to the leftsimilarly the boundary trigram estimate is an approximation to an estimate involving the full context ie an estimate involving the outside probability ahowever the parser cannot compute the outside probability of a constituent during a parse and so in order to use context on both sides of the constituent we need to use something like our boundary statisticsour results suggest that a single tag before or after the constituent can be used as a reasonable approximation to the full context on average cpu time for 95 of the probability mass for the boundary estimates that side of the constituentfigure 12 shows the average cpu time for each sentence lengthsince the boundary trigram estimate has none of the overhead associated with the prefix estimate it is the best 
performer in terms of cpu time as wellwe can also see that using just the boundary statistics which can be precomputed and require no extra processing during parsing still results in a substantial improvement over the nonbestfirst quotstackquot modelas another method of comparison between the two bestperforming estimates the contextdependent boundary trigram model and the contextindependent trigram model we compared the number of edges needed to find the first parse for averagelength sentencesthe average length of a sentence in our test data is about 22 wordsfigure 13 shows the percentage of sentences of length 18 through 26 for which a parse could be found within 2500 edgesfor this experiment we used a separate test set from the wall street journal corpus containing approximately 570 sentences in the desired length rangethis measure also shows a real advantage of the boundary trigram estimate over the trigram estimatetable 4 summarizes the results obtained for each figure of meritto verify that our results are not an artifact of the particular grammar we chose for testing we also tested using a treebank grammar introduced in charniak this of the 18 to 26word sentences finding a parse in a fixed number of edges grammar was trained in a straightforward way by reading the grammar directly from a portion of the penn treebank wall street journal data comprised of about 300000 wordsthe boundary statistics were counted directly from the training data as wellthe treebank grammar is much larger and more ambiguous than our original grammar containing about 16000 rules and 78 terminal and nonterminal symbols and it was impractical to parse sentences to exhaustion using our existing hardware so the figures based on 95 of the probability mass could not be computedwe were able to use this grammar to compare the number of edges needed to find the first parse using the trigram and boundary trigram estimates of the 18 to 26word sentences finding a parse in a fixed number of edges for a treebank grammarfigure 14 shows the percentage of sentences of length 18 through 26 for which a parse could be found within 20000 edgesagain we used a test set of approximately 570 sentences of the appropriate length from the wall street journal corpusalthough the xaxis covers a much wider range than in figure 13 the relationship between the two estimates is quite similarin an earlier version of this paper we presented the results for several of these models using our original grammarthe treebank grammar was introduced in charniak and the parser in that paper is a bestfirst parser using the boundary trigram figure of meritthe literature shows many implementations of bestfirst parsing but none of the previous work shares our goal of explicitly comparing figures of meritbobrow and chitrao and grishman introduced statistical agendabased parsing techniqueschitrao and grishman implemented a bestfirst probabilistic parser and noted the parser tendency to prefer shorter constituentsthey proposed a heuristic solution of penalizing shorter constituents by a fixed amount per wordmiller and fox compare the performance of parsers using three different types of grammars and show that a probabilistic contextfree grammar using inside probability as a figure of merit outperforms both a contextfree grammar and a contextdependent grammarkochman and kupin propose a figure of merit closely related to our prefix estimatethey do not actually incorporate this figure into a bestfirst parsermagerman and marcus use the geometric mean to compute a 
figure of merit that is independent of constituent lengthmagerman and weir use a similar model with a different parsing algorithmwe have presented and evaluated several figures of merit for bestfirst parsingthe best performer according to all of our measures was the parser using the boundary trigram estimate as a figure of merit and this result holds for two different grammarsthis figure has the additional advantage that it can be easily incorporated into existing bestfirst parsers using a figure of merit based on inside probabilitywe strongly recommend this figure of merit as the basis for bestfirst statistical parsersthe measurements presented here almost certainly underestimate the true benefits of this modelwe restricted sentence length to a maximum of 30 words in order to keep the number of edges in the exhaustive parse to a practical size however since the percentage of edges needed by the bestfirst parse decreases with increasing sentence length we assume that the improvement would be even more dramatic for sentences longer than 30 wordswe compute estimates of the inside probability for each proposed constituent incrementally as new constituents are added to the chartinitially is set to 1 for each terminal symbol since our input is given as a stream of tags which are our terminalswhen a new proposed constituent is added to the agenda its 3 estimate is set to its current inside probability according to the constituents already in the charthowever as more constituents are added to the chart we may find a new way to build up a proposed constituent ie additional evidence for that proposed constituent so we need to update the 13 for the proposed constituent these updates can be quite expensive in terms of cpu timehowever many of the updates are quite small and do not affect the relative ordering of the proposed constituents on the agendainstead of propagating every change to 0 then we only want to propagate those changes that we expect to have an effect on this orderingwhat we have done is to have each constituent store not only its value but also an incrementincreases to the inside probability are added not to itself but to this increment until the increment exceeds some thresholdexperimentally we have found that we can avoid propagating increments until they exceed 1 of the current value of with very little effect on the parser selection of constituents from the agendathis thresholding on the propagation of allows us to update the values on line while still keeping the performance of the parser as 0 empiricallyour figures of merit incorporating boundary statistics use the figures p to represent the effect of the left context and psince we use the tags as our input the probability of a nonterminal appearing with a particular previous tag is the same as the probability of that nonterminal appearing in any sentence containing that tagwe can then count the probabilityweighted occurrences of a nonterminal given the previous tag as follows that is for each sentence that contains the previous tag tj_i we increment our count by the probability of the nonterminal nk occurring immediately following ti_i in that sentencesince we have a complete parse the inside and outside probabilities and the sentence probability can be easily computedwe can also obtain the count c simply by counting the number of sentences in which that tag appears in position j 1we then obtain the conditional probability for the left boundary statistic as follows the right boundary statistic is computed in the corresponding 
wayfor the experiment using the treebank grammar these statistics were obtained by counting directly from the wall street journal treebank corpus just as the grammar rules and trigram statistics wereas an additional verification of our results we gathered data on speed versus accuracyfor this experiment we used the probabilistic contextfree grammar learned from the brown corpus and the averagelength test sentences described in section 54for each figure of merit we computed the average precision and recall of the best parse found as compared to the number of edges createdwe computed unlabeled precision and recall only since our grammar uses a different set of nonterminals from those used in the test dataprecision is defined as the percentage of the constituents proposed by our parser that are actually correct according to the treebankfor each edge count we measured the precision of the best parse of each sentence found within that number of edgesfigure 15 is a graph of the average precision for the 0 figures of merit from section 3 plotted against edge countsthe fluctuations at the low edge counts are due to the small amount of data at this levelat a low edge count very few sentences have actually been parsed and since these sentences tend to be short and simple the parses are likely to be correctthe sentences that could not be parsed do not contribute to the measurement of precisionas more sentences are parsed precision settles at about 47 the highest precision attainable by our particular test grammar and remains there as edge counts increaserecall of the best parse found in a fixed number of edges for the estimatesrecall of the best parse found in a fixed number of edges for the boundary estimatescaraballo and charniak figures of merit this level of precision is independent of the figure of merit used so measurement of precision does not help evaluate our figures of merita much more useful measure is recallrecall is defined as the percentage of constituents in the treebank test data that are found by our parseragain we measured the recall of the best parse of each sentence found within each number of edgesfigure 16 shows the results for the figures of merit from section 3straight beta clearly shows little or no improvement over the quotstackquot parser using no figure of merit at allthe other figures of merit increase quickly to about 64 the maximum recall attainable with our test grammarthe quotstackquot parser and the one using straight beta on the other hand do not reach this maximum level until about 50000 edgeswe have no explanation for the relatively poor performance of the parser using the trigram estimate compared to the other bestfirst parsers as shown in figures 16 17 and 18figure 17 shows the recall values for the al0 figures of merit from section 4 and figure 18 shows recall for the boundary figures of merit from section 5since precision is not a useful measure we have not included precision data for these figures of meritthese data confirm that the parser using the boundary trigram figure of merit performs better than any of the othersrecall using this figure of merit is consistently higher than any of the others at low edge counts and it reaches the maximum value in fewer than 2000 edges with the nearest competitors approaching the maximum at about 3000 edgesthe authors are very grateful to heidi fox for obtaining the speed vs accuracy data discussed in appendix c we also wish to thank the anonymous reviewers for their comments and suggestionsthis research was supported in 
part by nsf grant iri-9319516 and by onr grant n0014-96-1-0549
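as a concrete illustration of the bestfirst agenda loop described above, the following python sketch drives a chart parser from a priority queue keyed by a pluggable figure of merit. it is a minimal sketch, not the article's implementation: the grammar and lexicon encodings, the function names (best_first_parse, figure_of_merit, push) and the use of the best derivation score in place of a summed inside probability are simplifying assumptions made for illustration only.

    import heapq

    def best_first_parse(tags, grammar, lexicon, figure_of_merit, max_edges=50000):
        """minimal best-first chart parser sketch (illustrative, not the article's parser).

        grammar: dict mapping (B, C) -> list of (A, p) for binary rules A -> B C
        lexicon: dict mapping tag   -> list of (A, p) for preterminal rules A -> tag
        figure_of_merit: callable (item, tags) -> float, where item = (label, i, k, beta)
        """
        n = len(tags)
        chart = {}      # (label, i, k) -> best inside score seen so far (max, not sum)
        agenda = []     # max-heap simulated by negating the figure-of-merit score

        def push(item):
            heapq.heappush(agenda, (-figure_of_merit(item, tags), item))

        for i, tag in enumerate(tags):
            for label, p in lexicon.get(tag, []):
                push((label, i, i + 1, p))

        edges = 0
        while agenda and edges < max_edges:
            _, (label, i, k, beta) = heapq.heappop(agenda)
            if chart.get((label, i, k), 0.0) >= beta:
                continue                      # nothing new for this constituent
            chart[(label, i, k)] = beta
            edges += 1
            if label == "S" and i == 0 and k == n:
                break                         # first full parse; keep popping for more probability mass if desired
            # combine the new constituent with adjacent constituents already in the chart
            for (other, j, m), beta2 in list(chart.items()):
                if m == i:                    # other is immediately to the left
                    for parent, p in grammar.get((other, label), []):
                        push((parent, j, k, p * beta2 * beta))
                elif j == k:                  # other is immediately to the right
                    for parent, p in grammar.get((label, other), []):
                        push((parent, i, m, p * beta * beta2))
        return chart

with this interface, straight beta corresponds to lambda item, _: item[3], and normalized beta to lambda item, _: item[3] ** (1.0 / (item[2] - item[1])), so different figures of merit can be compared without touching the agenda loop itself.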
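the estimates of sections 3 and 5 can be written down compactly once the inside probability, a tritag model, and the boundary statistics are available. the sketch below gives one plausible reading of straight beta, normalized beta, the trigram estimate and the boundary trigram estimate; the helper signatures (trigram, p_nt, p_nt_given_prev, p_next_given_nt) are hypothetical stand-ins for statistics gathered from training data, and smoothing and sentence-boundary handling are glossed over.

    def straight_beta(beta):
        return beta

    def normalized_beta(beta, span_len):
        # geometric mean: per-word inside probability, to avoid favouring short constituents
        return beta ** (1.0 / span_len)

    def span_trigram_prob(tags, i, k, trigram):
        # probability of the tags inside the span [i, k) under the tritag model,
        # conditioned on the two tags preceding the span (padded at the sentence start)
        padded = ["<s>", "<s>"] + list(tags)
        p = 1.0
        for m in range(i, k):
            p *= trigram(padded[m + 2], padded[m], padded[m + 1])
        return p

    def trigram_estimate(label, i, k, tags, beta, trigram, p_nt):
        # prior of the nonterminal times inside probability, normalized by the
        # tritag probability of the tags it spans
        return p_nt(label) * beta / span_trigram_prob(tags, i, k, trigram)

    def boundary_trigram_estimate(label, i, k, tags, beta, trigram,
                                  p_nt_given_prev, p_next_given_nt):
        # condition the nonterminal on the single tag to its left and the tag to its
        # right on the nonterminal, as in the boundary trigram estimate
        prev_tag = tags[i - 1] if i > 0 else "<s>"
        next_tag = tags[k] if k < len(tags) else "</s>"
        left = p_nt_given_prev(label, prev_tag)
        right = p_next_given_nt(next_tag, label)
        return left * beta * right / span_trigram_prob(tags, i, k, trigram)

any of these can be plugged into the agenda loop sketched earlier by wrapping it as a callable over (label, i, k, beta) and the tag sequence.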
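the recursive definition of the left outside probability al can be approximated over whatever edges the chart currently contains. the sketch below assumes a hypothetical edges map from a constituent span to the (parent span, rule probability, left siblings) records in which it appears on a right-hand side; the zero-initialised memo entry is one simple way to cut off the infinite recursion the text warns about, at the cost of underestimating al for cyclic derivations.

    def alpha_left(span, edges, beta, root_span, memo=None):
        """approximate left outside probability over the edges built so far (a sketch).

        edges: dict mapping a span (label, i, k) to a list of
               (parent_span, rule_prob, left_sibling_spans) records
        beta:  dict mapping spans to their current inside-probability estimates
        """
        if memo is None:
            memo = {}
        if span == root_span:
            return 1.0                     # the root hypothesis has left outside probability 1
        if span in memo:
            return memo[span]
        memo[span] = 0.0                   # guard: cuts off infinitely recursive grammars
        total = 0.0
        for parent, rule_prob, left_sibs in edges.get(span, []):
            term = alpha_left(parent, edges, beta, root_span, memo) * rule_prob
            for sib in left_sibs:
                term *= beta.get(sib, 0.0)
            total += term
        memo[span] = total
        return total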
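the thresholded propagation of inside-probability increments described in appendix a can be mocked up with a small cell object. the class below is a simplification: it treats the weight linking a child beta to each parent as fixed at the time the link is added, whereas a real parser would recompute it from the current sibling inside probabilities at propagation time; the 1 percent threshold matches the figure quoted in the appendix.

    class BetaCell:
        """inside-probability cell with deferred propagation, loosely after appendix a."""

        def __init__(self, value=0.0, threshold=0.01):
            self.value = value
            self.pending = 0.0             # accumulated increments not yet propagated
            self.threshold = threshold
            self.parents = []              # list of (parent_cell, weight) links

        def link_parent(self, parent, weight):
            self.parents.append((parent, weight))

        def add(self, delta):
            self.pending += delta
            # propagate only when the accumulated increment is large enough to matter
            if self.value == 0.0 or self.pending > self.threshold * self.value:
                increment, self.pending = self.pending, 0.0
                self.value += increment
                for parent, weight in self.parents:
                    parent.add(weight * increment)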
J98-2004
new figures of merit for bestfirst probabilistic chart parsing. bestfirst parsing methods for natural language try to parse efficiently by considering the most likely constituents first. some figure of merit is needed by which to compare the likelihood of constituents and the choice of this figure has a substantial impact on the efficiency of the parser. while several parsers described in the literature have used such techniques there is little published data on their efficacy much less attempts to judge their relative merits. we propose and evaluate several figures of merit for bestfirst parsing and we identify an easily computable figure of merit that provides excellent performance on various measures and two different grammars. we present bestfirst parsing with figures of merit that allows conditioning of the heuristic function on statistics of the input string
generating natural language summaries from multiple online sources we present a methodology for summarization of news about current events in the form of briefings that include appropriate background information the system that we developed summons uses the output of systems developed for the darpa message understanding conferences to generate summaries of multiple documents on the same or related events presenting similarities and differences contradictions and generalizations among sources of information we describe the various components of the system showing how information from multiple articles is combined organized into a paragraph and finally realized as english sentences a feature of our work is the extraction of descriptions of entities such as people and places for reuse to enhance a briefing we present a methodology for summarization of news about current events in the form of briefings that include appropriate background informationthe system that we developed summons uses the output of systems developed for the darpa message understanding conferences to generate summaries of multiple documents on the same or related events presenting similarities and differences contradictions and generalizations among sources of informationwe describe the various components of the system showing how information from multiple articles is combined organized into a paragraph and finally realized as english sentencesa feature of our work is the extraction of descriptions of entities such as people and places for reuse to enhance a briefingone of the major problems with the internet is the abundance of information and the resulting difficulty for a typical computer user to read all existing documents on a specific topiceven within the domain of current news the user task is infeasiblethere exist now more than 100 sources of live newswire on the internet mostly accessible through the worldwide web some of the most popular sites include news agencies and television stations like reuters news cnn web and clarinet enews online newspaper as well as online versions of print media such as the new york times on the web edition for the typical user it is nearly impossible to go through megabytes of news every day to select articles he wishes to readeven when the user can actually select all news relevant to the topic of interest he will still be faced with the problem of selecting a small subset that he can actually read in a limited time from the immense corpus of news availablehence there is a need for search and selection services as well as for summarization facilities there currently exist more than 40 search and selection services on the worldwide web such as dec altavista lycos and dejanews all of which allow keyword searches for recent newshowever only recently have there been practical results in the area of summarizationsummaries can be used to determine if any of the retrieved articles are relevant or can be read in place of the articles to learn about information of interest to the userexisting summarization systems typically use statistical techniques to extract relevant sentences from a news articlethis domainindependent approach produces a summary of a single article at a time which can indicate to the user what the article is aboutin contrast our work focuses on generation of a summary that briefs the user on information in which he has indicated interestsuch briefings pull together information of interest from multiple sources aggregating information to provide generalizations 
similarities and differences across articles and changes in perspective across timebriefings do not necessarily fully summarize the articles retrieved but they update the user on information he has specified is of interestwe present a system called summons shown in figure 1 which introduces novel techniques in the following areas as can be expected from a knowledgebased summarization system summons works in a restricted domainwe have chosen the domain of news on terrorism for several reasonsfirst there is already a large body of related research projects in information extraction knowledge representation and text planning in the domain of terrorismfor example earlier systems developed under the darpa message understanding conference were in the terrorist domain and thus we can build on these systems without having to start from scratchthe domain is important to a variety of users including casual news readers journalists and security analystsfinally summons is being developed as part of a general environment for illustrated briefing over live multimedia information of all muc system domains terrorism is more likely to have a variety of related images than other domains that were explored such as mergers and acquisitions or management successionin order to extract information of interest to the user summons makes use of components from several muc systemsthe output of such modules is in the form of summons architecture templates that represent certain pieces of information found in the source news articles such as victims perpetrators or type of eventby relying on these systems the task we have addressed to date is happily more restricted than direct summarization of full textthis has allowed us to focus on issues related to the combination of information in the templates and the generation of text to express themin order to port our system to other domains we would need to develop new templates and the information extraction rules required for themwhile this is a task we leave to those working in the information extraction field we note that there do exist tools for semiautomatically acquiring such rules this helps to alleviate the otherwise knowledgeintensive nature of the taskwe are working on the development of tools for domainindependent types of information extractionfor example our work on extracting descriptions of individuals and organizations and representing them in a formalism that facilitates reuse of the descriptions in summaries can be used in any domainin the remainder of this section we highlight the novel techniques of summons and explain why they are important for our workwith a few exceptions all existing summarizers provide summaries of single articles by extracting sentences from themif such systems were applied to a series of articles they might be able to extract sentences that have words in common with the other articles but they would be unable to indicate how sentences that were extracted from different articles were similarmoreover they would certainly not be able to indicate significant differences between articlesin contrast our work focuses on processing of information from multiple sources to highlight agreements and contradictions as part of the summarygiven the omnipresence of online news services one can expect that any interesting news event will be covered by several if not most of themif different sources present the same information the user clearly only needs to have access to one of thempractically this assumption does not hold as different sources 
provide updates from a different perspective and at different timesan intelligent summarizer task therefore is to attain as much information from the multiple sources as possible combine it and present it in a concise form to the userfor example if two sources of information report a different number of casualties in a particular incident summons will report the contradiction and attribute the contradictory information to its sources rather than select one of the contradictory pieces without the otheran inherent problem to summarizers based on sentence extraction is the lack of discourselevel fluency in the outputthe extracted sentences fit together only in the case they are adjacent in the source documentbecause summons uses language generation techniques to determine the content and wording of the summary based on information extracted from input articles it has all necessary information to produce a fluent surface summarywe show how the summary generated using symbolic techniques can be enhanced so that it includes descriptions of entities it containsif a user tunes in to news on a given event several days after the first report references to and descriptions of the event people and organizations involved may not be adequatewe collect such descriptions from online sources of past news and represent them using our generation formalism for reuse in later generation of summariesthe following section positions our research in the context of prior work in the areasection 3 describes the system architecture that we have developed for the summarization taskthe next two sections describe in more detail how a base summary is generated from multiple source articles and how the base summary is extended using descriptions extracted from online sourcessection 6 describes the current status of our systemwe conclude this article in sections 7 and 8 by describing some directions for future work in symbolic summarization of heterogeneous sourcesprevious work related to summarization falls into three main categoriesin the first full text is accepted as input and some percentage of the text is produced as outputtypically statistical approaches augmented with keyword or phrase matching are used to lift from the article full sentences that can serve as a summarymost of the work in this category produces a summary for a single article although there are a few exceptionsthe other two categories correspond to the two stages of processing that would have to be carried out if sentence extraction were not used analysis of the input document to identify information that should appear in a summary and generation of a textual summary from a set of facts that are to be includedin this section we first present work on sentence extraction next turn to work on identifying information in an article that should appear in a summary and conclude with work on generation of summaries from data showing how this task differs from the more general language generation taskthis is a systemsoriented perspective on summarizationrelated work focusing on techniques that have been implemented for the taskthere is also a large body of work on the nature of abstracting from a library science point of view this work distinguishes between different types of abstracts most notably indicative abstracts that tell what an article is about and informative abstracts that include major results from the article and can be read in place of itsummons generates summaries that are informative in natureresearch in psychology and education also focuses 
on how to teach people to write summaries this type of work can aid the development of summarization systems by providing insights into the human process of summarization that could be simulated in systemsto allow summarization in arbitrary domains researchers have traditionally applied statistical techniques this approach can be better termed extraction rather than summarization since it attempts to identify and extract key sentences from an article using statistical techniques that locate important phrases using various statistical measuresthis has been successful in different domains and is in fact the approach used in recent commercial summarizers rau brandow and mitze report that statistical summaries of individual news articles were rated lower by evaluators than summaries formed by simply using the lead sentence or two from the articlethis follows the principle of the quotinverted pyramidquot in news writing which puts the most salient information in the beginning of the article and leaves elaborations for later paragraphs allowing editors to cut from the end of the text without compromising the readability of the remaining textpaice also notes that problems for this approach center around the fluency of the resulting summaryfor example extracted sentences may accidentally include pronouns that have no previous reference in the extracted text or in the case of extracting several sentences may result in incoherent text when the extracted sentences are not consecutive in the original text and do not naturally follow one anotherpaice describes techniques for modifying the extracted text to replace unresolved referencessummaries that consist of sentences plucked from texts have been shown to be useful indicators of content but they are often judged to be highly unreadable a more recent approach uses a corpus of articles with summaries to train a statistical summarization systemduring training the system uses abstracts of existing articles to identify the features of sentences that are typically included in abstractsin order to avoid problems noted by paice the system produces an itemized list of sentences from the article thus eliminating the implication that these sentences function together coherently as a full paragraphas with the other statistical approaches this work is aimed at summarization of single articleswork presented at the 1997 acl workshop on intelligent scalable text summarization primarily focused on the use of sentence extractionalternatives to the use of frequency of key phrases included the identification and representation of lexical chains to find the major themes of an article followed by the extraction of one or two sentences per chain training over the position of summary sentences in the full article and the construction of a graph of important topics to identify paragraphs that should be extracted while most of the work in this category focuses on summarization of single articles early work is beginning to emerge on summarization across multiple documentsin ongoing work at carnegie mellon carbonell is developing statistical techniques to identify similar sentences and phrases across articlesthe aim is to identify sentences that are representative of more than one articlemani and bloedorn link similar words and phrases from a pair of articles using wordnet semantic relationsthey show extracted sentences from the two articles side by side in the outputwhile useful in general sentence extraction approaches cannot handle the task that we address aggregate 
summarization across multiple documents since this requires reasoning about similarities and differences across documents to produce generalizations or contradictions at a conceptual levelwork in summarization using symbolic techniques has tended to focus more on identifying information in text that can serve as a summary than on generating the summary and often relies heavily on domaindependent scripts the darpa message understanding systems which process news articles in specific domains to extract specified types of information also fall within this categoryas output work of this type produces templates that identify important pieces of information in the text representing them as attributevalue pairs that could be part of a database entrythe message understanding systems in particular have been developed over a long period have undergone repeated evaluation and development including moves to new domains and as a result are quite robustthey are impressive in their ability to handle large quantities of freeform text as inputas standalone systems however they do not address the task of summarization since they do not combine and rephrase extracted information as part of a textual summarya recent approach to symbolic summarization is being carried out at cambridge university on identifying strategies for summarization this work studies how various discourse processing techniques can be used to both identify important information and form the actual summarywhile promising this work does not involve an implementation as of yet but provides a framework and strategies for future workmarcu uses a rhetorical parser to build rhetorical structure trees for arbitrary texts and produces a summary by extracting sentences that span the major rhetorical nodes of the treein addition to domainspecific information extraction systems there has also been a large body of work on identifying people and organizations in text through proper noun extractionthese are domainindependent techniques that can also be used to extract information for a summarytechniques for proper noun extraction include the use of regular grammars to delimit and identify proper nouns the use of extensive name lists place names titles and quotgazetteersquot in conjunction with partial grammars in order to recognize proper nouns as unknown words in close proximity to known words statistical training to learn for example spanish names from online corpora and the use of conceptbased pattern matchers that use semantic concepts as pattern categories as well as partofspeech information in addition some researchers have explored the use of both local context surrounding the hypothesized proper nouns and the larger discourse context to improve the accuracy of proper noun extraction when large knownword lists are not availablein a way similar to this research our work also aims at extracting proper nouns without the aid of large word listswe use a regular grammar encoding partofspeech categories to extract certain text patterns and we use wordnet to provide semantic filteringanother system called murax is similar to ours from a different perspectivemurax also extracts information from a text to serve directly in response to a user questionmurax uses lexicosyntactic patterns collocational analysis along with information retrieval statistics to find the string of words in a text that is most likely to serve as an answer to a user whquery ultimately this approach could be used to extract information on items of interest in a user profile where each 
question may represent a different point of interestin our work we also reuse strings as part of the summary but the string that is extracted may be merged or regenerated as part of a larger textual summarysummarization of data using symbolic techniques has met with more success than summarization of textsummary generation is distinguished from the more traditional language generation problem by the fact that summarization is concerned with conveying the maximal amount of information within minimal spacethis goal is achieved through two distinct subprocesses conceptual and linguistic summarizationconceptual summarization is a form of content selectionit must determine which concepts out of a large number of concepts in the input should be included in the summarylinguistic summarization is concerned with expressing that information in the most concise way possiblewe have worked on the problem of summarization of data within the context of three separate systemsstreak generates summaries of basketball games using a revisionbased approach to summarizationit builds a first draft using fixed information that must appear in the summary in a second pass it uses revision rules to opportunistically add in information as allowed by the form of the existing textusing this approach information that might otherwise appear as separate sentences gets added in as modifiers of the existing sentences or new words that can simultaneously convey both pieces of information are selectedplandoc generates summaries of the activities of telephone planning engineers using linguistic summarization both to order its input messages and to combine them into single sentencesfocus has been on the combined use of conjunction ellipsis and paraphrase to result in concise yet fluent reports zeddoc generates web traffic summaries for advertisement management softwareit makes use of an ontology over the domain to combine information at the conceptual levelall of these systems take tabular data as inputthe research focus has been on linguistic summarizationsummons on the other hand focuses on conceptual summarization of both structured and fulltext dataat least four previous systems developed elsewhere use natural language to summarize quantitative data including ana semtex fog and lfs all of these use some forms of conceptual and linguistic summarization and the techniques can be adapted for our current work on summarization of multiple articlesin related work dalianis and hovy have also looked at the problem of summarization identifying eight aggregation operators that apply during generation to create more concise textthe overall architecture of our summarization system given earlier in figure 1 draws on research in software agents to allow connections to a variety of different types of data sourcesfacilities are used to provide a transparent interface to heterogeneous data sources that run on several machines and may be written in different programming languagescurrently we have incorporated facilities to various live news streams the cia world factbook and past newspaper archivesthe architecture allows for the incorporation of additional facilitators and data sources as our work progressesthe system extracts data from the different sources and then combines it into a conceptual representation of the summarythe summarization component shown on the left side of the figure consists of a base summary generator which combines information from multiple input articles and organizes that information using a paragraph plannerthe 
structured conceptual representation of the summary is passed to the lexical chooser shown at the bottom of the diagramthe lexical chooser also receives input from the world factbook and possible descriptions of people or organizations to augment the base summary the full content is then passed through a sentence generator implemented using the fuf surge language generation system fuf is a functional unification formalism that uses a large systemic grammar of english called surge to fill in syntactic constraints build a syntactic tree choose closed class words and eventually linearize the tree as a sentencethe right side of the figure shows how proper nouns and their descriptions are extracted from past newsan entity extractor identifies proper nouns in the past newswire archives along with descriptionsdescriptions are then categorized using the wordnet hierarchyfinally an fd or functional description for the description is generated so that it can be reused in fluent ways in the final summary fds mix functional semantic syntactic and lexical information in a recursive attributevalue format that serves as the basic data structure for all information within fuf surgesummons produces a summary from sets of templates that contain the salient facts reported in the input articles and that are produced by the message understanding systemsthese systems extract specific pieces of information from a given news articlean example of a template produced by muc systems and used in our system is shown in figures 2 and 3to test our system we used the templates produced by systems participating in muc4 as inputmuc4 systems operate on the terrorist domain and extract information by filling fields such as perpetrator victim and type of event for a total number of 25 fields per templatein addition we filled the same template forms by hand from current news articles for further testingcurrently work is under way in our group on the building of an information extraction module similar to the ones used in the muc conferences which we will later use as an input to summonswe are basing our implementation on the tools developed at the university of massachusetts the resulting system will not only be able to generate summaries from preparsed templates but will also produce summaries directly from raw text by merging the message understanding component with the current version of summonsour work provides a methodology for developing summarization systems identifies planning operators for combining information in a concise summary and uses empirically collected phrases to mark summarized materialwe have collected a corpus of newswire summaries that we used as data for developing the planning operators and for gathering a large set of lexical constructions used in summarizationthis reuters reported that 18 people were killed in a jerusalem bombing sundaythe next day a bomb in tel aviv killed at least 10 people and wounded 30 according to israel radioreuters reported that at least 12 people were killed and 105 woundedlater the same day reuters reported that the radical muslim group hamas had claimed responsibility for the act corpus will eventually aid in a full system evaluationsince news articles often summarize previous reports of the same event our corpus also includes short summaries of previous articleswe used this corpus to develop both the content planner and the linguistic component of our systemwe used the corpus to identify planning operators that are used to combine information this includes techniques for 
linking information together in a related way as well as making generalizationswe also identified phrases that are used to mark summaries and used these to build the system lexiconan example summary produced by the system is shown in figure 4this paragraph summarizes four articles about two separate terrorist acts that took place in israel in march of 1996 using two different planning operatorswhile the system we report on is fully implemented our work is undergoing continuous developmentcurrently the system includes eight different planning operators a testbed of 200 input templates grouped into sets on the same event and can produce fully lexicalized summaries for approximately half of the cases we have not performed an evaluation beyond the testbedour work provides a methodology for increasing the vocabulary size and the robustness of the system using a collected corpus and moreover it shows how summarization can be used to evaluate the message understanding systems identifying future research directions that would not be pursued under the current muc evaluation cycle2 due to inherent difficulties in the summarization task our work is a substantial first step and provides the framework for a number of different research directionsthe rest of this section describes the summarizer specifying the planning operators used for summarization as well as a detailed discussion of the summarization algorithm showing how summaries of different length are generatedwe provide examples of the summarization markers we collected for the lexicon and show the demands that summarization creates for interpretationthe summarization component of summons is based on the traditional language generation system architecture a typical language generator is divided into two main components a 2 participating systems in the darpa message understanding program are evaluated on a regular basisparticipants are given a set of training text to tune their systems over a period of time and their systems are tested on unseen text at followup conferences content planner which selects information from an underlying knowledge base to include in a text and a linguistic component which selects words to refer to concepts contained in the selected information and arranges those words appropriately inflecting them to form an english sentencethe content planner produces a conceptual representation of text meaning and typically does not include any linguistic informationthe linguistic component uses a lexicon and a grammar of english to realize the conceptual representation into a sentencethe lexicon contains the vocabulary for the system and encodes constraints about when each word can be usedas shown in figure 1 the content planner used by summons determines what information from the input muc templates should be included in the summary using a set of planning operators that are specific to summarization and to some extent to the terrorist domainits linguistic component determines the phrases and surface syntactic form of the summarythe linguistic component consists of a lexical chooser which determines the highlevel sentence structure of each sentence and the words that realize each semantic role and the fuf surge sentence generatorinput to summons is a set of templates where each template represents the information extracted from one or more articles by a message understanding systemhowever we constructed by hand an additional set of templates that include also terrorist events that have taken place after the period of time 
covered in muc4 such as the world trade center bombing the hebron mosque massacre and more recent incidents in israel as well as the disaster in oklahoma citythese incidents were not handled by the original message understanding systemswe also created by hand a set of templates unrelated to real newswire articles which we used for testing some techniques of our systemwe enriched the templates for all these cases by adding four slots the primary source the secondary source and the times at which both sources made their reportswe found having the source of the report immensely useful for discovering and reporting contradictions and generalizations because often different reports of an event are in conflictalso source information can indicate the level of confidence of the report particularly when reported information changes over timefor example if several secondary sources all report the same facts for a single event citing multiple primary sources it is more likely that this is the way the event really happened while if there are many contradictions between reports it is likely that the facts are not yet fully knownmembers of our research group are currently working on event tracking their prototype uses patternmatching techniques to track changes to online news sources and provide a live feed of articles that relate to a changing eventsummons summarization component generates a base summary which contains facts extracted from the input set of articlesthe base summary is later enhanced with additional facts from online structured databases with descriptions of individuals extracted from previous news to produce the extended summarythe base summary is a paragraph consisting of one or more sentences where the length of the summary is controlled by a variable input parameterin the absence of a specific user model the base summary is producedotherwise the extended summary is generated insteadsimilarly the default is that the summary contains references to contradictory and updated informationhowever if the user profile makes it explicit only the latest and the most trusted facts are includedsummons rates information in terms of importance where information that appears in only one article is given a lower rating and information that is synthesized from multiple articles is rated more highlydevelopment of the text generation component of summons was made easier because of the language generation tools and framework available at columbia universityno changes in the fuf sentence generator were neededin addition the lexical chooser and content planner were based on the design used in the plandoc automated documentation system described in section 23in particular we used fuf to implement the lexical chooser representing the lexicon as a grammar as we have done in many previous systems the main effort in porting the approach to summons was in identifying the words and phrases needed for the domainthe content planner features several stagesit first groups news articles together identifies commonalities between them and notes how the discourse influences wording by setting realization flags which denote such discourse features as quotsimilarityquot and quotcontradictionquot realization flags guide the choice of connectives in the generation stagebefore lexical choice summons maps the templates into fds that are expected as input to fuf and uses a domain ontology to enrich the inputfor example grenades and bombs are both explosives while diplomats and civilians are both considered to be human targetsin 
order to produce plausible and understandable summaries we used available online corpora as models including the wall street journal and current newswire from reuters and the associated pressthe corpus of summaries is 25 mb in sizewe have manually grouped 300 articles in threads related to single events or series of similar eventsfrom the corpora collected in this way we extracted manually and after careful investigation several hundred language constructions that we found relevant to the types of summaries we want to producein addition to the summary cue phrases collected from the corpus we also tried to incorporate as many phrases as possible that have relevance to the message understanding conference domaindue to domain variety such phrases were essentially scarce in the newswire corpora and we needed to collect them from other sources since one of the features of a briefing is conciseness we have tried to assemble small paragraph summaries that in essence describe a single event and the change of perception of the event over time or a series of related events with no more than a few sentencesthe main point of departure for summons from previous work is in the stage of identifying what information to include and how to group it together as well as the use of a corpus to guide this and later processesin plandoc successive items to summarize are very similar and the problem is to form a grouping that puts the most similar items together allowing the use of conjunction and ellipsis to delete repetitive materialfor summarizing multiple news articles the task is almost the opposite we need to find the differences from one article to the next identifying how the reported facts have changedthus the main problem was the identification of summarization strategies which indicate how information is linked together to form a concise and cohesive summaryas we have found in other work what information is included is often dependent on the language available to make concise additionsthus using a corpus summary was critical to identifying the different summaries possiblewe have developed a set of heuristics derived from the corpora that decide what types of simple sentences constitute a summary in what order they need to be listed as well as the ways in which simple sentences are combined into more complex onesin addition we have specified which summarizationspecific phrases are to be included in different types of summariesthe system identifies a preeminent set of templates from the input to the muc systemthis set needs to contain a large number of similar fieldsif this holds we can merge the set into a simpler structure keeping the common features and marking the distinct features as elhadad and mckeown kukich and shaw suggestat each step a summary operator is selected based on existing similarities between articles in the databasethis operator is then applied to the input templates resulting in a new template that combines or synthesizes information from the oldeach operator is independent of the others and several can be applied in succession to the input templateseach of the seven major operators is further subdivided to cover various modifications to its inputfigure 5 shows part of the rules for the contradiction operatorgiven two templates if incidentlocation is the same the time of first report is before time of second report the report sources are different and at least one other slot differs in value apply the contradiction operator to combine the templatesa summary operator encodes a means 
for linking information in two different templatesoften it results in the synthesis of new informationfor example a generalization may be formed from two independent factsalternatively since we are summarizing reports written over time highlighting how knowledge of the event changed is important and therefore summaries sometimes must identify differences between reportsa description of the operators we identified in our corpus follows accompanied by an example of system output for each operatoreach example primarily summarizes two or three input templates as this is the result of applying a single operator oncemore complex summaries can be produced by applying multiple operators on the same input as shown in the examples see figures 6 to 11 in section 45431 change of perspectivewhen an initial report gets a fact wrong or has incomplete information the change is usually included in the summaryin order for the quotchange of perspectivequot operator to apply the source field must be the same while the value of another field changes so that it is not compatible with the original valuefor example if the number of victims changes we know that the first report was wrong if the number goes down while the source had incomplete information if the number goes upthe first two sentences from the following example were generated using the change of perspective operatorthe initial estimate of quotat least 10 peoplequot killed in the incident becomes quotat least 12 peoplequot similarly the change in the number of wounded people is also reportedmarch 4th reuters reported that a bomb in tel aviv killed at least 10 people and wounded 30later the same day reuters reported that at least 12 people were killed and 105 wounded432 contradictionwhen two sources report conflicting information about the same event a contradiction arisesin the absence of values indicating the reliability of the sources a summary cannot report either of them as true but can indicate that the facts are not clearthe number of sources that contradict each other can indicate the level of confusion about the eventnote that the current output of the message understanding systems does not include sourceshowever summons uses this feature to report disagreement between output by different systemsa summary might indicate that one of the sources determined that 20 people were killed while the other source determined that only 5 were indeed killedthe difference between this example and the previous one on change of perspective is the source of the updateif the same source announces a change then we know that it is reporting a change in the factsotherwise an additional source presents information that is not necessarily more correct than the information presented by the earlier source and we can therefore conclude that we have a contradictionthe afternoon of february 26 1993 reuters reported that a suspected bomb killed at least six people in the world trade centerhowever associated press announced that exactly five people were killed in the blast433 additionwhen a subsequent report indicates that additional facts are known this is reported in a summaryadditional results of the event may occur after the initial report or additional information may become knownthe operator determines this by the way the value of a template slot changessince the former template does not contain a value for the perpetrator slot and the latter contains information about claimed responsibility we can apply the addition operatoron monday a bomb in tel aviv killed at 
least 10 people and wounded 30 according to israel radiolater the same day reuters reported that the radical muslim group hamas had claimed responsibility for the act434 refinementin subsequent reports a more general piece of information may be refinedthus if an event is originally reported to have occurred in new york city the location might later be specified as a particular borough of the citysimilarly if a terrorist group is identified as palestinian later the exact name of the terrorist group may be determinedsince the update is assigned a higher value of quotimportancequot it will be favored over the original article in a shorter summaryunlike the previous example there was a value for the perpetrator slot in the first template while the second one further elaborates on it identifying the perpetrator more specificallyexample 4 on monday reuters announced that a suicide bomber killed at least 10 people in tel avivlater the same day reuters reported that the islamic fundamentalist group hamas claimed responsibility435 agreementif two sources have the same values for a specific slot this will heighten the reader confidence in their veracity and thus agreement between sources is usually reportedexample 5 the morning of march 1st 1994 upi reported that a man was kidnapped in the bronxlater this was confirmed by reuters436 supersetgeneralizationif the same event is reported from different sources and all of them have incomplete information it is possible to combine information from them to produce a more complete summarythis operator is also used to aggregate multiple events as shown in the examplereuters reported that 18 people were killed in a jerusalem bombing sundaythe next day a bomb in tel aviv killed at least 10 people and wounded 30 according to israel radioa total of at least 28 people were killed in the two terrorist acts in israel over the last two daysit should be noted that in this example the third sentence will not be generated if there is a restriction on the length of the summary437 trendthere is a trend if two or more articles reflect similar patterns over timethus we might notice that three consecutive bombings occurred at the same location and summarize them into a single sentencethis is the third terrorist act committed by hamas in four weeks438 no informationsince we are interested in conveying information about the primary and secondary sources of a certain piece of news and since these are generally trusted sources of information we ought also to pay attention to the lack of information from a certain source when such is expected to be presentfor example it might be the case that a certain news agency reports a terrorist act in a given country but the authorities of that country do not give out any informationsince there is an infinite number of sources that might not confirm a given fact we have included this operator only as an illustration of a concept that further highlights the domainspecificity of the systemtwo bombs exploded in baghdad iraqi dissidents reported fridaythere was no confirmation of the incidents by the iraqi national congressthe algorithm used in the system to sort combine and generalize the input templates is described in the following subsections441 inputat this stage the system receives a set of templates from the message understanding conferences or a similar set of templates from a related domainall templates are described as lists of attribute value pairs these pairs are defined in the muc4 guidelines message understanding system for the 
site submitting the template if it is not present in the input templatenote that since the current message understanding systems do not extract the source this is the most specific we can be for such caseswe are experimenting with some techniques to automate the preprocessing stageour preliminary impressions show that by restricting summons to templates in which at least five or six slots are filled we can eliminate most of the irrelevant templates tween templates which will trigger certain operatorssince slots are matched among templates in chronological order there is only one sequence in which they can be appliedsuch patterns trigger reordering of the templates and modification of their individual importance valuesas an example if two templates are combined with the refinement operator the importance value of the combined template will be greater than the sum of the individual importance of the constituent templatesat the same time the values of these two templates are lowered all templates directly extracted from the muc output are assigned an initial importance value of 100currently with each application of an operator we lower the value of a contributing individual template by 20 points and give any newly produced template that combines information from already existing contributing templates a value greater than the sum of the values of the contributing templates after those values have been updatedfurthermore some operators reduce the importance values of existing templates even further thus the final summary will contain only the combined template if there are restrictions on lengthotherwise text corresponding to the constituent templates will also be generatedthe value of the importance of the template corresponds also to the position in the summary paragraph as more important templates will be generated firsteach new template contains information indicating whether its constituent templates are obsolete and thus no longer neededalso at this stage the coverage vector is updated to point to the templates that are still active and can be further combinedthis way we make sure that all templates still have a chance of participating in the actual summarythe resulting templates are combined into small paragraphs according to the event or series of events that they describeeach paragraph is then realized by the linguistic componenteach set of templates produces a single paragraph444 discourse planninggiven the relative importance of the templates included in the database after the heuristic combination stage the content planner organizes the presentation of information within a paragraphit looks at consecutive templates in the database marked as separate paragraphs from the previous stage and assigns values to quotrealization switchesquot that control local choices such as tense and voicethey also govern the presence or absence of certain constituents to avoid repetition of constituents and to satisfy anaphora constraintsthis subsection describes how the algorithm is applied to a set of four templates by tracing the computational process that transforms the raw source into a final natural language summaryexcerpts from the four input news articles are shown in figure 6the four news articles are transformed into four templates that correspond to four separate accounts of two related events and will be included in the set of templates from which the template combiner will workonly the relevant fields are shownlet us now consider the four templates in the order that they appear in the list 
of templatesthese templates are shown in figures 7 to 10they are generated manually from the input newswire textsinformation about the primary and secondary sources of information is added the differences in the templates are shown in bold facethe summary generated by the system was shown earlier in figure 4 and is repeated here in figure 11the first two sentences are generated from template onethe subsequent sentences are generated using different operators that are triggered according to changing values for certain attributes in the three remaining templatesas previous templates did not contain information about the perpetrator summons applies the refinement operator to generate the fourth sentencesentence three is generated using the change of perspective operator as the number of victims reported in articles two and three is differentthe description for hamas was added by the extraction generator typically a description is included in the source text and should be extracted by the message understanding systemin cases in which a description does not appear or is not extracted summons generates a description from the database of extracted descriptionswe are currently working on an algorithm that template for article three will select the best description based on such parameters as the user model the attitude towards the entity or a historical model that describes the changes in the profile of a person over the period of time template for article fourreuters reported that 18 people were killed in a jerusalem bombing sundaythe next day a bomb in tel aviv killed at least 10 people and wounded 30 according to israel radioreuters reported that at least 12 people were killed and 105 woundedlater the same day reuters reported that the radical muslim group hamas had claimed responsibility for the actwhen a summary refers to an entity it can make use of descriptions extracted by the muc systemsproblems arise when information needed for the summary is either missing from the input article or not extracted by the information extraction systemin such cases the information may be readily available in other current news stories in past news or in online databasesif the summarization system can find the needed information in other online sources then it can produce an improved summary by merging information extracted from the input articles with information from the other sources in the news domain a summary needs to refer to people places and organizations and provide descriptions that clearly identify the entity for the readersuch descriptions may not be present in the original text that is being summarizedfor example the american pilot scott ogrady downed in bosnia in june of 1995 was unknown to the american public prior to the incidentto a reader who tuned into news on this event days later descriptions from the initial articles might be more usefula summarizer that has access to different descriptions will be able to select the description that best suits both the reader and the series of articles being summarizedsimilarly in the example in section 4 if the user has not been informed about what hamas is and no description is available in the source template older descriptions in the fd format can be retrieved and usedin this section we describe an enhancement to the base summarization system called the profile manager which tracks prior references to a given entity by extracting descriptions for later use in summarizationthe component includes the entity extractor and description extractor 
modules shown in figure 1 and has the following features as a result summons will be able to combine descriptions from articles appearing only a few minutes before the ones being summarized with descriptions from past news in a permanent storage for future usesince the profile manager constructs a lexicalized syntactic fd from the extracted description the generator can reuse the description in new contexts merging it with other descriptions into a new grammatical sentencethis would not be possible if only canned strings were used with no information about their internal structurethus in addition to collecting a knowledge source that provides identifying features of individuals the profile manager also provides a lexicon of domainappropriate phrases that can be integrated with individual words from a generator lexicon to produce summary wording in a flexible fashionwe have extended the profile manager by semantically categorizing descriptions using wordnet so that a generator can more easily determine which description is relevant in different contextsthe profile manager can also be used in a realtime fashion to monitor entities and the changes of descriptions associated with them over the course of timethe rest of this section discusses the stages involved in the collection and reuse of descriptionsin this subsection we describe the description management module of summons shown in figure 1we explain how entity names and descriptions for them are extracted from old newswire and how these descriptions are converted to fds for surface generation an initial set of descriptions we used a 17 mb corpus containing reuters newswire from february to june of 1995later we used a webbased interface that allowed anyone on the internet to type in an entity name and force a robot to search for documents containing mentions of the entity and extract the relevant descriptionsthese descriptions are then also added to the databaseat this stage search is limited to the database of retrieved descriptions only thus reducing search time as no connections will be made to external news sources at the time of the queryonly when a suitable stored description cannot be found will the system initiate search of additional text dictionarythis resulted in a list of 421 unique entity names that we used for the automatic description extraction stageall 421 entity names retrieved by the system are indeed proper nouns512 extraction of descriptionsthere are two occasions on which we extract descriptions using finitestate techniquesthe first case is when the entity that we want to describe was already extracted automatically and exists in the database of descriptionsthe second case is when we want a description to be retrieved in real time based on a request from the generation componentin the first stage the profile manager generates finitestate representations of the entities that need to be describedthese full expressions are used as input to the description extraction module which uses them to find candidate sentences in the corpus for extracting descriptionssince the need for a description may arise at a later time than when the entity was found and may require searching new text the description finder must first locate these expressions in the textthese representations are fed to crep which extracts noun phrases on either side of the entity from the news corpusthe finitestate grammar for noun phrases that we use represents a variety of different syntactic structures for both premodifiers and appositionsthus they may 
range from a simple noun to a much longer expression other forms of descriptions such as relative clauses are the focus of ongoing implementationtable 2 shows some of the different patterns retrievedfor example when the profile manager has retrieved the description the political arm of the irish republican army for sinn fein it looks at the head noun in the description np which we manually added to the list of trigger words to be categorized as an organization it is important to notice that even though wordnet typically presents problems with disambiguation of words retrieved from arbitrary text we do not have any trouble disambiguating arm in this case due to the constraints on the context in which it appears 513 categorization of descriptionswe use wordnet to group extracted descriptions into categoriesfor the head noun of the description np we try to find a wordnet hypernym that can restrict the semantics of the descriptioncurrently we identify concepts such as quotprofessionquot quotnationalityquot and quotorganizationquot each of these concepts is triggered by one or more words in the descriptiontable 2 shows some examples of descriptions and the concepts under which they are classified based on the wordnet hypernyms for some trigger wordsfor example all of the following triggers in the list can be traced up to leader in the wordnet hierarchywe have currently a list of 75 such trigger words that we have compiled manually we create a new profile in a database of profileswe keep information about the surface string that is used to describe the entity in newswire the source of the description and the date that the entry has been made in the database in addition to these pieces of metainformation all retrieved descriptions and their frequencies are also storedcurrently our system does not have the capability of matching references to the same entity that use different wordingsas a result we keep separate profiles for each of the following robert dole dole and bob dolewe use each of these strings as the key in the database of descriptionsfigure 12 shows the profile associated with the key john majorit can be seen that four different descriptions have been used in the parsed corpus to describe john majortwo of the four are common and are used in summons whereas the other two result from incorrect processing by pos and or crepthe database of profiles is updated every time a query retrieves new descriptions matching a certain keywhen presenting an entity to the user the content planner of a language generation system may decide to include some background information about it if the user has generated fd for silvio berlusconi not previously seen the entitywhen the extracted information does not contain an appropriate description the system can use some descriptions retrieved by the profile manager the extracted descriptions in the generation of summaries we have developed a module that converts finitestate descriptions retrieved by the description extractor into functional descriptions that we can use directly in generationa description retrieved by the system is shown in figure 13the corresponding fd is shown in figure 14 semantics the profile manager can prefer to generate one over another based on semantic featuresthis is useful if a summary discusses events related to one description associated with the entity more than the othersfor example when an article concerns bill clinton on the campaign trail then the description democratic presidential candidate is more appropriateon the other 
hand when an article concerns an international summit of world leaders then the description yous president is more appropriatecurrently our system can produce simple summaries consisting of one to three sentence paragraphs which are limited to the muc domain and to a few additional events for which we have manually created muclike templateswe have also implemented the modules to connect to the world factbookwe have converted all ontologies related to the muc and the factbook into fdsthe user model which would allow users to specify preferred sources of information frequency of briefings etc has not been fully implemented yeta problem that we have not addressed is related to the clustering of articles according to their relevance to a specific eventthis is an area that requires further researchanother such area is the development of algorithms for grouping together articles that belong to the same topicfinally one of our main topics for future work is the development of techniques that can generate summary updatesto do this we must make use of a discourse model that represents the content and wording of summaries that have already been presented to the userwhen generating an update the summarizer must avoid repeating content and at the same time must be able to generate references to entities and events that were previously describedat the current stage the description generator has the following coverage person profile with the profile of the organization of which he is a memberwe should note that extensive research in this field exists and we plan to make use of one of the proposed methods to solve this probleman important issue is portability of summons to other domainsthere are no a priori restrictions in our approach that would limit summons to templatebased inputs it would be interesting to determine the actual number of different representation schemes for news in generalsince there exist systems that can learn extraction rules for unrestricted domains the information extraction does not seem to present any fundamental bottleneck eitherrather the questions are how many manhours are required to convert to each new domain and how many of the rules from one domain are applicable to each new domainthere are no clear answers to these questionsthe library of planning operators used in summons is extensible and can be ported to other domains although it is likely that new operators will be neededin addition new vocabulary will also be neededthe authors plan to perform a portability analysis and report on it in the futuregiven that no alternative approaches to conceptual summarization of multiple articles exist we have found it very hard to perform an adequate evaluation of the summaries generated by summonswe consider several potential evaluations qualitative and taskbasedin a taskbased evaluation one set of judges would have access to the full set of articles while another set of evaluators would have the summaries generated by summonsthe task would involve decision making the time for decision making will be plotted against the accuracy of the answers provided by the judges from the two setsa third set of judges might have access to summaries generated by summarizers based on sentence extraction from multiple documentssimilar evaluation techniques have been proposed for singledocument summarizers the prototype system that we have developed serves as the springboard for research in a variety of directionsfirst and foremost is the need to use statistical techniques to increase the 
robustness and vocabulary of the systemsince we were looking for phrasings that mark summarization in a full article that includes other material as well for a first pass we found it necessary to do a manual analysis in order to determine which phrases were used for summarizationin other words we knew of no automatic way of identifying summary phraseshowever having an initial seed set of summary phrases might allow us to automate a second pass analysis of the corpus by looking for variant patterns of the ones we have foundby using automated statistical techniques to find additional phrases we could increase the size of the lexicon and use the additional phrases to identify new summarization strategies to add to our stock of operatorsour summary generator could be used both for evaluating message understanding systems by using the summaries to highlight differences between systems and for identifying weaknesses in the current systemswe have already noted a number of drawbacks with the current output which makes summarization more difficult giving the generator less information to work withfor example it is only sometimes indicated in the output that a reference to a person place or event is identical to an earlier reference there is no connection across articles the source of the report is not includedfinally the structure of the template representation is somewhat shallow being closer to a database record than a knowledge representationthis means that the generator knowledge of different features of the event and relations between them is somewhat shallowone of the more important current goals is to increase coverage of the system by providing interfaces to a large number of online sources of newswe would ideally want to build a comprehensive and shareable database of profiles that can be queried over the worldwide webthe database will have a defined interface that will allow for systems such as summons to connect to itanother goal of our research is the generation of evolving summaries that continuously update the user on a given topic of interestin that case the system will have a model containing all prior interaction with the userto avoid repetitiveness such a system will have to resort to using different descriptions to address a specific entitywe will be investigating an algorithm that will select a proper ordering of multiple descriptions referring to the same person within the same discourseafter we collect a series of descriptions for each possible entity we need to decide how to select among themthere are two scenariosin the first one we have to pick one single description from the database that best fits the summary we are generatingin the second scenario the evolving summary we have to generate a sequence of descriptions which might possibly view the entity from different perspectiveswe are investigating algorithms that will decide the order of generation of the different descriptionsamong the factors that will influence the selection and ordering of descriptions we can note the user interests his knowledge of the entity and the focus of the summary we can also select one description over another based on how recently they have been included in the database whether or not one of them has been used in a summary already whether the summary is an update to an earlier summary and whether another description from the same category has been used alreadywe have yet to decide under what circumstances a description needs to be generated at allwe are interested in implementing existing 
algorithms or designing our own that will match different instances of the same entity appearing in different syntactic forms eg to establish that plo is an alias for the palestine liberation organizationwe will investigate using cooccurrence information to match acronyms to full organization names as well as alternative spellings of the same namewe will also look into connecting the current interface with news available on the internet and with an existing search engine such as lycos altavista or yahoowe can then use the existing indices of all web documents mentioning a given entity as a news corpus on which to perform the extraction of descriptionsour prototype system demonstrates the feasibility of generating briefings of a series of domainspecific news articles on the same event highlighting changes over time as well as similarities and differences among sources and including some historical information about the participantsthe ability to automatically provide summaries of heterogeneous material will critically help in the effective use of the internet in order to avoid overload with informationwe show how planning operators can be used to synthesize summary content from a set of templates each representing a single articlethese planning operators are empirically based coming from analysis of existing summaries and allow for the generation of concise briefingsour framework allows for experimentation with summaries of different lengths and for the combination of multiple independent summary operators to produce more complex summaries with added descriptionsthis work was partially supported by nsf grants ger9024069 iri9619124 iri9618797 and cda9625374 as well as a grant from columbia university strategic initiative fund sponsored by the provost officethe authors are grateful to the following people for their invaluable comments during the writing of the paper and at presentations of work related to the content of the paper alfred aho shihfu chang eleazar eskin vasileios hatzivassiloglou alejandro jaimes hongyan jing judith klavans minyen kan carl sable eric siegel john smith nina wacholder kazi zaman as well as the anonymous reviewers and the editors of the special issue on natural language generation
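As a concrete illustration of the operator machinery and importance bookkeeping described earlier in this article, the sketch below restates the contradiction-operator test (same incident location, earlier first report, different sources, at least one other conflicting slot) and the scoring scheme (templates start at 100, contributors lose 20 points per operator application, and a synthesized template scores above the updated sum of its contributors). It is a minimal, hypothetical Python rendering: the Template class and the slot names stand in for the MUC-4 fields and are not taken from the system itself.

from dataclasses import dataclass

@dataclass
class Template:
    slots: dict            # e.g. {"INCIDENT_LOCATION": "Tel Aviv", "VICTIMS": 10}
    source: str            # secondary source of the report
    report_time: int       # time at which the source made its report
    importance: int = 100  # every template extracted from MUC output starts at 100

def contradiction_applies(t1, t2):
    """Mirror of the rule quoted above: same location, t1 reported before t2,
    different sources, and at least one other shared slot whose values differ."""
    if t1.slots.get("INCIDENT_LOCATION") != t2.slots.get("INCIDENT_LOCATION"):
        return False
    if not (t1.report_time < t2.report_time and t1.source != t2.source):
        return False
    shared = (set(t1.slots) & set(t2.slots)) - {"INCIDENT_LOCATION"}
    return any(t1.slots[s] != t2.slots[s] for s in shared)

def apply_operator(t1, t2):
    """Penalize the contributing templates and synthesize a combined one whose
    importance exceeds the sum of the contributors' updated values."""
    t1.importance -= 20
    t2.importance -= 20
    combined = Template(slots={**t1.slots, **t2.slots},
                        source=t1.source + "+" + t2.source,
                        report_time=t2.report_time,
                        importance=t1.importance + t2.importance + 1)
    return combined

Under a length restriction, only templates whose importance survives a cutoff would then be realized, which matches the behaviour described for combined versus constituent templates.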
J98-3005
Generating Natural Language Summaries from Multiple Online Sources. We present a methodology for summarization of news about current events in the form of briefings that include appropriate background information. The system that we developed, SUMMONS, uses the output of systems developed for the DARPA Message Understanding Conferences to generate summaries of multiple documents on the same or related events, presenting similarities and differences, contradictions, and generalizations among sources of information. We describe the various components of the system, showing how information from multiple articles is combined, organized into a paragraph, and finally realized as English sentences. A feature of our work is the extraction of descriptions of entities such as people and places for reuse to enhance a briefing. We combine work in information extraction and natural language processing.
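The WordNet-based categorization of extracted descriptions that the profile manager performs, as described above (tracing the head noun of a description up the hypernym hierarchy until a trigger word for a concept such as "organization", "profession", or "nationality" is reached), can be sketched roughly as follows. This assumes NLTK's WordNet interface; the trigger lists are short illustrative samples rather than the manually compiled 75-word list, and categorize() is a hypothetical helper, not part of the described system.

from nltk.corpus import wordnet as wn

TRIGGERS = {
    "organization": {"organization", "organisation", "group", "party"},
    "profession":   {"leader", "politician", "lawyer", "economist"},
    "nationality":  {"american", "italian", "israeli"},
}

def categorize(head_noun):
    """Return the first concept whose trigger set is reachable from some noun
    sense of the head noun via WordNet hypernyms (including the noun itself)."""
    for synset in wn.synsets(head_noun, pos=wn.NOUN):
        lemmas = set(synset.lemma_names())
        for hyper in synset.closure(lambda s: s.hypernyms()):
            lemmas.update(hyper.lemma_names())
        lemmas = {l.lower() for l in lemmas}
        for concept, trigger_words in TRIGGERS.items():
            if lemmas & trigger_words:
                return concept
    return None

# For "the political arm of the Irish Republican Army", the head noun "arm"
# has a branch/subdivision sense whose hypernym chain reaches "organization",
# so categorize("arm") should come back as "organization".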
machine transliteration it is challenging to translate names and technical terms across languages with different alphabets and sound inventories these items are commonly transliterated ie replaced with approximate phonetic equivalents for example quotcomputerquot in english comes out as quotkonpyuutaaquot in japanese translating such items from japanese back to english is even more challenging and of practical interest as transliterated items make up the bulk of text phrases not found in bilingual dictionaries we describe and evaluate a method for performing backwards transliterations by machine this method uses a generative model incorporating several distinct stages in the transliteration process it is challenging to translate names and technical terms across languages with different alphabets and sound inventoriesthese items are commonly transliterated ie replaced with approximate phonetic equivalentsfor example quotcomputerquot in english comes out as quotkonpyuutaaquot in japanesetranslating such items from japanese back to english is even more challenging and of practical interest as transliterated items make up the bulk of text phrases not found in bilingual dictionarieswe describe and evaluate a method for performing backwards transliterations by machinethis method uses a generative model incorporating several distinct stages in the transliteration processone of the most frequent problems translators must deal with is translating proper names and technical termsfor language pairs like spanishenglish this presents no great challenge a phrase like antonio gil usually gets translated as antonio gilhowever the situation is more complicated for language pairs that employ very different alphabets and sound systems such as japaneseenglish and arabicenglishphonetic translation across these pairs is called transliterationwe will look at japaneseenglish transliteration in this articlejapanese frequently imports vocabulary from other languages primarily from englishit has a special phonetic alphabet called katakana which is used primarily to write down foreign names and loanwordsthe katakana symbols are shown in figure 1 with their japanese pronunciationsthe two symbols shown in the lower right corner are used to lengthen any japanese vowel or consonantto write a word like golfbag in katakana some compromises must be madefor example japanese has no distinct l and r sounds the two english sounds collapse onto the same japanese sounda similar compromise must be struck for english h and f also japanese generally uses an alternating consonantvowel structure making it impossible to pronounce lfb without intervening vowelskatakana writing is a syllabary rather than an alphabetthere is one symbol for ga another for gi ev another for gu etcso the way to write golfbag in katakana is 7 z y roughly pronounced goruhubagguhere are a few more examples katakana symbols and their japanese pronunciationsangela johnson new york times ice cream notice how the transliteration is more phonetic than orthographic the letter h in johnson does not produce any katakanaalso a dotseparator is used to separate words but not consistentlyand transliteration is clearly an informationlosing operation ranpu could come from either lamp or ramp while aisukuriimu loses the distinction between ice cream and i screamtransliteration is not trivial to automate but we will be concerned with an even more challenging problemgoing from katakana back to english ie backtransliterationhuman translators can often quotsound outquot a 
katakana phrase to guess an appropriate translationautomating this process has great practical importance in japaneseenglish machine translationkatakana phrases are the largest source of text phrases that do not appear in bilingual dictionaries or training corpora but very little computational work has been done in this areayamron et al briefly mention a patternmatching approach while arbabi et al discuss a hybrid neuralnetexpertsystem approach to transliterationthe informationlosing aspect of transliteration makes it hard to inverthere are some problem instances taken from actual newspaper articles english translations appear later in this articlehere are a few observations about backtransliteration that give an idea of the difficulty of the task the most desirable feature of an automatic backtransliterator is accuracyif possible our techniques should also be like most problems in computational linguistics this one requires full world knowledge for a 100 solutionchoosing between katarina and catalina might even require detailed knowledge of geography and figure skatingat that level human translators find the problem quite difficult as well so we only aim to match or possibly exceed their performancebilingual glossaries contain many entries mapping katakana phrases onto english phrases eg it is possible to automatically analyze such pairs to gain enough knowledge to accurately map new katakana phrases that come along and this learning approach travels well to other language pairsa naive approach to finding direct correspondences between english letters and katakana symbols however suffers from a number of problemsone can easily wind up with a system that proposes iskrym as a backtransliteration of aisukuriimutaking letter frequencies into account improves this to a more plausiblelooking isclimmoving to real words may give is crime the i corresponds to ai the s corresponds to su etcunfortunately the correct answer here is ice creamafter initial experiments along these lines we stepped back and built a generative model of the transliteration process which goes like this this divides our problem into five subproblemsfortunately there are techniques for coordinating solutions to such subproblems and for using generative models in the reverse directionthese techniques rely on probabilities and bayes theoremsuppose we build an english phrase generator that produces word sequences according to some probability distribution pand suppose we build an english pronouncer that takes a word sequence and assigns it a set of pronunciations again probabilistically according to some pgiven a pronunciation p we may want to search for the word sequence w that maximizes pbayes theorem let us us equivalently maximize p p exactly the two distributions we have modeledextending this notion we settled down to build five probability distributions given a katakana string o observed by ocr we want to find the english word sequence w that maximizes the sum over all e j and k of of the models in turnthe result is a large wfsa containing all possible english translationswe have implemented two algorithms for extracting the best translationsthe first is dijkstra shortestpath graph algorithm the second is a recently discovered kshortestpaths algorithm that makes it possible for us to identify the top k translations in efficient 0 time where the wfsa contains n states and m arcsthe approach is modularwe can test each engine independently and be confident that their results are combined correctlywe do no pruning so the final 
wfsa contains every solution however unlikelythe only approximation is the viterbi one which searches for the best path through a wfsa instead of the best sequence this section describes how we designed and built each of our five modelsfor consistency we continue to print written english word sequences in italics english sound sequences in all capitals japanese sound sequences in lower case and katakana sequences naturally the first model generates scored word sequences the idea being that ice cream should score higher than ice creme which should score higher than aice kreemwe adopted a simple unigram scoring method that multiplies the scores of the known words and phrases in a sequenceour 262000entry frequency list draws its words and phrases from the wall street journal corpus an online english name list and an online gazetteer of place namesa portion of the wfsa looks like this los 0000087 month i 0000992 an ideal word sequence model would look a bit differentit would prefer exactly those strings which are actually grist for japanese transliteratorsfor example people rarely transliterate auxiliary verbs but surnames are often transliteratedwe have approximated such a model by removing highfrequency words like has an are am were their and does plus unlikely words corresponding to japanese sound bites like coup and ohwe also built a separate word sequence model containing only english first and last namesif we know that the transliterated phrase is a personal name this model is more precisethe next wfst converts english word sequences into english sound sequenceswe use the english phoneme inventory from the online cmu pronunciation dictiofederal i 00013 nary minus the stress marks2 this gives a total of 40 sounds including 14 vowel sounds 25 consonant sounds plus one special symbol the dictionary has pronunciations for 110000 words and we organized a treebased wfst from it note that we insert an optional pause between word pronunciationswe originally thought to build a general lettertosound wfst on the theory that while wrong pronunciations might occasionally be generated japanese transliterators also mispronounce wordshowever our lettertosound wfst did not match the performance of japanese transliterators and it turns out that mispronunciations are modeled adequately in the next stage of the cascadenext we map english sound sequences onto japanese sound sequencesthis is an inherently informationlosing process as english r and l sounds collapse onto japanese r the 14 english vowel sounds collapse onto the 5 japanese vowel sounds etcwe face two immediate problems an obvious target inventory is the japanese syllabary itself written down in katakana or a roman equivalent with this approach the english sound k corresponds to one of t or depending on its contextunfortunately because katakana is a syllabary we would be unable to express an obvious and useful generalization namely that english k usually corresponds to japanese k independent of contextmoreover the correspondence of japanese katakana writing to japanese sound sequences is not perfectly onetoone so an independent sound inventory is wellmotivated in any caseour japanese sound inventory includes 39 symbols 5 vowel sounds 33 consonant sounds and one special symbol an english sound sequence like might map onto a japanese sound sequence like note that long japanese vowel sounds knight and graehl machine transliteration are written with two symbols instead of just one this scheme is attractive because japanese sequences are almost always 
longer than english sequencesour wfst is learned automatically from 8000 pairs of englishjapanese sound sequences eg we were able to produce these pairs by manipulating a small englishkatakana glossaryfor each glossary entry we converted english words into english sounds using the model described in the previous section and we converted katakana words into japanese sounds using the model we describe in the next sectionwe then applied the estimationmaximization algorithm to generate symbolmapping probabilities shown in figure 2our them training goes like this alignments between their elementsin our case an alignment is a drawing that connects each english sound with one or more japanese sounds such that all japanese sounds are covered and no lines crossfor example there are two ways to align the pair in this case the alignment on the left is intuitively preferablethe algorithm learns such preferences2for each pair assign an equal weight to each of its alignments such that those weights sum to 1in the case above each alignment gets a weight of 05pausepause our wfst has 99 states and 283 arcsenglish sounds with probabilistic mappings to japanese sound sequences as learned by estimationmaximizationonly mappings with conditional probabilities greater than 1 are shown so the figures may not sum to 1we have also built models that allow individual english sounds to be quotswallowedquot however these models are expensive to compute and lead to a vast number of hypotheses during wfst compositionfurthermore in disallowing quotswallowingquot we were able to automatically remove hundreds of potentially harmful pairs from our training set eg 4 because no alignments are possible such pairs are skipped by the learning algorithm cases like these must be solved by dictionary alignments between english and japanese sound sequences as determined by them trainingbest alignments are shown for the english words biscuit divider and filter lookup anywayonly two pairs failed to align when we wished they hadboth involved turning english y uw into japanese you as in 4 note also that our model translates each english sound without regard to contextwe have also built contextbased models using decision trees recoded as wfstsfor example at the end of a word english t is likely to come out as rather than however contextbased models proved unnecessary for backtransliterationthey are more useful for englishtojapanese forward transliterationto map japanese sound sequences like onto katakana sequences like we manually constructed two wfstscomposed together they yield an integrated wfst with 53 states and 303 arcs producing a katakana inventory containing 81 symbols including the dotseparator the first wfst simply merges long japanese vowel sounds into new symbols aa ii uu ee and oothe second wfst maps japanese sounds onto katakana symbolsthe basic idea is to consume a whole syllable worth of sounds before producing any katakanafor example this fragment shows one kind of spelling variation in japanese long vowel sounds are usually written with a long vowel mark but are sometimes written with repeated katakana we combined corpus analysis with guidelines from a japanese textbook to turn up many spelling variations and unusual katakana symbols and so onspelling variation is clearest in cases where an english word like switch shows up transliterated variously in different dictionariestreating these variations as an equivalence class enables us to learn general sound mappings even if our bilingual glossary adheres to a single 
narrow spelling conventionwe do not however generate all katakana sequences with this model for example we do not output strings that begin with a subscripted vowel katakanaso this model also serves to filter out some illformed katakana sequences possibly proposed by optical character recognitionperhaps uncharitably we can view optical character recognition as a device that garbles perfectly good katakana sequencestypical confusions made by our commercial ocr system include t for 71 for 7 for 7 and 7 for ito generate preocr text we collected 19500 characters worth of katakana words stored them in a file and printed them outto generate postocr text we ocrd the printoutswe then ran the them algorithm to determine symbolmapping probabilitieshere is part of that table this model outputs a superset of the 81 katakana symbols including spurious quote marks alphabetic symbols and the numeral 73 we can now use the models to do a sample backtransliterationwe start with a katakana phrase as observed by ocrwe then serially compose it with the models in reverse ordereach intermediate stage is a wfsa that encodes many possibilitiesthe final stage contains all backtransliterations suggested by the models and we finally extract the best onewe start with the masutaazutoonamento problem from section 1our ocr observes this string has two recognition errors for and for we turn the string into a chained 12state11arc wfsa and compose it with the p modelthis yields a fatter 12state15arc wfsa which accepts the correct spelling at a lower probabilitynext comes the poo model which produces a 28state31arc wfsa whose highestscoring sequence is masutaazutoochimento next comes p yielding a 62state241arc wfsa whose best sequence is next to last comes p which results in a 2982state4601arc wfsa whose best sequence is masters tone am ent awe this english string is closest phonetically to the japanese but we are willing to trade phonetic proximity for more sensical english we rescore this wfsa by composing it with p and extract the best translation other section 1 examples are translated correctly as earth day and robert sean leonardwe may also be interested in the k best translationsin fact after any composition we can inspect several highscoring sequences using the algorithm of eppstein given the following katakana input phrase inspecting the kbest list is useful for diagnosing problems with the modelsif the right answer appears low in the list then some numbers are probably off somewhereif the right answer does not appear at all then one of the models may be missing a word or suffer from some kind of brittlenessa kbest list can also be used as input to a later contextbased disambiguator or as an aid to a human translatorwe have performed two largescale experiments one using a fulllanguage p model and one using a personal name language modelin the first experiment we extracted 1449 unique katakana phrases from a corpus of 100 short news articlesof these 222 were missing from an online 100000entry bilingual dictionarywe backtransliterated these 222 phrasesmany of the translations are perfect technical program sex scandal omaha beach new york times ramon diazothers are close tanya harding nickel simpson danger washington world capsome miss the mark nancy care again plus occur patriot miss rea14 while it is difficult to judge overall accuracysome of the phrases are onomatopoetic and others are simply too hard even for good human translatorsit is easier to identify system weaknesses and most of these lie in the p modelfor 
example nancy kerrigan should be preferred over nancy care againin a second experiment we took katakana versions of the names of 100 yous politicians eg 1 71quot1quot 7 and q4 7 s we backtransliterated these by machine and asked four human subjects to do the samethese subjects were native english speakers and newsaware we gave them brief instructionsthe results were as in table 1there is room for improvement on both sidesbeing english speakers the human subjects were good at english name spelling and yous politics but not at japanese phoneticsa native japanese speaker might be expert at the latter but not the formerpeople who are expert in all of these areas however are rareon the automatic side many errors can be correcteda firstnamelastname model would rank richard bryan more highly than richard briana bigram model would prefer orren hatch over olin hatchother errors are due to unigram training problems or more rarely incorrect or brittle phonetic modelsfor example long occurs much more often than ron in newspaper text and our word selection does not exclude phrases like long islandso we get long wyden instead of ron wydenone way to fix these problems is by manually changing unigram probabilitiesreducing p by a factor of ten solves the problem while maintaining a high score for pdespite these problems the machine performance is impressivewhen word separators are removed from the katakana phrases rendering the task exceedingly difficult for people the machine performance is unchangedin other words it offers the same topscoring translations whether or not the separators are present however their presence significantly cuts down on the number of alternatives considered improving efficiencywhen we use ocr 7 of katakana tokens are misrecognized affecting 50 of test strings but translation accuracy only drops from 64 to 52in a 1947 memorandum weaver wrote one naturally wonders if the problem of translation could conceivably be treated as a problem of cryptographywhen i look at an article in russian i say quotthis is really written in english but it has been coded in some strange symbolsi will now proceed to decodequot whether this is a useful perspective for machine translation is debatable however it is a deadon description of transliterationmost katakana phrases really are english ready to be decodedwe have presented a method for automatic backtransliteration which while far from perfect is highly competitiveit also achieves the objectives outlined in section 1it ports easily to new language pairs the p and p models are entirely reusable while other models are learned automaticallyit is robust against ocr noise in a rare example of highlevel language processing being useful in improving lowlevel ocrthere are several directions for improving accuracythe biggest problem is that raw english frequency counts are not the best indication of whether a word is a possible source for transliterationalternative data collection methods must be consideredwe may also consider changes to the model sequence itselfas we have presented it our hypothetical human transliterator produces japanese sounds from english sounds only without regard for the original english spellingthis means that english homonyms will produce exactly the same katakana stringsin reality though transliterators will sometimes key off spelling so that tonya and tanya produce toonya and taanyait might pay to carry along some spelling information in the english pronunciation latticessentential context should be useful for determining 
correct translations. It is often clear from a Japanese sentence whether a katakana phrase is a person, an institution, or a place. In many cases it is possible to narrow things further: given the phrase "such-and-such, Arizona," we can restrict our P(w) model to include only those cities and towns in Arizona. It is also interesting to consider transliteration for other languages. In Arabic, for example, it is more difficult to identify candidates for transliteration, because there is no distinct, explicit alphabet that marks them. Furthermore, Arabic is usually written without vowels, so we must generate vowel sounds from scratch in order to produce correct English. Finally, it may be possible to embed phonetic-shift models inside speech recognizers to explicitly adjust for heavy foreign accents. We would like to thank Alton Earl Ingram, Yolanda Gil, Bonnie Glover Stalls, Richard Whitney, Kenji Yamada, and the anonymous reviewers for their helpful comments. We would also like to thank our sponsors at the Department of Defense.
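A toy, self-contained illustration of the Bayes-rule scoring that drives the cascade described in this article: candidate English word sequences are scored by P(w) times the probability that they would have produced the observed Japanese sound sequence. All probability tables below are invented for the example (the real distributions are encoded as WFSAs/WFSTs and learned or built as described above), and the katakana and OCR stages are omitted for brevity.

# P(w): unigram scores for a few candidate English phrases (made-up values).
P_w = {"ice cream": 1e-5, "ice creme": 1e-8, "i scream": 1e-7}

# P(e|w): pronunciations of each candidate (one each here, probability 1.0).
P_e_given_w = {
    "ice cream": {("AY", "S", "K", "R", "IY", "M"): 1.0},
    "ice creme": {("AY", "S", "K", "R", "EH", "M"): 1.0},
    "i scream":  {("AY", "S", "K", "R", "IY", "M"): 1.0},
}

# P(j|e): probability of a Japanese sound sequence given an English one
# (whole-sequence toy table standing in for the learned sound-mapping model).
P_j_given_e = {
    ("AY", "S", "K", "R", "IY", "M"):
        {("a", "i", "s", "u", "k", "u", "r", "i", "i", "m", "u"): 0.12},
    ("AY", "S", "K", "R", "EH", "M"):
        {("a", "i", "s", "u", "k", "u", "r", "e", "m", "u"): 0.20},
}

def best_back_transliteration(j_seq):
    """argmax_w P(w) * sum_e P(e|w) * P(j|e): Bayes' rule applied to the
    generative story, marginalizing over English pronunciations."""
    scores = {}
    for w, p_w in P_w.items():
        total = sum(p_e * P_j_given_e.get(e_seq, {}).get(j_seq, 0.0)
                    for e_seq, p_e in P_e_given_w[w].items())
        if total > 0.0:
            scores[w] = p_w * total
    return max(scores, key=scores.get) if scores else None

# best_back_transliteration(("a","i","s","u","k","u","r","i","i","m","u"))
# returns "ice cream": it shares a pronunciation with "i scream", and the word
# model P(w) is what breaks the tie, as in the aisukuriimu example above.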
J98-4003
machine transliterationit is challenging to translate names and technical terms across languages with different alphabets and sound inventoriesthese items are commonly transliterated ie replaced with approximate phonetic equivalentsfor example computer in english comes out as konpyuutaa in japanesetranslating such items from japanese back to english is even more challenging and of practical interest as transliterated items make up the bulk of text phrases not found in bilingual dictionarieswe describe and evaluate a method for performing backwards transliterations by machinethis method uses a generative model incorporating several distinct stages in the transliteration processwe proposed a japaneseenglish transliteration method based on the mapping probability between english and japanese katakana sounds
pcfg models of linguistic tree representations the kinds of tree representations used in a treebank corpus can have a dramatic effect on performance of a parser based on the pcfg estimated from that corpus causing the estimated likelihood of a tree to differ substantially from its frequency in the training corpus this paper points out that the penn ii treebank representations are of the kind predicted to have such an effect and describes a simple node relabeling transformation that improves a treebank pcfgbased parser average precision and recall by around 8 or approximately half of the performance difference between a simple pcfg model and the best broadcoverage parsers available today this performance variation comes about because any pcfg and hence the corpus of trees from which the pcfg is induced embodies independence assumptions about the distribution of words and phrases the particular independence assumptions implicit in a tree representation can be studied theoretically and investigated empirically by means of a tree transformation detransformation process the kinds of tree representations used in a treebank corpus can have a dramatic effect on performance of a parser based on the pcfg estimated from that corpus causing the estimated likelihood of a tree to differ substantially from its frequency in the training corpusthis paper points out that the penn ii treebank representations are of the kind predicted to have such an effect and describes a simple node relabeling transformation that improves a treebank pcfgbased parser average precision and recall by around 8 or approximately half of the performance difference between a simple pcfg model and the best broadcoverage parsers available todaythis performance variation comes about because any pcfg and hence the corpus of trees from which the pcfg is induced embodies independence assumptions about the distribution of words and phrasesthe particular independence assumptions implicit in a tree representation can be studied theoretically and investigated empirically by means of a tree transformation detransformation processprobabalistic contextfree grammars provide simple statistical models of natural languagesthe relative frequency estimator provides a straightforward way of inducing these grammars from treebank corpora and a broadcoverage parsing system can be obtained by using a parser to find a maximumlikelihood parse tree for the input string with respect to such a treebank grammarpcfg parsing systems often perform as well as other simple broadcoverage parsing system for predicting tree structure from partofspeech tag sequences while pcfg models do not perform as well as models that are sensitive to a wider range of dependencies their simplicity makes them straightforward to analyze both theoretically and empiricallymoreover since more sophisticated systems can be viewed as refinements of the basic pcfg model it seems reasonable to first attempt to better understand the properties of pcfg models themselvesit is well known that natural language exhibits dependencies that contextfree grammars cannot describe but the statistical independence assumptions embodied in a particular pcfg description of a particular natural language construction are in general much stronger than the requirement that the construction be generated by a cfgwe show below that the pcfg extension of what seems to be an adequate cfg description of pp attachment constructions performs no better than pcfg models estimated from noncfg accounts of the same 
constructionsmore specifically this paper studies the effect of varying the tree structure representation of pp modification from both a theoretical and an empirical point of viewit compares pcfg models induced from treebanks using several different tree representations including the representation used in the penn ii treebank corpora and the quotchomsky adjunctionquot representation now standardly assumed in generative linguisticsone of the weaknesses of a pcfg model is that it is insensitive to nonlocal relationships between nodesif these relationships are significant then a pcfg will be a poor language modelindeed the sense in which the set of trees generated by a cfg is quotcontext freequot is precisely that the label on a node completely characterizes the relationships between the subtree dominated by the node and the nodes that properly dominate this subtreeroughly speaking the more nodes in the trees of the training corpus the stronger the independence assumptions in the pcfg language model induced from those treesfor example a pcfg induced from a corpus of completely flat trees generates precisely the strings of training corpus with likelihoods equal to their relative frequencies in that corpusthus the location and labeling on the nonroot nonterminal nodes determine how a pcfg induced from a treebank generalizes from that training datagenerally one might expect that the fewer the nodes in the training corpus trees the weaker the independence assumptions in the induced language modelfor this reason a quotflatquot tree representation of pp modification is investigated here as wella second method of relaxing the independence assumptions implicit in a pcfg is to encode more information in each node labelhere the intuition is that the label on a node is a quotcommunication channelquot that conveys information between the subtree dominated by the node and the part of the tree not dominated by this node so all other things being equal appending to the node label additional information about the context in which the node appears should make the independence assumptions implicit in the pcfg model weakerthe effect of adding a particularly simple kind of contextual informationthe category of the node parentis also studied in this paperwhether either of these two pcfg models outperforms a pcfg induced from the original treebank is a separate questionwe face a classical quotbias versus variancequot dilemma here as the independence assumptions implicit in the pcfg model are weakened the number of parameters that must be estimated increasesthus while moving to a class of models with weaker independence assumptions permits us to more accurately describe a wider class of distributions in general our estimate of these parameters will be less accurate simply because there are more of them to estimate from the same data this paper studies the effects of these differing tree representations of pp modification theoretically by considering their effect on very simple corpora and empirically by means of a tree transformationdetransformation methodology introduced belowthe corpus used as the source for the empirical study is version ii of the wall street journal corpus constructed at the university of pennsylvania modified as described in charniak in thatthe theory of pcfgs is described elsewhere so it is only summarized herea pcfg is a cfg in which each production a 4 a in the grammar set of productions p is associated with an emission probability p that satisfies a normalization constraint and a 
consistency or tightness constraint not discussed here that pcfgs estimated from tree banks using the relative frequency estimator always satisfy a pcfg defines a probability distribution over the parse trees generated by the grammar where the probability of a tree t is given by where cr is the number of times the production a a is used in the derivation t the pcfg that assigns maximum likelihood to the sequence i of trees in a treebank corpus is given by the relative frequency estimatorhere c is the number of times the production a a is used in derivations of the trees in f this estimation procedure can be used in a broadcoverage parsing procedure as follows a pcfg g is estimated from a treebank corpus 1 of training datain the work presented here the actual lexical items are ignored and the terminals of the trees are taken to be the partofspeech tags assigned to the lexical itemsgiven a sequence of pos tags to be analyzed a dynamic programming method based on the cky algorithm is used to search for a maximumlikelihood parse using this pcfgfor something so apparently fundamental to syntactic research there is considerable disagreement among linguists as to just what the right tree structure analysis of various linguistic constructions ought to befigure 1 shows some of the variation in pp modification structures postulated in generative syntactic approaches over the past 30 yearsthe flat attachment structure was popular in the early days of transformational grammar and is used to represent vps in the wsj corpusin this representation both arguments and adjuncts are sisters to the lexical head and so are not directly distinguished in the tree structurethe adjunction representation was introduced by chomsky in that representation arguments are sisters to the lexical head while adjuncts are adjoined as sisters to a phrasal node either a maximal projection or a quot1barquot projection in the quotxbarquot theory of grammar and its descendantsthe third representation depicted in figure 1 is a mixed representation in which phrases with adjuncts have exactly two levels of phrasal projectionthe lower level contains the lexical head and all adjuncts are attached as sisters to a maximal projection at the higher levelto a first approximation this is the representation used for nps with pp modifiers or complements in the wsj corpus used in this study1 if the standard linguistic intuition that the number of pp modifiers permitted in natural language is unbounded is correct then only the chomsky adjunction representation trees can be generated by a cfg as the other two representations depicted in figure 1 require a different production for each possible number of pp modifiersfor example the rule schema vp 4 v np pp which generates the flat attachment structure abbreviates an infinite number of cf productionsin addition if a treebank using the twolevel representation contains at least one node with a single pp modifier then the pcfg induced from it will generate chomsky adjunction representations of multiple pp modification in addition to the twolevel representations used in the treebankthis raises the question how should a parse tree be interpreted that does not fit the representational scheme used to construct the treebank training dataas noted above the wsj corpus represents pp modification to nps using the twolevel representationthe pcfg estimated from sections 221 of this corpus contains the following two productions these productions generate the twolevel representations of one and two pp adjunctions 
to np as explained abovehowever the second of these productions will never be used in a maximumlikelihood parse as the parse of sequence np pp pp involving two applications of the first rule has a higher estimated likelihoodin fact all of the productions of the form np np pr where n 1 in the pcfg induced from sections 221 of the wsj corpus are subsumed by the np np pp production in this waythus pp adjunctions to np in the maximumlikelihood parses using this pcfg always appear as chomsky adjunctions even though the original treebank uses a twolevel representationa large number of productions in the pcfg induced from sections 221 of the wsj corpus are subsumed by higherlikelihood combinations of shorter higherprobability productionsof the 14962 productions in the pcfg 1327 productions or just under 9 are subsumed by combinations of two or more productionssince the subsumed productions are never used to construct a maximumlikelihood parse they can be ignored if only maximumlikelihood parses are requiredmoreover since these subsumed productions tend to be longer than the productions that subsume them removing them from the grammar reduces the average parse time of the exhaustive pcfg parser used here by more than 9finally note that the overgeneration of the pcfg model of the twolevel adjunction structures is due to an independence assumption implicit in the pcfg model specifically that the upper and lower nps in the twolevel structure have the same expansions and that these expansions have the same distributionsthis assumption is clearly incorrect for the twolevel tree representationsif we systematically relabel one of these nps with a fresh label then a pcfg induced from the resulting transformed treebank no longer has this propertythe quotparent annotationquot transform discussed below which appends the category of a parent node onto the label of all of its nonterminal children as sketched in figure 2 has just this effectcharniak and carroll describe this transformation as adding quotpseudo contextsensitivityquot to the language model because the distribution of expansions of a node depends on nonlocal context viz the category of its parent3 this nonlocal information is sufficient to distinguish the upper and lower nps in the structures considered hereindeed even though the pcfg estimated from the trees obtained by applying the quotparent annotationquot transformation to sections 221 of the wsj corpus contains 22773 productions only 965 of them or just over 4 are subsumed by two or more other productionswe can gain some theoretical insight into the effect that different tree representations have on pcfg language models by considering several artifical corpora whose estimated pcfgs are simple enough to study analyticallypp attachment was chosen for investigation here because the alternative structures are simple and clear but presumably the same points could be made for any construction that has several alternative tree representationscorrectly resolving pp attachment ambiguities requires information such as lexical information that is simply not available to the pcfg models considered herestill one might hope that a pcfg model might be able to accurately reflect general statistical trends concerning attachment preferences in the training data even if it lacks the information to correctly resolve individual casesbut as the analysis in this section makes clear even this is not always obtainedfor example suppose our corpora only contain two trees both of which have yields v det n p det n are 
always analyzed as a vp with a direct object np and a pp and differ only as to whether the pp modifies the np or the vpthe corpora differ as to how these modifications are represented as treesthe dependencies in these corpora violate the independence assumptions implicit in a pcfg model so one should not expect a pcfg model to exactly reproduce any of these corporaas a cl reviewer points out the results presented here depend on the assumption that there is exactly one ppnevertheless the analysis of these corpora highlights two important points suppose we train a pcfg on a corpus f1 consisting only of two different tree structures the np attachment structure labeled and the vp attachment tree labeled the training corpusquotfithis corpus which uses penn ii tree representations consists of the trees with relative frequency f and the trees with relative frequency 1 f the pcfg p1 is estimated from this corpus occurs in the corpus with relative frequency f and occurs with relative frequency in fact in the wsj corpus structure occurs 7033 times in sections 221 and 279 times in section 22 while structure occurs 7717 times in sections 221 and 299 times in section 22thus f 048 in both the f221 subcorpora and the f22 corpusreturning to the theoretical analysis the relative frequency counts c1 and the nonunit production probability estimates pi for the pcfg induced from this twotree corpus are as follows of course in a real treebank the counts of all these productions would also include their occurrences in other constructions so the theoretical analysis presented here is but a crude idealizationempirical studies using actual corpus data are presented in section 5thus the estimated likelihoods using pi of the tree structures and are clearly pi 1 and 2 ensures that these transforms will only apply a finite number of time to any given subtreenv produces trees that represent pp modification of nps and vps with a chomsky adjunction representation that uses an intermediate level of x structurethis is the result of repeatedly applying the four transformations depicted in figure 8 as in the npvp transform with the modification that the new nonmaximal nodes are labeled n or v as appropriate flatten produces trees in which nps have a flatter structure than the twolevel representation of nps used in the penn ii treebankonly subtrees consisting of a parent node labeled np whose first child is also labeled np are affected by this transformationthe effect of this transformation is to excise all the children nodes labeled np from the tree and to attach their children as direct descendants of the parent node as depicted in the schema belowparent appends to each nonroot nonterminal node label its parent categorythe effect of this transformation is to produce trees of the kind discussed in section 44it is straightforward to estimate pcfgs using the relative frequency estimator from the sequences of trees produced by applying these transforms to the wsj corpuswe turn now to the question of evaluating the different pcfgs so obtainednone of the pcfgs induced from the various tree representations discussed here reliably identifies the correct tree representations on sentences from heldout datait is standard to evaluate broadcoverage parsers using lessstringent criteria that measure how similiar the trees produced by the parser are to the quotcorrectquot analysis trees in a portion of the treebank held out for testing purposesthis study uses the 1578 sentences in section 22 of the wsj corpus of length 40 or less for this 
purposethe labeled precision and recall figures are obtained by regarding the sequence of trees f produced by a parser as a multiset or bag e of edges ie triples where n is a nonterminal label and 1 and r are left and right string positions in yield of the entire corpusrelative to a test sequence of trees the labeled precision and recall of a sequence of trees f with the same yield as are calculated as follows where the n operation denotes multiset intersectionthus precision is the fraction of edges in the tree sequence to be evaluated that also appear in the test tree sequence and recall is the fraction of edges in the test tree sequence that also appear in tree sequence to be evaluatedit is straightforward to use the pcfg estimation techniques described in section 2 to estimate pcfgs from the result of applying these transformations to sections 221 of the penn ii wsj corpusthe resulting pcfgs can be used with a parser to obtain maximumlikelihood parse trees for the pos tag yields of the trees of the heldout test corpus while the resulting parse trees can be compared to the trees in the test corpus using the precision and recall measures described above the results would not be meaningful as the parse trees reflect a different tree representation to that used in the test corpus and thus are not directly comparable with the test corpus treesfor example the node labels used in the pcfg induced from trees produced by applying the parent transform are pairs of categories from the original penn ii wsj tree bank and so the labeled precision and recall measures obtained by comparing the parse trees obtained using this pcfg with the trees from the tree bank would be close to zeroone might try to overcome this by applying the same transformation to the test trees as was used to obtain the training trees for the pcfg but then the resulting precision and recall measures would not be comparable across transformationsfor example as two different penn ii format trees may map to the same flattened tree the flatten transformation is in general not invertiblethus a parsing system that produces perfect flat tree representations provides less information than one that produces perfect penn ii tree representations and one might expect that all else being equal a parsing system using flat representations will score higher in terms of precision and recall than an equivalent one producing penn ii representationsthe approach developed here overcomes this problem by applying an additional tree transformation step that converts the parse trees produced using the pcfg back to the penn ii tree representations and compares these trees to the heldout test trees using the labeled precision and recall treesthis transformationdetransformation process is depicted in figure 9it has the virtue that all precision and recall measures involve trees using the penn ii tree representations but it does involve an additional detransformation stepit is straightforward to define detransformers for all of the tree transformations described in this section except for the flattening transformthe difficulty in this case is that several different penn ii format trees may map onto the same flattened tree as mentioned abovethe detransformer for the flattening transform was obtained by recording for each distinct local tree in the flattened tree representation of the training corpus the various tree fragments in the penn ii format training corpus it could have been derived fromthe detransformation of a flattened tree is effected by replacing 
each local tree in the parse tree with its most frequently occuring penn ii format fragmentthis detransformation step is in principle an additional source of error in that a parser could produce flawless parse trees in its particular tree representation but the transformation to the corresponding penn ii tree representations might itself introduce errorsfor example it might be that several different penn ii tree representations can correspond to a single parse tree as is the case with a parser producing flattened tree representationsto determine if detransformation can be done reliably for each tree transformation labeled precision and recall measures were calculated comparing the result of applying the transformation and the corresponding detransformation to the test corpus trees with the original trees of the test corpusin all cases except for the flattening transform these precision and recall measures were always greater than 995 indicating that the transformationdetransformation process is quite reliablefor the flattening transform the measures were greater than 975 suggesting that while the error introduced by this process is noticable the transformationdetransformation process does not introduce a very large error on its owntable 1 presents an analysis of the sequences of trees produced via this detransformation process applied to the maximumlikelihoodparse treesthe columns of this table correspond to sequences of parse trees for section 22 of the wsj corpusthe column labeled quot22quot describes the trees given in section 22 of the wsj corpus and the column labeled quot22 idquot describes the maximumlikelihoodparse trees of section 22 of the wsj corpus using the pcfg induced from those very treesthis is thus an example of training on the test data and is often assumed to provide an upper bound on the performance of a learning algorithmthe remaining columns describe the sequences of trees produced using the transformationdetransformation process described abovethe first three rows of the table show the number of productions in each pcfg and the labeled precision and recall measures for the detransformed parse treesrandomization tests for paired sample data were performed to assess the significance of the difference between the labeled precision and recall scores for the output of the id pcfg and the other pcfgs the labeled precision and recall scores for the flatten and parent transforms differed significantly from each other and also from the id transform at the 001 level while neither the npvp nor the nv transform differed significantly from each other or the id transform at the 01 levelthe remaining rows of table 1 show the number of times certain tree schema appear in these tree sequencesthe rows labeled np attachments and vp attachments provide the number of times the following tree schema which as expected the pcfgs induced from the output of the flatten transform and parent transform significantly improve precision and recall over the original treebank pcfg the pcfg induced from the output of the parent transform performed significantly better than any other pcfg investigated hereas discussed above both the parent and the flatten transforms induce pcfgs that are sensitive to what would be noncf dependencies in the original treebank trees which perhaps accounts for their superior performanceboth the flatten and parent transforms induced pcfgs that have substantially more productions than the original treebank grammar perhaps reflecting the fact that they encode more contextual 
information than the original treebank grammar albeit in different waystheir superior performance suggests that the reduction in bias obtained by the weakening of independence assumptions that these transformations induce more than outweighs any associated increase in variancethe various adjunction transformations only had minimal effect on labeled precision and recallperhaps this is because pp attachment ambiguities despite their important role in linguistic and parsing theory are just one source of ambiguity among many in real language and the effect of the alternative representations is only minorindeed moving to the purportedly linguistically more realistic chomsky adjunction representations did not improve performance on these measureson reflection perhaps this should not be surprisingthe chomsky adjunction representations are motivated within the theoretical framework of transformational grammar which explicitly argues for nonlocal indeed noncontextfree dependenciesthus its poor performance when used as input to a statistical model that is insensitive to such dependencies is perhaps to be expectedindeed it might be the case that inserting the additional adjunction nodes inserted by the npvp and nv transformations above have the effect of converting a local dependency into a nonlocal dependency another initially surprising property of the tree sequences produced by the pcfgs is that they do not reflect at all well the frequency of the different kinds of pp attachment found in the penn ii corpusthis is in fact to be expected since the sequences consist of maximumlikelihood parsesto see this consider any of the examples analyzed in section 4in all of these cases the corpora contained two tree structures and the induced pcfg associates each with an estimated likelihoodif these likelihoods differ then a maximumlikelihood parser will always return the same maximumlikelihood tree structure each time it is presented with its yield and will never return the tree structure with lower likelihood even though the pcfg assigns it a nonzero likelihoodthus the surprising fact is that these pcfg parsers ever produce a nonzero number of np attachments and vp attachments in the same tree sequencethis is possible because the node label v in the attachment schema above abbreviates several different preterminal labels further investigation shows that once the v label in np attachment and vp attachment schemas is instantiated with a particular verbal tag only either the relevant np attachment schema or the vp attachment schema appears in the tree sequencefor instance in the id tree sequence the 67 np attachments all occurred with the v label instantiated to the verbal tag auxit is worth noting that the 8 improvement in average precision and recall obtained by the parent annotation transform is approximately half of the performance difference between a parser using a pcfg induced directly from the tree bank and the best currently available broadcoverage parsing systems which exploit lexical as well as purely syntactic information in order to better understand just why the parent annotation transform performs so much better than the other transforms transformationdetransformation experiments were performed in which the parent annotation transform was performed selectively either on all nodes with a given category label or all nodes with a given category label and parent category labelfigure 10 depicts the effect of selective application of the parent annotation transform on the change of the average of 
precision and recall with respect to the id transformit is clear that distinguishing the context of np and s nodes is responsible for an important part of the improvement in performancemerely distinguishing root from nonroot s nodesa distinction made in early transformational grammar but ignored in more recent workimproves average precision and recall by approximately 3thus it is possible that the performance gains achieved by the parent annotation transform have little to do with pp attachmentthis paper has presented theoretical and empirical evidence that the choice of tree representation can make a significant difference to the performance of a pcfgbased parsing systemwhat makes a tree representation a good choice for pcfg modeling seems to be quite different to what makes it a good choice for a representation of a linguistic theoryin conventional linguistic theories the choice of rules and hence trees the effects of selective application of the parent transformeach point corresponds to a pcfg induced after selective application of the parent transformthe point labeled all corresponds to the pcfg induced after the parent transform to all nonroot nonterminal nodes as beforepoints labeled with a single category a correspond to pcfgs induced after applying the parent transform to just those nodes labeled a while points labeled with a pair of categories kb correspond to pcfgs induced applying the parent transform to nodes labeled a with parents labeled bthe xaxis shows the difference in number of productions in the pcfg after selective parent transform and the untransformed treebank pcfg and the yaxis shows the difference in the average of the precision and recall scores is usually influenced by considerations of parsimony thus the chomsky adjunction representation of pp modification may be preferred because it requires only a single contextfree rule rather than a rule schema abbreviating a potentially unbounded number of rules that would be required in flat tree representations of adjunctionbut in a pcfg model the additional nodes required by the chomsky adjunction representation represent independence assumptions that seem not to be justifiedin general in selecting a tree structure one faces a biasvariance tradeoff in that tree structures with fewer nodes andor richer node labels reduce bias but possibly at the expense of an increase in variancea tree transformationdetransformation methodology for empirically evaluating the effect of different tree representations on parsing systems was developed in this paperthe results presented earlier show that the tree representations that incorporated weaker independence assumptions performed signficantly better in the empirical studies than the more linguistically motivated chomsky adjunction structuresof course there is nothing particularly special about the particular tree transformations studied in this paper other transforms couldand shouldbe studied in exactly the same mannerfor example i am currently using this methodology to study the interaction between tree structure and a quotslash categoryquot node labeling in tree representations with empty categories while the work presented here focussed on pcfg parsing models it seems that the general transformationdetransformation approach can be applied to a wider range of proba lemsfor example it would be interesting to know to what extent the performance of more sophisticated parsing systems such as those described by collins and charniak depends on the particular tree representations they are 
trained on i would like to thank dick oehrle and chris manning eugene charniak and my other colleagues at brown and the cl reviewers for their excellent advice in this research this material is based on work supported by the national science foundation under grants nos sbr9720368 and sbr9812169
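As a concrete illustration of the two steps at the heart of this paper, the sketch below applies a parent-annotation relabeling to bracketed trees and then estimates production probabilities with the relative frequency estimator P(A -> alpha) = C(A -> alpha) / C(A). It is a minimal sketch under assumed conventions: trees are encoded as (label, child, ...) tuples with part-of-speech tags as terminals, the parent category is appended with a caret, and the toy treebank is invented for illustration rather than drawn from the WSJ corpus.

# Minimal sketch, assuming trees are (label, child, child, ...) tuples
# with part-of-speech tags as terminal strings; not the paper's code.
from collections import Counter, defaultdict

def parent_annotate(tree, parent=None):
    """Append the parent category to every non-root nonterminal label."""
    label, children = tree[0], tree[1:]
    new_label = label if parent is None else f"{label}^{parent}"
    new_children = tuple(
        parent_annotate(c, label) if isinstance(c, tuple) else c
        for c in children
    )
    return (new_label,) + new_children

def productions(tree):
    """Yield the (lhs, rhs) productions used in a tree."""
    label, children = tree[0], tree[1:]
    rhs = tuple(c[0] if isinstance(c, tuple) else c for c in children)
    yield (label, rhs)
    for c in children:
        if isinstance(c, tuple):
            yield from productions(c)

def estimate_pcfg(treebank):
    """Relative frequency estimator: P(A -> alpha) = C(A -> alpha) / C(A)."""
    counts = Counter(p for t in treebank for p in productions(t))
    lhs_totals = defaultdict(int)
    for (lhs, _), c in counts.items():
        lhs_totals[lhs] += c
    return {prod: c / lhs_totals[prod[0]] for prod, c in counts.items()}

toy_treebank = [
    ("S", ("NP", "DT", "NN"), ("VP", "VBD", ("NP", "DT", "NN"))),
    ("S", ("NP", "NNP"),
          ("VP", "VBD", ("NP", "DT", "NN"), ("PP", "IN", ("NP", "DT", "NN")))),
]

plain = estimate_pcfg(toy_treebank)
annotated = estimate_pcfg([parent_annotate(t) for t in toy_treebank])
print(plain[("NP", ("DT", "NN"))])          # one distribution for all NPs (0.8 here)
print(annotated[("NP^VP", ("DT", "NN"))])   # object NPs get their own distribution (1.0 here)

On the toy data, plain NP expansions share a single distribution, while the annotated grammar keeps separate distributions for subject (NP^S), object (NP^VP), and PP-internal (NP^PP) noun phrases, which is exactly the extra contextual information the parent transform is meant to convey through the node label.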
J98-4004
pcfg models of linguistic tree representationsthe kinds of tree representations used in a treebank corpus can have a dramatic effect on performance of a parser based on the pcfg estimated from that corpus causing the estimated likelihood of a tree to differ substantially from its frequency in the training corpusthis paper points out that the penn ii treebank representations are of the kind predicted to have such an effect and describes a simple node relabeling transformation that improves a treebank pcfgbased parser average precision and recall by around 8 or approximately half of the performance difference between a simple pcfg model and the best broadcoverage parsers available todaythis performance variation comes about because any pcfg and hence the corpus of trees from which the pcfg is induced embodies independence assumptions about the distribution of words and phrasesthe particular independence assumptions implicit in a tree representation can be studied theoretically and investigated empirically by means of a tree transformation detransformation processwe annotate each node by its parent category in a tree and gets significant improvements compared with the original pcfgs on the penn treebank
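The precision and recall figures mentioned in the summary are labeled measures computed over multisets of (label, left, right) edges, as defined in the body of the paper. The sketch below computes them for a pair of toy trees, using the same tuple tree encoding as the previous sketch; the example trees (a flat two-level gold analysis versus a Chomsky-adjunction-style parse) are assumptions for illustration only.

# Minimal sketch of labeled precision/recall over edge multisets,
# assuming (label, child, ...) tuple trees; toy trees, not WSJ data.
from collections import Counter

def edges(tree, start=0):
    """Return (multiset of (label, left, right) edges, span length)."""
    bag = Counter()
    label, children = tree[0], tree[1:]
    pos = start
    for c in children:
        if isinstance(c, tuple):
            sub, length = edges(c, pos)
            bag += sub
            pos += length
        else:                      # a terminal (POS tag) covers one position
            pos += 1
    bag[(label, start, pos)] += 1
    return bag, pos - start

def precision_recall(parsed, gold):
    p_edges, _ = edges(parsed)
    g_edges, _ = edges(gold)
    matched = sum((p_edges & g_edges).values())   # multiset intersection
    return matched / sum(p_edges.values()), matched / sum(g_edges.values())

gold = ("S", ("NP", "DT", "NN"),
             ("VP", "VBD", ("NP", "DT", "NN"), ("PP", "IN", ("NP", "DT", "NN"))))
parsed = ("S", ("NP", "DT", "NN"),
               ("VP", "VBD", ("NP", ("NP", "DT", "NN"),
                                    ("PP", "IN", ("NP", "DT", "NN")))))

print(precision_recall(parsed, gold))   # (0.857..., 1.0): the extra adjunction NP edge is unmatched

Here the adjunction-style parse introduces one NP edge absent from the flat gold tree, so precision falls below recall; using multiset intersection ensures that repeated identical edges are credited only as often as they occur in the test trees.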
bitext maps and alignment via pattern recognition that are available in two languages becoming more and more plentiful both in private data warehouses and on publicly accessible sites on the world wide web as with other kinds of data the value of bitexts largely depends on the efficacy of the available data mining tools the first step in extracting useful information from bitexts is to find corresponding words andor segment boundaries in their two halves maps this article advances the state of the art of bitext mapping by formulating the problem in terms of pattern recognition from this point of view the success of a bitext mapping algorithm hinges on how well it performs three tasks signal generation noise filtering and search the smooth injective map recognizer algorithm presented here integrates innovative approaches to each of these tasks objective evaluation has shown that simr accuracy is consistently high for language pairs as diverse as frenchenglish and koreanenglish if necessary simr bitext maps can be efficiently converted into segment alignments using the geometric segment alignment algorithm which is also presented here simr has produced bitext maps for over 200 megabytes of frenchenglish bitexts gsa has converted these maps into alignments both the maps and the alignments are available from the texts that are available in two languages are becoming more and more plentiful both in private data warehouses and on publicly accessible sites on the world wide webas with other kinds of data the value of bitexts largely depends on the efficacy of the available data mining toolsthe first step in extracting useful information from bitexts is to find corresponding words andor text segment boundaries in their two halves this article advances the state of the art of bitext mapping by formulating the problem in terms of pattern recognitionfrom this point of view the success of a bitext mapping algorithm hinges on how well it performs three tasks signal generation noise filtering and searchthe smooth injective map recognizer algorithm presented here integrates innovative approaches to each of these tasksobjective evaluation has shown that simr accuracy is consistently high for language pairs as diverse as frenchenglish and koreanenglishif necessary simr bitext maps can be efficiently converted into segment alignments using the geometric segment alignment algorithm which is also presented heresimr has produced bitext maps for over 200 megabytes of frenchenglish bitextsgsa has converted these maps into alignmentsboth the maps and the alignments are available from the linguistic data consortiumexisting translations contain more solutions to more translation problems than any other existing resource although the above statement was made about translation problems faced by human translators recent research suggests that it also applies to problems in machine translationtexts that are available in two languages also play a pivotal role in various less automated applicationsfor example bilingual lexicographers can use bitexts to discover new crosslanguage lexicalization patterns students of foreign languages can use one half of a bitext to practice their reading skills referring to the other half for translation when they get stuck bitexts are of little use however without an automatic method for matching corresponding text units in their two halvesthe bitext mapping problem can be formulated in terms of pattern recognitionfrom this point of view the success of a bitext mapping algorithm hinges on 
three tasks signal generation noise filtering and searchthis article presents the smooth injective map recognizer a generic pattern recognition algorithm that is partica bitext space ularly well suited to mapping bitext correspondencesimr demonstrates that given effective signal generators and noise filters it is possible to map bitext correspondence with high accuracy in linear space and timeif necessary simr can be used with the geometric segment alignment algorithm which uses segment boundary information to reduce general bitext maps to segment alignmentsevaluations on preexisting gold standards have shown that simr bitext maps and gsa alignments are more accurate than those of comparable algorithms in the literaturethe article begins with a geometric interpretation of the bitext mapping problem and a discussion of previous worksimr is detailed in section 4 and evaluated in section 6section 7 discusses the formal relationship between bitext maps and segment alignmentsthe gsa algorithm for converting from the former to the latter is presented in section 7 and evaluated in section 8each bitext defines a rectangular bitext space as illustrated in figure 1the lower left corner of the rectangle is the origin of the bitext space and represents the two texts beginningsthe upper right corner is the terminus and represents the texts endsthe line between the origin and the terminus is the main diagonalthe slope of the main diagonal is the bitext slopeeach bitext space is spanned by a pair of axesthe lengths of the axes are the lengths of the two component textsthe axes of a bitext space are measured in characters because text lengths measured in characters correlate better than text lengths measured in tokens this correlation is important for geometric bitext mapping heuristics such as those described in section 44although the axes are measured in characters i will argue that word tokens are the optimum level of analysis for bitext mappingby convention each token is assigned the position of its median charactereach bitext space contains a number of true points of correspondence other than the origin and the terminustpcs exist both at the coordinates of matching text units and at the coordinates of matching text unit boundariesif a token at position p on the xaxis and a token at position q on the yaxis are translations of each other then the coordinate in the bitext space is a tpcif a sentence on the xaxis ends at character r and the corresponding sentence on the yaxis ends at character s then the coordinate is a tpcthe 5 is added because it is the intersentence boundaries that correspond rather than the last characters of the sentencessimilarly tpcs arise from corresponding boundaries between paragraphs chapters list items etcgroups of tpcs with a roughly linear arrangement in the bitext space are called chainsbitext maps are injective partial functions in bitext spacesa complete set of tpcs for a particular bitext is the true bitext map the purpose of a bitext mapping algorithm is to produce bitext maps that are the best possible approximations of each bitext tbmearly bitext mapping algorithms focused on finding corresponding sentences although sentence maps are too coarse for some bitext applications sentences were a relatively easy starting point because their order rarely changes during translationtherefore most sentence mapping algorithms ignore the possibility of crossing correspondences and aim to produce only an alignmentgiven parallel texts you and v an alignment is a segmentation of you 
and v into n segments each so that for each i1 i n you and v are mutual translationsan aligned segment pair a is an ordered pair thus an alignment a can also be defined as a sequence of aligned segments aa7 in 1991 two teams of researchers independently discovered that sentences from bitexts involving clean translations can be aligned with high accuracy just by matching sentence sequences with similar lengths both teams approached the alignment problem via maximumlikelihood estimation but using different modelsbrown lai and mercer formulated the problem as a hidden markov model based on a twostage generative processstage one generated some number of aligned segment pairs stage two decided how many segments from each half of the bitext to put in each aligned segment pairbrown lai and mercer took advantage of various lexical quotanchorsquot in the bitext that they were experimenting withthese anchors were also generated by the hmm according to their respective probability functionsall the hidden variables were estimated using the them algorithm gale and church began with a less structured model and proceeded to estimate its parameters through a series of approximationsgiven the set a of all possible alignments the maximumlikelihood alignment is a r g metic pr gale and church first assumed that the probability of any aligned segment pair is independent of any other segment pair next they assumed that the only feature of you and v that influences the probability of gale and church empirically estimated the distributions pr la and pr from a handaligned training bitext and then used dynamic programming to solve equation 5the lengthbased alignment algorithms work remarkably well on language pairs like frenchenglish and germanenglish considering how little information they usehowever length correlations are not as high when either of the languages involved does not use a phonetically based alphabet even in language pairs where the length correlation is high lengthbased algorithms can fumble in bitext regions that contain many segments of similar length like the vote record in table 1the only way to ensure a correct alignment in such cases is to look at the wordsfor this reason chen added a statistical translation model to the brown lai and mercer alignment algorithm and wu added a translation lexicon to the gale and church alignment algorithma translation lexicon t can be represented as a sequence of t entries where each entry is a pair of words t roughly speaking wu extended gale and church method with a matching function m which was equal to one whenever xj e you and yl e v for lexicon entry and zero otherwisethe information in the matching function was then used along with the information in d to condition the probability of alignments in equation 3 from this point wu proceeded along the lines of equations 4 and 5 and the dynamic programming solutionanother interesting approach is possible when partofspeech taggers are available for both languagesthe insight that parts of speech are usually preserved in translation enabled papageorgiou cranias and piperidis to design an alignment algorithm that maximizes the number of matching parts of speech in aligned segmentsit is difficult to compare this algorithm performance to that of other algorithms in the literature because results were only reported for a relatively easy bitexton this bitext the algorithm performance was nearly perfecta translation model between parts of speech would not help on bitext regions like the one in table 1the alignment 
algorithms described above work nearly perfectly given clean bitexts that have easily detectable sentence boundarieshowever bitext mapping at the sentence level is not an option for many bitexts sentences are often difficult to detect especially when punctuation is missing due to ocr errorsmore importantly bitexts often contain lists tables titles footnotes citations andor markup codes that foil sentence alignment methodschurch solution was to map bitext correspondence at the level of the smallest text unitscharacterscharacters match across languages to the extent that they participate in orthographic cognateswords with similar meanings and spellings in different languagessince there are far more characters than sentences in any bitext the quadratic computational complexity of this approach presented an efficiency problemchurch showed how to use a highband filter to find a rough bitext map quicklychurch rough bitext maps were intended for input into dagan church and gale slower algorithm for refinementdagan church and gale used the rough bitext map to define a distancebased model of cooccurrencethen they adapted brown et al statistical translation model 2 to work with this model of cooccurrence2 the information in the translation model was more reliable than characterlevel cognate information so it produced a higher signaltonoise ratio in the bitext spacetherefore dagan church and gale were able to filter out many of the imperfections of the initial bitext mapa limitation of church method and therefore also of dagan church and gale method is that orthographic cognates exist only among languages with similar alphabets fung investigated ways to make these methods useful when cognates cannot be foundfirst working with church she introduced the kvec algorithm which used a rough model of cooccurrence to bootstrap a small translation lexiconthe translation lexicon indicated points of correspondence in the bitext map much the same way as matching character ngramsthese points of correspondence could then be further refined using the methods previously developed by church and dagan church and gale later fung and mckeown improved on kvec by employing relative position offsets instead of a fixed model of cooccurrencethis strategy made the algorithm more robust for noisier bitextssimr borrows several insights from previous worklike the algorithms of gale and church and brown lai and mercer simr exploits the correlation between the lengths of mutual translationslike char_align simr infers bitext maps from likely points of correspondence between the two texts points that are plotted in a twodimensional space of possibilitiesunlike previous methods simr greedily searches for only a small chain of correspondence points at a timethe search begins in a small search rectangle in the bitext space whose diagonal is parallel to the main diagonalthe search for each chain alternates between a generation phase and a recognition phasein the generation phase simr generates candidate points of correspondence within the search rectangle that satisfy the supplied matching predicate as explained in section 42in the recognition phase simr invokes the chain recognition heuristic to select the most likely chain of true points of correspondence among the generated pointsthe most likely chain of tpcs is the set of points whose geometric arrangement most resembles the typical arrangement of tpcsthe parameters of the chain recognition heuristic are optimized on a small training bitextif no suitable chains are found the search 
rectangle is proportionally expanded by the minimum possible amount and the generationrecognition cycle is repeatedthe rectangle keeps expanding until at least one acceptable chain is foundif more than one acceptable chain is found in the same cycle simr accepts the chain whose points are least dispersed around its leastsquares lineeach time simr accepts a chain it moves the search rectangle to another region of the bitext space to search for the next chainsimr employs a simple heuristic to select regions of the bitext space to searchto a first approximation true bitext maps are monotonically increasing functionsthis means that if simr accepts one chain it should look for others either above and to the right or below and to the left of the one it has just foundall simr needs is a place to start the trace and a good place to start is at the beginningsince the origin of the bitext space is always a tpc the first search rectangle is anchored at the originsubsequent search rectangles are anchored at the top right corner of the previously found chain as shown in figure 2the expanding rectangle search strategy makes simr robust in the face of tbm discontinuitiesfigure 2 shows a segment of the tbm that contains a vertical gap as the search rectangle grows it will eventually intersect with the tbm even if the discontinuity is quite large the noise filter described in section 43 reduces the chances that simr will be led astray by false points of correspondencebefore simr can decide where to generate candidate points of correspondence it must be told which pairs of words have coordinates within the boundaries of the current search rectanglethe mapping from tokens to axis positions is performed by a languagespecific axis generator simr calls one of its matching predicates on each pair of tokens whose coordinate falls within the search rectanglea matching predicate is a heuristic for deciding whether two given tokens might be mutual translationstwo kinds of information that a matching predicate can rely on most often are cognates and translation lexiconstwo words are orthographic cognates if they have the same meaning and similar spellingssimilarity of spelling can be measured in more or less complicated waysthe first published attempt to exploit cognates for bitext mapping purposes deemed two alphabetic tokens cognates if their first four characters were identicalthis criterion proved surprisingly effective given its simplicity however like all heuristics it produced some false positives and some false simr quotexpanding rectanglequot search strategythe search rectangle is anchored at the top right corner of the previously accepted chainits diagonal remains parallel to the main diagonal negativesan example of a false negative is the word pair government and gouvernement the false positives were often words with a big difference in length like conseil and conservativethese examples suggest that a more accurate cognate criterion can be driven by approximate string matchingfor example mcenery and oakes threshold the dice coefficient of matching character bigrams in each pair of candidate cognatesthe matching predicates in simr current implementation threshold the longest common subsequence ratio the lcsr of two tokens is the ratio of the length of their longest common subsequence and the length of the longer tokenin symbols for example gouvernement which is 12 characters long has 10 characters that appear in the same order in governmentso the lcsr for these two words is 1012on the other hand the lcsr 
for conseil and conservative is only 612a simple dynamic programming algorithm can compute the lcs in 0a rather more complicated algorithm can compute it in 0 time on average when dealing with language pairs that have different alphabets the matching predicate can employ phonetic cognateswhen language ll borrows a word from language l2 the word is usually written in l1 similarly to the way it sounds in l2thus french and russian p0rtmana are cognates as are english sistam and japanese igisutemufor many languages it is not difficult to construct an approximate mapping from the orthography to its underlying phonological formgiven such a mapping for ll and l2 it is possible to identify cognates despite incomparable orthographiesknight and graehl have shown that it is possible to find phonetic cognates even between languages whose writing systems are as different as those of english and japanesethey have built a weighted finitestate automaton based on empirically estimated probability distributions for backtransliterating english loan words written in katakana into their original english formthe wfsa efficiently represents a large number of transliteration probabilities between words written in the katakana and roman alphabetsstandard finitestate techniques can efficiently find the most likely path through the wfsa from a japanese word written in katakana to an english wordthe weight of the most likely path is an estimate of the probability that the former is a transliteration of the latterthresholding this probability would lead to a phonetic cognate matching predicate for englishjapanese bitextsthe threshold would need to be optimized together with simr other parameters the same way the lcsr threshold is currently optimized cognates are more common in bitexts from more similar language pairs and from text genres where more word borrowing occurs such as technical textsin the nontechnical canadian hansards an lcsr cutoff of 58 finds cognates for roughly one quarter of all text tokenseven distantly related languages like english and czech will share a large number of orthographic cognates in the form of proper nouns numerals and punctuationwhen one or both of the languages involved is written in pictographs cognates can still be found among punctuation and numeralshowever these kinds of cognates are usually too sparse to build an accurate bitext map fromwhen the matching predicate cannot generate enough candidate correspondence points based on cognates its signal can be strengthened by a seed translation lexicon a simple list of word pairs that are believed to be mutual translationsseed translation lexicons can be extracted from machinereadable bilingual dictionaries in the rare cases where mrbds are availablein other cases they can be constructed automatically or semiautomatically using any of several published methods 3 a matching predicate based on a seed translation lexicon deems two candidate tokens to be mutual translations if the token pair appears in the lexiconsince the matching predicate need not be perfectly accurate the seed translation lexicons need not be perfectly accurate eitherall the matching predicates described above can be finetuned with stop lists for one or both languagesfor example closedclass words are unlikely to have cognatesindeed frenchenglish words like a an on and par often produce spurious points of correspondencethe same problem is caused by faux amis these are words with similar spellings but different meanings in different languagesfor example the french word 
librarie means bookstore not library and actuel means current not actuala matching predicate can use a list of closedclass words andor a list of pairs of faux amis to filter out spurious matches3 most published methods for automatically constructing translation lexicons require a preexisting bitext map which seems to render them useless for the purposes of bitext mapping algorithmsfortunately only one seed translation lexicon is required for each language pair or at worst for each sublanguageif we expect to map many bitexts in the same language pair then it becomes feasible to spend a few hours creating one bitext map by handmelamed explains how to do so quickly and efficientlybetter yet fung shows how it may be possible to extract a small translation lexicon and a rough bitext map simultaneouslyfrequent word types because false points of correspondence that line up in rows and columnsinspection of several bitext spaces has revealed a common noise pattern illustrated in figure 3it consists of correspondence points that line up in rows or columns associated with frequent word typesword types like the english article a can produce one or more correspondence points for almost every sentence in the opposite textonly one point of correspondence in each row and column can be correct the rest are noiseit is difficult to measure exactly how much noise is generated by frequent tokens and the proportion is different for every bitextinformal inspection of some bitext spaces indicated that frequent tokens are often responsible for the lion share of the noisereducing this source of noise makes it much easier for simr to stay on trackother bitext mapping algorithms mitigate this source of noise either by assigning lower weights to correspondence points associated with frequent word types or by deleting frequent word types from the bitext altogether however a word type that is relatively frequent overall can be rare in some parts of the textin those parts the word type can provide valuable clues to correspondenceon the other hand many tokens of a relatively rare type can be concentrated in a short segment of the text resulting in many false correspondence pointsthe varying concentration of identical tokens suggests that more localized noise filters would be more effectivesimr localized search strategy provides a vehicle for a localized noise filterthe filter is based on the maximum point ambiguity level parameterfor each point p let x be the number of points in column x within the search rectangle and let y be the number of points in row y within the search rectanglethe ambiguity level of p is defined as x y 2in particular if p is the only point in its row and in its column then its ambiguity level is zerothe chain recognition heuristic ignores simr noise filter makes an important contribution to the signaltonoise ratio in the bitext spaceeven if one chain of false points of correspondence slips by the chain recognition heuristic the expanding rectangle is likely to find its way back to the tbm trace before the chain recognition heuristic accepts another chain points whose ambiguity level is too highwhat makes this a localized filter is that only points within the search rectangle count toward each other ambiguity levelthe ambiguity level of a given point can change when the search rectangle expands or movesthe noise filter ensures that false points of correspondence are relatively sparse as illustrated in figure 4even if one chain of false points of correspondence slips by the chain recognition 
heuristic the expanding rectangle is likely to find its way back to the tbm trace before the chain recognition heuristic accepts another chainif the matching predicate generates a reasonably strong signal then the signaltonoise ratio will be high and simr is not likely to get lost even though it is a greedy algorithm with no ability to look aheadafter noise filtering most tpc chains conform to the pattern illustrated in figure 5the pattern can be characterized by three properties simr exploits these properties to decide which chains might be tpc chainsfirst chains that lack the injectivity property are rejected outrightthe remaining chains are filtered using two threshold parameters maximum point dispersal and maximum angle deviationthe linearity of each chain is measured as the root mean squared typical pattern of candidate points of correspondence in a bitext space after noise filteringthe true points of correspondence trace the true bitext map parallel to the main diagonal distance of the chain points from the chain leastsquares lineif this distance exceeds the maximum point dispersal threshold the chain is rejectedthe angle of each chain leastsquares line is compared to the arctangent of the bitext slopeif the difference exceeds the maximum angle deviation threshold the chain is rejectedin a search rectangle containing n points there are 211 possible chainstoo many to search by brute forcethe properties of tpcs listed above provide two ways to constrain the searchthe linearity property leads to a constraint on the chain sizechains of only a few points are unreliable because they often line up straight by coincidencechains that are too big will span too long a segment of the tbm to be well approximated by a linesimr uses a fixed chain size k 6 k 11the exact value of k is optimized together with the other parameters as described in section 5fixing the chain size at k reduces the number of candidate chains to nfor typical values of n and k can still reach into the millionsthe low variance of slope property suggests another constraint simr should consider only chains that are roughly parallel to the main diagonaltwo lines are parallel if the perpendicular displacement between them is constantso chains that are roughly parallel to the main diagonal will consist of points that all have roughly the same displacement from the main diagonalpoints with similar displacement can be grouped together by sorting as illustrated in figure 6then chains that are most parallel to the main diagonal will be contiguous subsequences of the sorted point sequencein a region of the bitext space containing n points there will be only n k 1 such subsequences of length k the most computationally expensive step in the chain recognition process is the insertion of candidate points into the sorted point sequencethe following subsections describe two of the more interesting enhancements in the current simr implementation461 overlapping chainssimr fixed chain size imposes a rather arbitrary fragmentation on the tbm traceeach chain starts at the topright corner of the previously found chain but these chain boundaries are independent of discontinuities or angle variations in the tbm tracetherefore simr is likely to miss tpcs wherever the tbm is not linearone way to make simr more robust is to start the search rectangle just above the lowest point of the previously found chain instead of just above the highest pointif the chain size is fixed at k then each linear stretch of s tpcs will result in s k 1 overlapping 
chainsunfortunately this solution introduces another problem two overlapping chains can be inconsistentthe injective property of tbms implies that whenever two chains overlap in the x or y dimensions but are not identical in the region of overlap then one of the chains must be wrongto resolve such conflicts simr employs a postprocessing algorithm to eliminate conflicting chains one at a time until all remaining chains are pairwise consistentthe conflict resolution algorithm is based on the heuristic that chains that conflict with a larger number of other chains are more likely to be wrongthe algorithm sorts all chains with respect to how many other chains they conflict with and eliminates them in this sort order one at a time until no conflicts remainwhenever two or more chains are tied in the sort order the conflict resolution algorithm eliminates all but the chain with the least point dispersal462 additional search passesto ensure that simr rejects spurious chains the maximum angle deviation threshold must be set lowhowever like any heuristic filter this one will reject some perfectly valid candidatesif a more precise bitext map is desired some of these valid chains can be recovered during an extra sweep through the bitext spacesince bitext maps are mostly injective valid chains that are rejected by the angle deviation filter usually occur between two accepted chains as shown in figure 7if chains c and d are accepted as valid then the slope of the tbm between the end of chain c and the start of chain d must be much closer to the slope of chain x than to the slope of the main diagonalchain x should be acceptedduring a second pass through the bitext space simr searches for sandwiched chains in any space between two accepted chains that is large enough to accommodate another chainthis subspace of the bitext space will have its own main diagonalthe slope of this local main diagonal can be quite different from the slope of the global main diagonalan additional search through the bitext space also enables simr to recover chains that were missed because of an inversion in the translationnonmonotonic tbm segments result in a characteristic map pattern as a consequence of the injectivity of bitext mapssimr has no problem with small nonmonotonic segments inside chainshowever the expanding rectangle search strategy can miss larger nonmonotonic segments that do not fit inside one chainin figure 8 the vertical range of segment j corresponds to a vertical gap in simr firstpass mapthe horizontal range of segment j corresponds to a horizontal gap in simr firstpass mapsimilarly any nonmonotonic segment of the tbm will occupy the intersection of a vertical gap and a horizontal gap in the monotonic firstpass mapfurthermore switched segments are usually adjacent and relatively shorttherefore to recover nonmonotonic segments of the tbm simr needs only to search gap intersections that are close to the firstpass mapthere are usually very few such intersections that are large enough to accommodate new chains segments i and j switched places during translationany nonmonotonic segment of the tbm will occupy the intersection of a vertical gap and a horizontal gap in the monotonic firstpass mapthese larger nonmonotonic segments can be recovered during a second sweep through the bitext space so the secondpass search requires only a small fraction of the computational effort of the first passsimr parametersthe fixed chain size the lcsr threshold used in the matching predicate and the thresholds for maximum point 
dispersal maximum angle deviation and maximum point ambiguityinteract in complicated waysideally simr should be reparameterized so that its parameters are pairwise independentthen it may be possible to optimize the parameters analytically or at least in a probabilistic frameworkfor now the easiest way to optimize these parameters is via simulated annealing a simple general framework for optimizing highly interdependent parameter setssimulated annealing requires an objective function to optimizethe objective function for bitext mapping should measure the difference between the tbm and the interpolated bitext maps produced with the current parameter setin geometric terms the difference is a distancethe tbm consists of a set of tpcsthe distance between a bitext map and each tpc can be defined in a number of waysthe simplest metrics are the horizontal distance or the vertical distance but these metrics measure the error with respect to only one language or the othera more robust average is the distance perpendicular to the main diagonalin order to penalize large errors more heavily root mean squared distance rather than mean distance should be minimizedthere is a slight complication in the computation of distances between two partial functions in that linear interpolation is not welldefined for nonmonotonic sets of pointsit would be incorrect to simply connect the dots left to right because the resulttwo text segments at the end of sentence a were switched during translation resulting in a nonmonotonic segmentto interpolate injective bitext maps nonmonotonic segments must be encapsulated in minimum enclosing rectangles a unique bitext map can then be interpolated by using the lower left and upper right corners of the mer instead of using the nonmonotonic correspondence points irtg function may not be injectiveto interpolate injective bitext maps nonmonotonic segments must be encapsulated in minimum enclosing rectangles as shown in figure 9a unique bitext map results from interpolating between the lower left and upper right corners of the mer instead of using the nonmonotonic correspondence pointssimr parameters were optimized by simulated annealing as described in the previous sectiona separate optimization was performed on separate training bitexts for each of three language pairssimr was then evaluated on previously unseen test bitexts in the three language pairsthe objective function for optimization and the evaluation metric were the root mean squared distance in characters between each tpc and the interpolated bitext map produced by simr where the distance was measured perpendicular to the main diagonaltables 2 and 3 report simr errors on the training and test bitexts respectivelythe tbm samples used for training and testing were derived from segment alignmentsall the bitexts had been manually aligned by bilingual annotators the alignments were converted into sets of coordinates in the bitext space by pairing the character positions at the ends of aligned segment pairsthis ibm sampling method artificially reduced the error estimatesmost of the aligned segments were sentences which ended with a periodwhenever simr matched the periods correctly the interpolated bitext map was pulled close to the tpc even though it may have been much farther off in the middle of the sentencethus the results in table 3 should be considered only relative to each other and to other results obtained under the same experimental conditionsit would be impressive indeed if any bitext mapping algorithm actual rms 
error were less than 1 character on bitexts involving languages with different word order such as englishkoreanthe matching predicates for frenchenglish and spanishenglish relied on an lcsr threshold to find cognatesthe korean text contained some roman character strings so the matching predicate for koreanenglish generated candidate points of correspondence whenever one of these strings coordinated in the search rectangle with an identical string in the english half of the bitexta seed translation lexicon was also used to strengthen the koreanenglish signalin addition english french spanish and korean stop lists were used to prevent matches of closedclass wordsthe translation lexicon and stop lists had been previously developed independently of the training and test bitextsthe frenchenglish part of the evaluation was performed on bitexts from the publicly available corpus de bitexte anglaisfrangais simr error distribution on the quotparliamentary debatesquot bitext in this collection is given in table 4this distribution can be compared to the error distributions reported for the same test set by dagan church and gale who reported parts of their error distribution in words rather than in characters quotin 55 of the cases there is no error in word_align output in 73 the distance from the correct alignment is at most 1 and in 84 the distance is at most 3quot these distances were measured horizontally from the bitext map rather than perpendicularly to the main diagonalgiven the bitext slope for that bitext and a conservative estimate of 6 characters per word each horizontal word of error corresponds to just over 4 characters of error perpendicular to the main diagonalthus dagan church and gale quotno errorquot is the same as comparison of error distributions for simr and word_align on the parliamentary debates bitexterror of at most error of at most error of at most algorithm 2 characters 6 characters 14 characters word_align 55 73 84 simr 93 97 98 2 characters of error or less ie less than half a wordone word of error is the same as an error of up to 6 characters and 3 words are equivalent to 4 31 14 characterson this basis table 5 compares the accuracy of simr and word_align5 another interesting comparison is in terms of maximum errorcertain applications of bitext maps such as the one described by melamed can tolerate many small errors but no large onesas shown in table 4 simr bitext map was never off by more than 185 characters from any of the 7123 segment boundaries185 characters is about 15 times the length of an average sentence the input to word_align is the output of char_align and dagan church and gale have reported that word_align cannot escape from char_align worst errorsan independent implementation of char_align erred by more than one thousand characters on the same bitextthe spanishenglish and koreanenglish bitexts were handaligned when simr was being ported to these language pairs6 the spanishenglish bitexts were drawn from the sun solaris answerbooks and handaligned by philip resnikthe korean english bitexts were provided by mit lincoln laboratories and handaligned by youngsuk leetable 3 shows that simr performance on spanishenglish and koreanenglish bitexts is no worse than its performance on frenchenglish bitextsthe results in table 3 were obtained using a version of simr that included all the enhancements described in section 46it is interesting to consider the degree to which each enhancement improves performancei remapped the frenchenglish bitexts listed in table 3 with 
two strippeddown versions of simrone version was basic simr without any enhancementsthe other version incorporated overlapping chains but performed only one search passthe deterioration in performance varied widelyfor example on the parliamentary debates bitext the rms error rose from 57 to 16 when only one search pass was allowed but rose only another 2 points to 18 using nonoverlapping chainsin contrast on the youn annual report bitext the extra search passes made no difference at all but nonoverlapping chains increased the rms error from 12 to 40for most of the other bitexts each enhancement reduced the rms error by a few characters compared to the basic versionhowever the improvement was not universal the rms error of the basic simr was 19 for the quotother technical reportquot on which the enhanced simr scored 21the expected value of the enhancements is difficult to predict because each enhancement is aimed at solving a particular pattern recognition problem and each problem may or may not occur in a given bitextthe relationship between geometric patterns in tpc chains and syntactic properties of bitexts is a ripe research topicsimr has no idea that words are often used to make sentencesit just outputs a series of corresponding token positions leaving users free to draw their own conclusions about how the texts larger units correspondhowever many existing translators tools and machine translation strategies depend on aligned sentences or other aligned text segmentswhat can simr do for themformally an alignment is a correspondence relation that does not permit crossing correspondencesthe rest of this article presents the geometric segment alignment algorithm which uses segment boundary information to reduce the correspondence relation in simr output to a segment alignmentthe gsa algorithm can be applied equally well to sentences paragraphs lists of items or any other text units for which boundary information is availablea set of correspondence points supplemented with segment boundary information expresses segment correspondence which is a richer representation than segment alignmentfigure 10 illustrates how segment boundaries form a grid over the bitext spaceeach cell in the grid represents the intersection of two segments one from each half of the bitexta point of correspondence inside cell indicates that some token in segment x corresponds with some token in segment y ie segments x and y correspondfor example figure 10 indicates that segment e corresponds with segments g and h in contrast to a correspondence relation quotan alignment is a segmentation of the segment boundaries form a grid over the bitext spaceeach cell in the grid represents the product of two segments one from each half of the bitexta point of correspondence inside cell indicates that some token in segment x corresponds with some token in segment y ie the segments x and y correspondso for example segment e corresponds with segment d the aligned blocks are outlined with solid lines two texts such that the nth segment of one text is the translation of the nth segment of the otherquot for example given the token correspondences in figure 10 the segment should be aligned with the segment if segments align with segments then figure 10 provides another illustrationif instead of the point in cell there was a point in cell the correct alignment for that region would still be if there were points of correspondence in both and the correct alignment would still be the sameyet the three cases are clearly differentif a 
lexicographer wanted to see a word in segment g in its bilingual context it would be useful to know whether segment f is relevantgiven a sequence of segment boundaries for each half of a bitext the geometric segment alignment algorithm reduces sets of correspondence points to segment alignmentsthe algorithm first step is to perform a transitive closure over the input correspondence relationfor instance if the input contains and then gsa adds the pairing next gsa forces all segments to be contiguous if segment y corresponds with segments x and z but not y the pairing is addedin geometric terms these two operations arrange all cells that contain points of correspondence into nonoverlapping rectangles while adding as few cells as possiblethe result is an alignment relationa complete set of tpcs together with appropriate boundary information guarantees a perfect alignmentalas the points of correspondence postulated by simr are neither complete nor noisefreesimr makes errors of omission and errors of commissionfortunately the noise in simr output causes alignment errors in predictable waysgsa employs several backingoff heuristics to reduce the number of errorstypical errors of commission are stray points of correspondence like the one in cell in figure 10this point indicates that and should form a 2 x 2 aligned block whereas the lengths of the component segments suggest that a pair of 1 x 1 blocks is more likelyin a separate development bitext i have found that simr is usually wrong in these casesto reduce such errors gsa asks gale church lengthbased alignment algorithm for a second opinion on any aligned block that is not 1 x 1whenever the lengthbased algorithm prefers a more finegrained alignment its judgement overrules simrtypical errors of omission are illustrated in figure 10 by the complete absence of correspondence points between segments and this empty block of segments is sandwiched between aligned blocksit is highly likely that at least some of these segments are mutual translations despite simr failure to find any points of correspondence between themtherefore gsa treats all sandwiched empty blocks as aligned blocksif an empty block is not 1x1 gsa realigns it using gale and church lengthbased algorithm just as it would realign any other manytomany aligned blockthe most problematic cases involve an error of omission adjacent to an error of commission as in blocks and if the point in cell should really be in cell then realignment inside the erroneous blocks would not solve the problema naive solution is to merge these blocks and then to realign them using a lengthbased methodunfortunately this kind of alignment pattern ie ox 1 followed by 2 x 1 is surprisingly often correctlengthbased methods assign low probabilities to such pattern sequences and usually get them wrongtherefore gsa also considers the confidence level with which the lengthbased alignment algorithm reports its realignmentif this confidence level is sufficiently high gsa accepts the lengthbased realignment otherwise the alignment indicated by simr points of correspondence is retainedthe minimum confidence at which gsa trusts the lengthbased realignment is a gsa parameter which has been optimized on a separate development bitext8evaluation of gsa gsa processed two bitext maps produced by simr using two different matching predicatesthe first matching predicate relied only on cognates that pass a certain lcsr threshold as described in section 42the second matching predicate was like the first except that it also generated a 
point of correspondence whenever the input token pair appeared as an entry in a translation lexiconthe translation lexicon was automatically extracted from an mrbd bitexts involving millions of segments are becoming more and more commonbefore comparing bitext alignment algorithms in terms of accuracy it is important to compare their asymptotic running timesin order to run a quadratictime alignment algorithm in a reasonable amount of time on a large bitext the bitext must be presegmented into a set of smaller bitextswhen a bitext contains no easily recognizable quotanchorsquot such as paragraphs or sections this firstpass alignment must be done manuallygiven a reasonably good bitext map gsa expected running time is linear in the number of input segment boundariesin all the bitexts on which gsa was trained and tested the points of correspondence in simr output were sufficiently dense and accurate that gsa backed off to a quadratictime alignment algorithm only for very small aligned blocksfor example when the seed translation lexicon was used in simr matching predicate the largest aligned block that needed to be realigned was 5 x 5 segmentswithout the seed translation lexicon the largest realigned block was 7x 7 segmentsthus gsa can obviate the need to manually prealign large bitextstable 6 compares gsa accuracy on the quoteasyquot and quothardquot frenchenglish bitexts with the accuracy of two other alignment algorithms as reported by simard foster and isabelle the error metric counts one error for each aligned block in the reference alignment that is missing from the test alignmentto account for the possibility of modularizing the overall alignment task into paragraph alignment followed by sentence alignment simard foster and isabelle have reported the accuracy of their sentence alignment algorithm when a perfect alignment at the paragraph level is givensimrgsa was also tested in this manner to enable the second set of comparisons in table 6due to the scarcity of handaligned training bitexts at my disposal gsa backingoff heuristics are somewhat ad hoceven so gsa performs at least as well as and usually better than other alignment algorithms for which comparable results have been publishedchen has also published a quantitative evaluation of his alignment algorithm on these reference bitexts but his evaluation was done post hocsince the results in this article are based on a gold standard they are not comparable to chen resultsamong other reasons error rates based on a gold standard are sometimes inflated by errors in the gold standard and this was indeed the case for the gold standard used here it is also an open question whether gsa performs better than the algorithm proposed by wu the two algorithms have not yet been evaluated on the same test datafor now i can offer only a theoretical reason why simrgsa should be more accurate than the algorithms of chen and wu bitext maps lead to alignment more directly than a translation model or a translation lexicon because both segment alignments and bitext maps are relations between token instances rather than between token typesmore important than gsa current accuracy is gsa potential accuracywith a bigger development bitext more effective backingoff heuristics can be developedbetter input can also make a difference gsa accuracy will improve in lockstep with simr accuracythe smooth injective map recognizer is based on innovative approaches to each of the three main components of a bitext mapping algorithm signal generation noise filtering and 
searchthe advances in signal generation stemmed from the use of wordbased matching predicateswhen wordpair coordinates are plotted in a cartesian bitext space the geometric heuristics of existing sentence alignment algorithms can be exploited just as easily and to a greater extent at the word levelthe cognate heuristic of characterbased bitext mapping algorithms also works better at the word level because cognateness can be defined more precisely in terms of words eg using the longest common subsequence ratiomost importantly matching heuristics based on existing translation lexicons can be defined only at the word levelwhen neither cognates nor sentence boundaries can be found we can still map bitexts in any pair of languages using a small handconstructed translation lexiconto complement wordbased matching predicates i have proposed localized noise filteringlocalized noise filters are more accurate than global ones because they are sensitive to local variations in noise distributionsthe combination of a strong signal and an accurate noise filter enables localized search heuristicslocalized search heuristics can directly exploit the geometric tendencies of tpc chains in order to search the bitext space in linear space and timethis level of efficiency is particularly important for large bitextssimr also advances the state of the art of bitext mapping on several other criteriaevaluation on preexisting gold standards has shown that simr can map bitexts with high accuracy in a variety of language pairs and text genres without getting lostsimr is robust in the face of translation irregularities like omissions and allows crossing correspondences to account for wordorder differencessimr encapsulates its languagespecific heuristics so that it can be ported to any language pair with a minimal effort these features make simr one of the most widely applicable bitext mapping algorithms published to datefor applications that require it simr bitext maps can be efficiently reduced to segment alignments using the geometric segment alignment algorithm presented hereadmittedly gsa is only useful when a good bitext map is availablein such cases there are three reasons to favor gsa over other options for alignment one it is simply more accuratetwo its expected running time is linear in the size of the bitexttherefore three it is not necessary to manually prealign large bitexts before input to gsathere are numerous ways to improve on the methods presented hereif simr can be reparameterized so that its parameters are pairwise independent then it may be possible to optimize these parameters analytically or at least within a wellfounded probabilistic frameworklikewise the parameters in gsa backingoff heuristics and the heuristics themselves were partially dictated by the scarcity of suitable training data at the time that gsa was being developedall of this is to say that the details of the current implementations of simr and gsa are less important than the general approach to bitext mapping advocated herereviewersthe majority of this work was done at the department of computer and information science of the university of pennsylvania where it was supported by an equipment grant from sun microsystems and partially funded by aro grant daal038900031 prime and by arpa grants n00014901863 and n6600194c6043
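to make the chain-recognition filters described above concrete, the following python sketch checks one candidate chain against the three properties of tpc chains: injectivity, linearity (the root mean squared distance of the chain's points from its least-squares line, compared with the maximum point dispersal threshold), and the deviation of the line's angle from the arctangent of the bitext slope (compared with the maximum angle deviation threshold). this is an illustrative sketch, not melamed's implementation: the function and parameter names are invented here, and in simr the thresholds are optimized together with the other parameters by simulated annealing.

```python
import math

def chain_passes_filters(points, bitext_slope,
                         max_point_dispersal, max_angle_deviation):
    """Illustrative chain filter (names invented here, not SIMR's own code).
    points: list of (x, y) candidate correspondence points forming one chain;
    in SIMR a chain has a fixed size of 6 to 11 points, so n >= 2 below."""
    n = len(points)
    xs = [x for x, _ in points]
    ys = [y for _, y in points]

    # Injectivity: no two points in the chain may share an x or a y coordinate.
    if len(set(xs)) < n or len(set(ys)) < n:
        return False

    # Least-squares line y = a*x + b through the chain's points.
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)   # > 0 because the xs are distinct
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    a = sxy / sxx
    b = mean_y - a * mean_x

    # Linearity: root mean squared perpendicular distance of the points from the line.
    rms = math.sqrt(sum((a * x + b - y) ** 2 for x, y in points) / (n * (a * a + 1)))
    if rms > max_point_dispersal:
        return False

    # Slope: the line's angle must stay close to the arctangent of the bitext slope.
    if abs(math.atan(a) - math.atan(bitext_slope)) > max_angle_deviation:
        return False

    return True
```

a chain is accepted only if it passes all three tests; as described above, overlapping chains that survive these filters are then reconciled by the conflict-resolution heuristic.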
J99-1003
bitext maps and alignment via pattern recognition. texts that are available in two languages are becoming more and more plentiful, both in private data warehouses and on publicly accessible sites on the world wide web. as with other kinds of data, the value of bitexts largely depends on the efficacy of the available data mining tools. the first step in extracting useful information from bitexts is to find corresponding words and/or text segment boundaries in their two halves. this article advances the state of the art of bitext mapping by formulating the problem in terms of pattern recognition. from this point of view, the success of a bitext mapping algorithm hinges on how well it performs three tasks: signal generation, noise filtering, and search. the smooth injective map recognizer (simr) algorithm presented here integrates innovative approaches to each of these tasks. objective evaluation has shown that simr's accuracy is consistently high for language pairs as diverse as french-english and korean-english. if necessary, simr bitext maps can be efficiently converted into segment alignments using the geometric segment alignment (gsa) algorithm, which is also presented here. simr has produced bitext maps for over 200 megabytes of french-english bitexts, and gsa has converted these maps into alignments; both the maps and the alignments are available from the linguistic data consortium. we normalize the lcs by dividing the length of the longest common subsequence by the length of the longer string, and call it the longest common subsequence ratio (lcsr).
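the lcsr definition in the last sentence above is simple to state in code. the following python sketch, included here for illustration only, shows a cognate matching predicate based on the longest common subsequence ratio: the lcs length is computed by standard dynamic programming, normalized by the length of the longer string, and compared against a threshold. the function names and the default threshold are assumptions for exposition; 0.58 is the cutoff the text reports for the non-technical canadian hansards, and in simr the threshold is optimized together with the algorithm's other parameters.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b
    (standard dynamic programming with one rolling row)."""
    prev = [0] * (len(b) + 1)
    for ch_a in a:
        curr = [0]
        for j, ch_b in enumerate(b, start=1):
            if ch_a == ch_b:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[len(b)]


def lcsr(a, b):
    """Longest common subsequence ratio: LCS length divided by the
    length of the longer string."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))


def is_cognate(token1, token2, threshold=0.58):
    """Matching predicate sketch: treat a token pair as a candidate
    cognate if its LCSR meets the threshold (threshold is assumed here;
    SIMR tunes it together with its other parameters)."""
    return lcsr(token1, token2) >= threshold


# lcsr("conseil", "conservative") == 6/12 == 0.5, the value cited earlier
# in the text, so under a 0.58 cutoff the pair is not matched as a cognate.
```

the same predicate generalizes to phonetic cognates by first mapping both tokens to a common phonological representation, as discussed in the body of the article.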
supertagging an approach to almost parsing in this paper we have proposed novel methods for robust parsing that integrate the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniques our thesis is that the computation of linguistic structure can be localized if lexical items are associated with rich descriptions that impose complex constraints in a local context the supertags are designed such that only those elements on which the lexical item imposes constraints appear within a given supertag further each lexical item is associated with as many supertags as the number of different syntactic contexts in which the lexical item can appear this makes the number of different descriptions for each lexical item much larger than when the descriptions are less complex thus increasing the local ambiguity for a parser but this local ambiguity can be resolved by using statistical distributions of supertag cooccurrences collected from a corpus of parses we have explored these ideas in the context of the lexicalized treeadjoining grammar framework the supertags in ltag combine both phrase structure information and dependency information in a single representation supertag disambiguation results in a representation that is effectively a parse and the parser need quotonlyquot combine the individual supertags this method of parsing can also be used to parse sentence fragments such as in spoken utterances where the disambiguated supertag sequence may not combine into a single structure in this paper we have proposed novel methods for robust parsing that integrate the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniquesour thesis is that the computation of linguistic structure can be localized if lexical items are associated with rich descriptions that impose complex constraints in a local contextthe supertags are designed such that only those elements on which the lexical item imposes constraints appear within a given supertagfurther each lexical item is associated with as many supertags as the number of different syntactic contexts in which the lexical item can appearthis makes the number of different descriptions for each lexical item much larger than when the descriptions are less complex thus increasing the local ambiguity for a parserbut this local ambiguity can be resolved by using statistical distributions of supertag cooccurrences collected from a corpus of parseswe have explored these ideas in the context of the lexicalized treeadjoining grammar frameworkthe supertags in ltag combine both phrase structure information and dependency information in a single representationsupertag disambiguation results in a representation that is effectively a parse and the parser need quotonlyquot combine the individual supertagsthis method of parsing can also be used to parse sentence fragments such as in spoken utterances where the disambiguated supertag sequence may not combine into a single structurein this paper we present a robust parsing approach called supertagging that integrates the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniquesthe idea underlying the approach is that the computation of linguistic structure can be localized if lexical items are associated with rich descriptions that impose complex constraints in a local contextthis makes the number of different descriptions for each lexical item much larger than when the descriptions are less 
complex thus increasing the local ambiguity for a parserhowever this local ambiguity can be resolved by using statistical distributions of supertag cooccurrences collected from a corpus of parsessupertag disambiguation results in a representation that is effectively a parse in the linguistic context there can be many ways of increasing the complexity of descriptions of lexical itemsthe idea is to associate lexical items with descriptions that allow for all and only those elements on which the lexical item imposes constraints to be within the same descriptionfurther it is necessary to associate each lexical item with as many descriptions as the number of different syntactic contexts in which the lexical item can appearthis of course increases the local ambiguity for the parserthe parser has to decide which complex description out of the set of descriptions associated with each lexical item is to be used for a given reading of a sentence even before combining the descriptions togetherthe obvious solution is to put the burden of this job entirely on the parserthe parser will eventually disambiguate all the descriptions and pick one per lexical item for a given reading of the sentencehowever there is an alternate method of parsing that reduces the amount of disambiguation done by the parserthe idea is to locally check the constraints that are associated with the descriptions of lexical items to filter out incompatible descriptions1 during this disambiguation the system can also exploit statistical information that can be associated with the descriptions based on their distribution in a corpus of parseswe first employed these ideas in the context of lexicalized tree adjoining grammars in joshi and srinivas although presented with respect to ltag these techniques are applicable to other lexicalized grammars as wellin this paper we present vastly improved supertag disambiguation resultsfrom previously published 68 accuracy to 92 accuracy using a larger training corpus and better smoothing techniquesthe layout of the paper is as follows in section 2 we present an overview of the robust parsing approachesa brief introduction to lexicalized tree adjoining grammars is presented in section 3section 4 illustrates the goal of supertag disambiguation through an examplevarious methods and their performance results for supertag disambiguation are discussed in detail in section 5 and section 6in section 7 we discuss the efficiency gained in performing supertag disambiguation before parsinga robust and lightweight dependency analyzer that uses the supertag output is briefly presented in section 8in section 9 we will discuss the applicability of supertag disambiguation to other lexicalized grammarsin recent years there have been a number of attempts at robust parsing of natural languagethey can be broadly categorized under two paradigmsfinitestategrammarbased parsers and statistical parserswe briefly present these two paradigms and situate our approach to robust parsing relative to these paradigmsfinitestategrammarbased approaches to parsing are exemplified by the parsing systems in joshi abney appelt et al roche grishman hobbs et al joshi and hopely and karttunen et al these systems use grammars that are represented as cascaded finitestate regular expression recognizersthe regular expressions are usually handcraftedeach recognizer in the cascade provides a locally optimal outputthe output of these systems is mostly in the form of noun groups and verb groups rather than constituent structure often called a 
shallow parsethere are no clauselevel attachments or modifier attachments in the shallow parsethese parsers always produce one output since they use the longestmatch heuristic to resolve cases of ambiguity when more than one regular expression matches the input string at a given positionat present none of these systems use any statistical information to resolve ambiguitythe grammar itself can be partitioned into domainindependent and domainspecific regular expressions which implies that porting to a new domain would involve rewriting the domaindependent expressionsthis approach has proved to be quite successful as a preprocessor in information extraction systems pioneered by the ibm natural language group and later pursued by for example schabes roth and osborne jelinek et al magerman collins and charniak this approach decouples the issue of wellformedness of an input string from the problem of assigning a structure to itthese systems attempt to assign some structure to every input stringthe rules to assign a structure to an input are extracted automatically from handannotated parses of large corpora which are then subjected to smoothing to obtain reasonable coverage of the languagethe resultant set of rules are not linguistically transparent and are not easily modifiablelexical and structural ambiguity is resolved using probability information that is encoded in the rulesthis allows the system to assign the mostlikely structure to each inputthe output of these systems consists of constituent analysis the degree of detail of which is dependent on the detail of annotation present in the treebank that is used to train the systemthere are also parsers that use probabilistic information in conjunction with handcrafted grammars for example black et al nagao alshawi and carter and srinivas doran and kulick in these cases the probabilistic information is primarily used to rank the parses produced by the parser and not so much for the purpose of robustness of the systemlexicalized grammars are particularly wellsuited for the specification of natural language grammarsthe lexicon plays a central role in linguistic formalisms such as lfg gpsg hpsg ccg lexicon grammar ltag link grammar and some version of gb parsing lexical semantics and machine translation to name a few areas have all benefited from lexicalizatiortlexicalizatiort provides a clean interface for combining the syntactic and semantic information in the lexiconwe discuss the merits of lexicalization and other related issues in the context of partial parsing and briefly discuss featurebased lexicalized tree adjoining grammars as a representative of the class of lexicalized grammarsfeaturebased lexicalized tree adjoining grammar is a treerewriting grammar formalism unlike contextfree grammars and head grammars which are stringrewriting formalismsthe primitive elements of fbltags are called elementary treeseach elementary tree is associated with at least one lexical item on its frontierthe lexical item associated with an elementary tree is called the anchor of that treean elementary tree serves as a complex description of the anchor and provides a domain of locality over which the anchor can specify syntactic and semantic constraintselementary trees are of two kinds initial trees and auxiliary treesin an fbltag grammar for natural language initial trees are phrase structure trees of simple sentences containing no recursion while recursive structures are represented by auxiliary treeselementary trees are combined by substitution and 
adjunction operationsthe result of combining the elementary trees is the derived tree and the process of combining the elementary trees to yield a parse of the sentence is represented by the derivation treethe derivation tree can also be interpreted as a dependency tree with unlabeled arcs between words of the sentencea more detailed discussion of ltags with an example and some of the key properties of elementary trees is presented in appendix apartofspeech disambiguation techniques are often used prior to parsing to eliminate the partofspeech ambiguity the pos taggers are all local in the sense that they use information from a limited context in deciding which tag to choose for each wordas is well known these taggers are quite successfulin a lexicalized grammar such as the lexicalized tree adjoining grammar each lexical item is associated with at least one elementary structure the elementary structures of ltag localize dependencies including longdistance dependencies by requiring that all and only the dependent elements be present within the same structureas a result of this localization a lexical item may be associated with more than one elementary structurewe will call these elementary structures supertags in order to distinguish them from the standard partofspeech tagsnote that even when a word has a unique standard part of speech say a verb there will usually be more than one supertag associated with this wordsince there is only one supertag for each word when the parse is complete an ltag parser needs to search a large space of supertags to select the right one for each word before combining them for the parse of a sentenceit is this problem of supertag disambiguation that we address in this papersince ltags are lexicalized we are presented with a novel opportunity to eliminate or substantially reduce the supertag assignment ambiguity by using local information such as local lexical dependencies prior to parsingas in standard partofspeech disambiguation we can use local statistical information in the form of ngram models based on the distribution of supertags in an ltag parsed corpusmoreover since the supertags encode dependency information we can also use information about the distribution of distances between a given supertag and its dependent supertagsnote that as in standard partofspeech disambiguation supertag disambiguation could have been done by a parserhowever carrying out partofspeech disambiguation prior to parsing makes the job of the parser much easier and therefore speeds it upsupertag disambiguation reduces the work of the parser even furtherafter supertag disambiguation we would have effectively completed the parse and the parser need quotonlyquot combine the individual structures hence the term quotalmost parsingquot this method can also be used to associate a structure to sentence fragments and in cases where the supertag sequence after disambiguation may not combine into a single structureltags by virtue of possessing the extended domain of locality property associate with each lexical item one elementary tree for each syntactic environment that an noun phrase companies have not been profitable the lexical item may appear inas a result each lexical item is invariably associated with more than one elementary treewe call the elementary structures associated with each lexical item super partsofspeech or supertags3 figure 1 illustrates a few elementary trees associated with each word of the sentence the purchase price includes two ancillary companiestable 1 provides an 
example context in which each supertag shown in figure 1 would be usedthe example in figure 2 illustrates the initial set of supertags assigned to each word of the sentence the purchase price includes two ancillary companiesthe order of the supertags for each lexical item in the example is not relevantfigure 2 also shows the final supertag sequence assigned by the supertagger which picks the best supertag sequence using statistical information about individual supertags and their dependencies on other supertagsthe chosen supertags are combined to derive a parsewithout the supertagger the parser would have to process combinations of the entire set of trees with it the parser need only process combinations of 7 treesthe structure of the supertag can be best seen as providing admissibility constraints on syntactic environments in which it may be usedsome of these constraints can be checked locallythe following are a few constraints that can be used to determine the admissibility of a syntactic environment for a supertag4 a selection of the supertags associated with each word of the sentence the purchase price includes two ancillary companiessupertags with the builtin lexical item by that represent passive constructions are typically eliminated from being considered during the parse of an active sentencemore generally these constraints can be used to eliminate supertags that cannot have their features satisfied in the context of the input stringan example of this is the elimination of supertag that requires a wh np when the input string does not contain whwordstable 2 indicates the decrease in supertag ambiguity for 2012 wsj sentences by using the structural constraints relative to the supertag ambiguity without the structural constraints5 these filters prove to be very effective in reducing supertag ambiguitythe graph in figure 3 plots the number of supertags at the sentence level for sentences of length 2 to 50 words with and without the filtersas can be seen from the graph the supertag ambiguity is significantly lower when the filters are usedthe graph in figure 4 shows the percentage drop in supertag ambiguity due to filtering for sentences of length 2 to 50 wordsas can be seen the average reduction in supertag ambiguity is about 50this means that given a sentence close to 50 of the supertags can be eliminated even before parsing begins by just using structural constraints of the supertagsthis reduction in supertag ambiguity speeds up the parser significantlyin fact the supertag comparison of number of supertags with and without filtering for sentences of length 2 to 50 words ambiguity in xtag system is so large that the parser is prohibitively slow without the use of these filterstable 3 tabulates the reduction of supertag ambiguity due to the filters against various parts of speech6 verbs in all their forms contribute most to the problem of supertag ambiguity and most of the supertag ambiguity for verbs is due to light verbs and verb particlesthe filters are very effective in eliminating over 50 of the verb anchored supertagseven though structural constraints are effective in reducing supertag ambiguity the search space for the parser is still sufficiently largein the next few sections we present stochastic and rulebased approaches to supertag disambiguationpercentage drop in the number of supertags with and without filtering for sentences of length 2 to 50 wordsbefore proceeding to discuss the various models for supertag disambiguation we would like to trace the time course of 
development of this workwe do this not only to show the improvements made to the early work reported in our 1994 paper but also to explain the rationale for choosing certain models of supertag disambiguation over otherswe summarize the early work in the following subsectionas reported in joshi and srinivas we experimented with a trigram model as well as the dependency model for supertag disambiguationthe trigram model that was trained on pairs instead of pairs collected from the ltag derivations of 5000 wsj sentences and tested on 100 wsj sentences produced a correct supertag for 68 of the words in the test setwe have since significantly improved the performance of the trigram model by using a larger training set and incorporating smoothing techniqueswe present a detailed discussion of the model and its performance on a range of corpora in section 65in section 62 we briefly mention the dependency model of supertagging that was reported in the earlier workin an ngram model for disambiguating supertags dependencies between supertags that appear beyond the nword window cannot be incorporatedthis limitation can be overcome if no a priori bound is set on the size of the window but instead a bangalore and joshi supertagging probability distribution of the distances of the dependent supertags for each supertag is maintainedwe define dependency between supertags in the obvious way a supertag is dependent on another supertag if the former substitutes or adjoins into the latterthus the substitution and the foot nodes of a supertag can be seen as specifying dependency requirements of the supertagthe probability with which a supertag depends on another supertag is collected from a corpus of sentences annotated with derivation structuresgiven a set of supertags for each word and the dependency information between pairs of supertags the objective of the dependency model is to compute the most likely dependency linkage that spans the entire stringthe result of producing the dependency linkage is a sequence of supertags one for each word of the sentence along with the dependency informationsince first reported in joshi and srinivas we have not continued experiments using this model of supertagging primarily for two reasonswe are restrained by the lack of a large corpus of ltag parsed derivation structures that is needed to reliably estimate the various parameters of this modelwe are currently in the process of collecting a large ltag parsed wsj corpus with each sentence annotated with the correct derivationa second reason for the disuse of the dependency model for supertagging is that the objective of supertagging is to see how far local techniques can be used to disambiguate supertags even before parsing beginsthe dependency model in contrast is too much like full parsing and is contrary to the spirit of supertaggingwe have improved the performance of the trigram model by incorporating smoothing techniques into the model and training the model on a larger training corpuswe have also proposed some new models for supertag disambiguationin this section we discuss these developments in detailtwo sets of data are used for training and testing the models for supertag disambiguationthe first set has been collected by parsing the wall street journal ibm manual and atis corpora using the widecoverage english grammar being developed as part of the xtag system the correct derivation from all the derivations produced by the xtag system was picked for each sentence from these corporathe second and larger data set was 
collected by converting the penn treebank parses of the wall street journal sentencesthe objective was to associate each lexical item of a sentence with a supertag given the phrase structure parse of the sentencethis process involved a number of heuristics based on local tree contextsthe heuristics made use of information about the labels of a word dominating nodes labels of its siblings and siblings of its parentan example of the result of this conversion is shown in figure 5it must be noted that this conversion is not perfect and is correct only to a first order of approximation owing mostly to errors in conversion and lack of certain kinds of information such as distinction between adjunct and argument preposition phrases in the penn treebank parseseven though the converted supertag corpus can be refined further the corpus in its present form has proved to be an invaluable resource in improving the performance of the supertag models as is discussed in the following sectionsusing structural information to filter out supertags that cannot be used in any parse of the input string reduces the supertag ambiguity but obviously does not eliminate it completelyone method of disambiguating the supertags assigned to each word is to order the supertags by the lexical preference that the word has for themthe frequency with which a certain supertag is associated with a word is a direct measure of its lexical preference for that supertagassociating frequencies with the supertags and using them to associate a particular supertag with a word is clearly the simplest means of disambiguating supertagstherefore a unigram model is given by where thus the most frequent supertag that a word is associated with in a training corpus is selected as the supertag for the word according to the unigram modelfor the words that do not appear in the training corpus we back off to the part of speech of the word and use the most frequent supertag associated with that part of speech as the supertag for the word the previously discussed two sets of datathe words are first assigned standard parts of speech using a conventional tagger and then are assigned supertags according to the unigram modela word in a sentence is considered correctly supertagged if it is assigned the same supertag as it is associated with in the correct parse of the sentencethe results of these experiments are tabulated in table 4although the performance of the unigram model for supertagging is significantly lower than the performance of the unigram model for partofspeech tagging it performed much better than expected considering the size of the supertag set is much larger than the size of partofspeech tag setone of the reasons for this high performance is that the most frequent supertag for the most frequent words determiners nouns and auxiliary verbsis the correct supertag most of the timealso backing off to the part of speech helps in supertagging unknown words which most often are nounsthe bulk of the errors committed by the unigram model is incorrectly tagged verbs prepositions and nouns we first explored the use of trigram model of supertag disambiguation in joshi and srinivas the trigram model was trained on pairs collected from the ltag derivations of 5000 wsj sentences and tested on 100 wsj sentencesit produced a correct supertag for 68 of the words in the test seta major drawback of this early work was that it used no lexical information in the supertagging process as the training material consisted of pairssince that early work we have 
improved the performance of the model by incorporating lexical information and sophisticated smoothing techniques as well as training on larger training setsin this section we present the details and the performance evaluation of this modelin a unigram model a word is always associated with the supertag that is most preferred by the word irrespective of the context in which the word appearsan alternate method that is sensitive to context is the ngram modelthe ngram model takes into account the contextual dependency probabilities between supertags within a window of n words in associating supertags to wordsthus the most probable supertag sequence for an nword sentence is given by argmaxtpr pr where ti is the supertag for word k to compute this using only local information we approximate assuming that the probability of a word depends only on its supertag and also use an ngram approximation the term pr is known as the contextual probability since it indicates the size of the context used in the model and the term pr is called the word emit probability since it is the probability of emitting the word w given the tag tithese probabilities are estimated using a corpus where each word is tagged with its correct supertagthe contextual probabilities were estimated using the relative frequency estimates of the contexts in the training corpusto estimate the probabilities for contexts that do not appear in the training corpus we used the goodturing discounting technique combined with katz back off model the idea here is to discount the frequencies of events that occur in the corpus by an amount related to their frequencies and utilize this discounted probability mass in the back off model to distribute to unseen eventsthus the goodturing discounting technique estimates the frequency of unseen events based on the distribution of the frequency of the counts of observed events in the corpusif r is the observed frequency of an event and n is the number of events with the observed frequency r and n is the total number of events then the probability of an unseen event is given by n1 n furthermore the frequencies of the observed events are adjusted so that the total probability of all events sums to onethe adjusted frequency for observed events r is computed as once the frequencies of the observed events are discounted and the frequencies for unseen events are estimated katz back off model is usedin this technique if the observed frequency of an sequence is zero then its probability is computed based on the observed frequency of an gram sequencethus where a and 13 are constants to ensure that the probabilities sum to onethe word emit probability for the pairs that appear in the training corpus is computed using the relative frequency estimates as shown in equation 7for the pairs that do not appear in the corpus the word emit probability is estimated as shown in equation 8some of the word features used in our implebangalore and joshi supertagging mentation include prefixes and suffixes of length less than or equal to three characters capitalization and digit featuresthe counts for the pairs for the words that do not appear in the corpus is estimated using the leavingoneout technique a token unk is associated with each supertag and its count nunk is estimated by where n1 is the number of words that are associated with the supertag tj that appear in the corpus exactly oncen is the frequency of the supertag tj and nunk is the estimated count of unk in 71the constant n is introduced so as to ensure that the 
probability is not greater than one especially for supertags that are sparsely represented in the corpuswe use word features similar to the ones used in weischedel et al such as capitalization hyphenation and endings of words for estimating the word emit probability of unknown words651 experiments and resultswe tested the performance of the trigram model on various domains such as the wall street journal the ibm manual corpus and the atis corpusfor the ibm manual corpus and the atis domains a supertag annotated corpus was collected using the parses of the xtag system and selecting the correct analysis for each sentencethe corpus was then randomly split into training and test materialsupertag performance is measured as the percentage of words that are correctly supertagged by a model when compared with the key for the words in the test corpus data from the xtag parses and from the conversion of the penn treebank parses to evaluate the performance of the trigram modeltable 5 shows the performance on the two sets of datathe first data set data collected from the xtag parses was split into 8000 words of training and 3000 words of test materialthe data collected from converting the penn treebank was used in two experiments differing in the size of the training corpus200000 words and 1000000 words9and tested on 47000 wordsa total of 300 different supertags were used in these experiments mance of the trigram supertagger on the ibm manual corpus a set of 14000 words correctly supertagged was used as the training corpus and a set of 1000 words was used as a test corpusthe performance of the supertagger on this corpus is shown in table 6performance on the atis corpus was evaluated using a set of 1500 words correctly supertagged as the training corpus and a set of 400 words as a test corpusthe performance of the supertagger on the atis corpus is also shown in table 6as expected the performance on the atis corpus is higher than that of the wsj and the ibm manual corpus despite the extremely small training corpusalso the performance of the ibm manual corpus is better than the wsj corpus when the size of the training corpus is taken into accountthe baseline for the atis domain is remarkably high due to the repetitive constructions and limited vocabulary in that domainthis is also true for the ibm manual corpus although to a lesser extentthe trigram model of supertagging is attractive for limited domains since it performs quite well with relatively insignificant amounts of training materialthe performance of the supertagger can be improved in an iterative fashion by using the supertagger to supertag larger amounts of training material which can be quickly handcorrected and used to train a betterperforming supertagger most to the performance of a pos tagger since the baseline performance of assigning the most likely pos for each word produces 91 accuracy contextual information contributes relatively a small amount towards the performance improving it from 91 to 9697 a 55 improvementin contrast contextual information has greater effect on the performance of the supertaggeras can be seen from the above experiments the baseline performance of the supertagger is about 77 and the performance improves to about 92 with the inclusion of contextual information an bangalore and joshi supertagging improvement of 195the relatively low baseline performance for the supertagger is a direct consequence of the fact that there are many more supertags per word than there are pos tagsfurther since many combinations of 
supertags are not possible contextual information has a larger effect on the performance of the supertaggerin an errordriven transformationbased tagger a set of patternaction templates that include predicates that test for features of words appearing in the context of interest are definedthese templates are then instantiated with the appropriate features to obtain transformation rulesthe effectiveness of a transformation rule to correct an error and the relative order of application of the rules are learned using a corpusthe learning procedure takes a gold corpus in which the words have been correctly annotated and a training corpus that is derived from the gold corpus by removing the annotationsthe objective in the learning phase is to learn the optimum ordering of rule applications so as to minimize the number of tag mismatches between the training and the reference corpus661 experiments and resultsa edtb model has been trained using templates defined on a threeword windowwe trained the templates on 200000 words and tested on 47000 words of the wsj corpusthe model performed at an accuracy of 90the edtb model provides a great deal of flexibility to integrate domainspecific and linguistic information into the modelhowever a major drawback of this approach is that the training procedure is extremely slow which prevented us from training on the 1000000 word corpus7supertagging before parsing the output of the supertagger an almost parse has been used in a variety of applications including information retrieval and information extraction text simplification and language modeling to illustrate that supertags provide an appropriate level of lexical description needed for most applicationsthe output of the supertagger has also been used as a front end to a lexicalized grammar parseras mentioned earlier a lexicalized grammar parser can be conceptualized to consist of two stages in the first stage the parser looks up the lexicon and selects all the supertags associated with each word of the sentence to be parsedin the second stage the parser searches the lattice of selected supertags in an attempt to combine them using substitution and adjunction operations so as to yield a derivation that spans the input stringat the end of the second stage the parser would not only have parsed the input but would have associated a small set of supertags with each wordthe supertagger can be used as a front end to a lexicalized grammar parser so as to prune the searchspace of the parser even before parsing beginsit should be clear that by reducing the number of supertags that are selected in the first stage the searchspace for the second stage can be reduced significantly and hence the parser can be made more efficientsupertag disambiguation techniques as discussed in the previous sections attempt to disambiguate the supertags selected in the first pass based on lexical preferences and local lexical dependencies so as to ideally select one supertag for each wordonce the supertagger selects the appropriate supertag for each word the second stage of the parser is needed only to combine the individual supertags to arrive at the parse of the inputtested on about 1300 wsj sentences with each word in the sentence correctly supertagged the ltag parser took approximately 4 seconds per sentence to yield a parse in contrast the same 1300 wsj sentences without the supertag annotation took nearly 120 seconds per sentence to yield a parsethus the parsing speedup gained by this integration is a factor of about 30in the xtag 
system we have integrated the trigram supertagger as a front end to an ltag parser to pick the appropriate supertag for each word even before parsing beginshowever a drawback of this approach is that the parser would fail completely if any word of the input is incorrectly tagged by the supertaggerthis problem could be circumvented to an extent by extending the supertagger to produce nbest supertags for each wordalthough this extension would increase the load on the parser it would certainly improve the chances of arriving at a parse for a sentencein fact table 7 presents the performance of the supertagger that selects at most the top three supertags for each wordthe optimum number of supertags to output to balance the success rate of the parser against the efficiency of the parser must be determined empiricallya more serious limitation of this approach is that it fails to parse illformed and extragrammatical strings such as those encountered in spoken utterances and unrestricted textsthis is due to the fact that the earleystyle ltag parser attempts to combine the supertags to construct a parse that spans the entire stringin cases where the supertag sequence for a string cannot be combined into a unified structure the parser fails completelyone possible extension to account for illformed and extragrammatical strings is to extend the earley parser to produce partial parses for the fragments whose supertags can be combinedan alternate method of computing dependency linkages robustly is presented in the next sectionsupertagging associates each word with a unique supertagto establish the dependency links among the words of the sentence we exploit the dependency requirements bangalore and joshi supertagging encoded in the supertagssubstitution nodes and foot nodes in supertags serve as slots that must be filled by the arguments of the anchor of the supertaga substitution slot of a supertag is filled by the complements of the anchor while the foot node of a supertag is filled by a word that is being modified by the supertagthese argument slots have a polarity value reflecting their orientation with respect to the anchor of the supertagalso associated with a supertag is a list of internal nodes that appear in the supertagusing the structural information coupled with the argument requirements of a supertag a simple heuristicbased linear time deterministic algorithm produces dependency linkages not necessarily spanning the entire sentencethe lda can produce a number of partial linkages since it is driven primarily by the need to satisfy local constraints without being driven to construct a single dependency linkage that spans the entire inputthis in fact contributes to the robustness of lda and promises to be a useful tool for parsing sentence fragments that are rampant in speech utterances as exemplified by the switchboard corpustested on section 20 of the wall street journal corpus which contained 47333 dependency links in the gold standard the lda trained on 200000 words produced 38480 dependency links correctly resulting in a recall score of 823also a total of 41009 dependency links were produced by the lda resulting in a precision score of 938a detailed evaluation of the lda is presented in srinivas although we have presented supertagging in the context of ltag it is applicable to other lexicalized grammar formalisms such as ccg hpsg and lfg we have implemented a broad coverage ccg grammar containing about 80 categories based on the xtag english grammarthese categories have been used to tag the 
same training and test corpora used in the supertagging experiments discussed in this paper and a supertagger to disambiguate the ccg categories has been developedwe are presently analyzing the performance of the supertagger using the ltag trees and the ccg categoriesthe idea of supertagging can also be applied to a grammar in hpsg formalism indirectly by compiling the hpsg grammar into an ltag grammar a more direct approach would be to tag words with feature structures that represent supertags for lfg the lexicalized subset of fragments used in the lfgdop model can be seen as supertagsan approach that is closely related to supertagging is the reductionist approach to parsing that is being carried out under the constraint grammar framework in this framework each word is associated with the set of possible functional tags that it may be assigned in the languagethis constitutes the lexiconthe grammar consists of a set of rules that eliminate functional tags for words based on the context of a sentenceparsing a sentence in this framework amounts to eliminating as many implausible functional tags as possible for each word given the context of the sentencethe resultant output structure might contain significant syntactic ambiguity which may not have been eliminated by the rule applications thus producing almost parsesthus the reductionist approach to parsing is similar to supertagging in that both view parsing as tagging with rich descriptionshowever the key difference is that the tagging is done in a probabilistic setting in the supertagging approach while it is rule based in the constraint grammar approachwe are currently developing supertaggers for other languagesin collaboration with anne abeille and mariehelene candito of the university of paris using their french tag grammar we have developed a supertagger for frenchwe are currently working on evaluating the performance of this supertaggeralso the annotated corpora necessary for training supertaggers for korean and chinese are under development at the university of pennsylvaniaa version of the supertagger trained on the wsj corpus is available under gnu public license from http wwwcisupennedu xtag swreleasehtmlin this paper we have presented a novel approach to robust parsing distinguished from the previous approaches to robust parsing by integrating the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniquesby associating rich descriptions that impose complex constraints in a local context we have been able to use local computational models for effective supertag disambiguationa trigram supertag disambiguation model trained on 1000000 pairs of the wall street journal corpus performs at an accuracy level of 922after disambiguation we have effectively completed the parse of the sentence creating an almost parse in that the parser need only combine the selected structures to arrive at a parse for the sentencewe have presented a lightweight dependency analyzer that takes the output of the supertagger and uses the dependency requirements of the supertags to produce a dependency linkage for a sentencethis method can also serve to parse sentence fragments in cases where the supertag sequence after disambiguation may not combine to form a single structurethis approach is applicable to all lexicalized grammar parsersfeaturebased lexicalized tree adjoining grammar is a treerewriting grammar formalism unlike contextfree grammars and head grammars which are stringrewriting formalismsfbltags trace 
their lineage to tree adjunct grammars which were first developed in joshi levy and takahashi and later extended to include unificationbased feature structures and lexicalization for a more recent and comprehensive reference see joshi and schabes the primitive elements of fbltags are called elementary treeseach elementary tree is associated with at least one lexical item on its frontierthe lexical item associated with an elementary tree is called the anchor of that treean elementary tree serves as a complex description of the anchor and provides a domain of locality over which the anchor can specify syntactic and semantic constraintselementary trees are of two kinds initial trees and auxiliary treesin an fbltag grammar for natural language initial trees are phrase structure trees of simple sentences containing no recursion while recursive structures are represented by auxiliary treesexamples of initial trees and auxiliary trees are shown in figure 6nodes on the frontier of initial trees are marked as substitution sites by a quotiquot while exactly one node on the frontier of an auxiliary tree whose label matches the label of the root of the tree is marked as a foot node by a quotquotthe other nodes on the frontier of an auxiliary tree are marked as substitution siteseach node of an elementary tree is associated with two feature structures elementary trees for the sentence the company is being acquired the top and the bottomthe bottom fs contains information relating to the subtree rooted at the node and the top fs contains information relating to the supertree at that node13 features may get their values from three different sources the derivation process from unification with features from trees that adjoin or substituteelementary trees are combined by substitution and adjunction operationssubstitution inserts elementary trees at the substitution nodes of other elementary treesfigure 7 shows two elementary trees and the tree resulting from the substitution of one tree into the otherin this operation a node marked for substitution in an elementary tree is replaced by another elementary tree whose root label matches the label of the nodethe top fs of the resulting node is the result of unification of the top features of the two original nodes while the bottom fs of the resulting node is simply the bottom features of the root node of the substituting treein an adjunction operation an auxiliary tree is inserted into an elementary treefigure 7 shows an auxiliary tree adjoining into an elementary tree and the result of the adjunctionthe root and foot nodes of the auxiliary tree must match the node label at which the auxiliary tree adjoinsthe node being adjoined to splits and its top fs unifies with the top fs of the root node of the auxiliary tree while its bottom fs unifies with the bottom fs of the foot node of the auxiliary treefigure 7 shows an auxiliary tree and an elementary tree and the tree resulting from an adjunction operationfor a parse to be wellformed the top and bottom fs at each node should be unified at the end of a parsethe result of combining the elementary trees shown in figure 6 is the derived tree shown in figure 8the process of combining the elementary trees to yield a parse of the sentence is represented by the derivation tree shown in figure 8the nodes of the derivation tree are the tree names that are anchored by the appropriate lexical itemsthe combining operation is indicated by the type of the arcs while the address of the operation is indicated as part of the node 
labelthe derivation tree can also be interpreted as a dependency tree with unlabeled arcs between words of the sentence as shown in figure 8a broadcoverage grammar system xtag has been implemented in the ltag formalismin this section we briefly discuss some aspects related to xtag for the sake of completenessa more detailed report on xtag can be found in xtaggroup the xtag system consists of a morphological analyzer a partofspeech tagger a widecoverage ltag english grammar a predictive lefttoright earleystyle parser for ltag and an xwindows interface for grammar development the input sentence is subjected to morphological analysis and is tagged with parts of speech before being sent to the parserthe parser retrieves the elementary trees that the words of the sentence anchor and combines them by adjunction and substitution operations to derive a parse of the sentencethe grammar of xtag has been used to parse sentences from atis ibm manual and wsj corpora the resulting xtag corpus contains sentences from these domains along with all the derivations for each sentencethe derivations provide in this section we define the key properties of ltags lexicalization extended domain of locality and factoring of recursion from the domain of dependency and discuss how these properties are realized in natural language grammars written in ltagsa more detailed discussion about these properties is presented in joshi kroch and joshi schabes abeille and joshi and joshi and schabes a grammar is lexicalized if it consists of this property proves to be linguistically crucial since it establishes a direct link between the lexicon and the syntactic structures defined in the grammarin fact in lexicalized grammars all we have is the lexicon which projects the elementary structures of each lexical item there is no independent grammarthe extended domain of locality property has two parts part of edl allows the anchor to impose syntactic and semantic constraints on its arguments directly since they appear in the same elementary structure that it anchorshence all elements that appear within one elementary structure are considered to be localthis property also defines how large an elementary structure in a grammar can befigure 9 shows trees for the following example sentences figure 9 shows the elementary tree anchored by seem that is used to derive a raising analysis for sentence 1notice that the elements appearing in the tree are only those that serve as arguments to the anchor and nothing elsein particular the subject np does not appear in the elementary tree for seem since it does not serve as an argument for seemfigure 9 shows the elementary tree anchored by the transitive verb hit in which both the subject np and object np are realized within the same elementary treeltag is distinguished from other grammar formalisms by possessing part of the edl propertyin ltags there is one elementary tree for every syntactic environment that the anchor may appear ineach elementary tree encodes the linear order of the arguments of the anchor in a particular syntactic environmentfor example a transitive verb such as hit is associated with both the elementary tree shown in figure 9 for a declarative transitive sentence such as sentence 2 and the elementary tree shown in figure 9 for an object extracted transitive sentence such as sentence 3notice that the object noun phrase is realized to the left of the subject noun phrase in the object extraction treeas a consequence of the fact that ltags possess the part of the edl property the 
derivation structures in ltags contain the information of a dependency structureanother aspect of edl is that the arguments of the anchor can be filled in any orderthis is possible because the elementary structures allocate a slot for each argument of the anchor in each syntactic environment that the anchor appears inthere can be many ways of constructing the elementary structures of a grammar so as to possess the edl propertyhowever by requiring that the constructed elementary structures be quotminimalquot the third property of ltags namely factoring of recursion from the domain of dependencies follows as a corollary of edlfactoring of recursion from the domain of dependencies recursion is factored away from the domain for the statement of dependenciesin ltags recursive constructs are represented as auxiliary treesthey combine with elementary trees by the operation of adjunctionelementary trees define the domain for stating dependencies such as agreement subcategorization and fillergap dependenciesauxiliary trees by adjunction to elementary trees account for the longdistance behavior of these dependenciesan additional advantage of a grammar possessing frd and edl properties is that feature structures in these grammars are extremely simplesince the recursion has been factored out of the domain of dependency and since the domain is large enough for agreement subcategorization and fillergap dependencies feature structures in such systems do not involve any recursionin fact they reduce to typed terms that can be combined by simple termlike unificationthis work was done when the first author was at the university of pennsylvaniait was partially supported by nsf grant nsfstc sbr 8920230 arpa grant n0001494 and aro grant daah0494g0426we would like to thank steven abney raman chandrasekar christine doran beth ann hockey mark liberman mitch marcus and mark steedman for useful comments and discussions which have helped shape this workwe also thank the reviewers for their insightful comments and suggestions to improve an earlier version of this paper
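To make the substitution and adjunction operations and the top/bottom feature-structure bookkeeping described above a little more concrete, the following Python sketch mocks up elementary trees as simple node objects. It is purely illustrative: the Node class, the flat-dictionary feature structures, and the naive unify function are simplifying assumptions of this sketch and do not reflect the XTAG implementation.

```python
# Minimal sketch of LTAG substitution and adjunction (illustrative only).
# Feature structures are simplified to flat dicts; FB-LTAG proper uses
# full top/bottom feature structures with unification at every node.

class Node:
    def __init__(self, label, children=None, subst=False, foot=False,
                 top=None, bottom=None):
        self.label = label
        self.children = children or []
        self.subst = subst          # frontier node marked for substitution
        self.foot = foot            # foot node of an auxiliary tree
        self.top = top or {}        # simplified "top" feature structure
        self.bottom = bottom or {}  # simplified "bottom" feature structure

def unify(fs1, fs2):
    """Naive unification of flat feature structures (fails on a clash)."""
    result = dict(fs1)
    for key, value in fs2.items():
        if key in result and result[key] != value:
            raise ValueError(f"feature clash on {key}")
        result[key] = value
    return result

def substitute(subst_node, initial_root):
    """Replace a substitution node by an initial tree with a matching root label."""
    assert subst_node.subst and subst_node.label == initial_root.label
    subst_node.subst = False
    subst_node.children = initial_root.children
    subst_node.top = unify(subst_node.top, initial_root.top)
    subst_node.bottom = initial_root.bottom

def adjoin(target_node, aux_root, aux_foot):
    """Adjoin an auxiliary tree at an internal node with a matching label."""
    assert target_node.label == aux_root.label == aux_foot.label
    # The target node "splits": its original subtree moves under the foot node.
    aux_foot.children = target_node.children
    aux_foot.bottom = unify(aux_foot.bottom, target_node.bottom)
    target_node.children = aux_root.children
    target_node.top = unify(target_node.top, aux_root.top)
    target_node.bottom = aux_root.bottom

# Toy example in the spirit of "the company is being acquired": an NP initial
# tree substitutes into the NP slot of the tree anchored by "acquired"; an
# auxiliary tree for the auxiliary verbs could then adjoin at the VP node.
np_slot = Node("NP", subst=True)
acquired = Node("S", [np_slot, Node("VP", [Node("V", [Node("acquired")])])])
company = Node("NP", [Node("Det", [Node("the")]), Node("N", [Node("company")])])
substitute(np_slot, company)
```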
J99-2004
Supertagging: An Approach to Almost Parsing. In this paper we have proposed novel methods for robust parsing that integrate the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniques. Our thesis is that the computation of linguistic structure can be localized if lexical items are associated with rich descriptions (supertags) that impose complex constraints in a local context. The supertags are designed such that only those elements on which the lexical item imposes constraints appear within a given supertag. Further, each lexical item is associated with as many supertags as the number of different syntactic contexts in which the lexical item can appear. This makes the number of different descriptions for each lexical item much larger than when the descriptions are less complex, thus increasing the local ambiguity for a parser. But this local ambiguity can be resolved by using statistical distributions of supertag co-occurrences collected from a corpus of parses. We have explored these ideas in the context of the lexicalized tree-adjoining grammar (LTAG) framework. The supertags in LTAG combine both phrase structure information and dependency information in a single representation. Supertag disambiguation results in a representation that is effectively a parse, and the parser need only combine the individual supertags. This method of parsing can also be used to parse sentence fragments, such as in spoken utterances, where the disambiguated supertag sequence may not combine into a single structure. We indicate that correct disambiguation with supertagging, i.e., assignment of lexical entries before parsing, enables effective LTAG parsing.
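As a rough illustration of the kind of supertag disambiguation summarized above, the following Python sketch decodes the most probable supertag sequence under a trigram model combined with word emit probabilities. The callables supertags_for, p_context, and p_emit are assumptions of this sketch and would be supplied from smoothed corpus counts (e.g., Good-Turing with Katz back-off); this is a minimal sketch, not the implementation evaluated in the paper.

```python
import math

def viterbi_supertag(words, supertags_for, p_context, p_emit):
    """Most probable supertag sequence under a trigram model:
        argmax_T  prod_i  P(t_i | t_{i-2}, t_{i-1}) * P(w_i | t_i)

    supertags_for(w) returns the candidate supertags for word w (e.g., after
    structural filtering); p_context(t, t2, t1) and p_emit(w, t) are smoothed
    probability functions provided by the caller.
    """
    BOS = "<s>"                        # boundary pseudo-supertag
    best = {(BOS, BOS): (0.0, [])}     # (t_{i-2}, t_{i-1}) -> (log prob, path)
    for w in words:
        new_best = {}
        for (t2, t1), (score, path) in best.items():
            for t in supertags_for(w):
                s = (score
                     + math.log(p_context(t, t2, t1))
                     + math.log(p_emit(w, t)))
                if (t1, t) not in new_best or s > new_best[(t1, t)][0]:
                    new_best[(t1, t)] = (s, path + [t])
        best = new_best
    return max(best.values(), key=lambda v: v[0])[1]
```

Keeping the k best entries per state instead of only the single best would yield the n-best supertags per word mentioned earlier, trading a larger load on the parser for better coverage.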
functional centering grounding referential coherence in information structure considering empirical evidence from a freewordorder language we propose a revision of the principles guiding the ordering of discourse entities in the forwardlooking center list within the centering model we claim that grammatical role criteria should be replaced by criteria that reflect the functional information structure of the utterances these new criteria are based on the distinction between hearerold and hearernew discourse entities we demonstrate that such a functional model of centering can be successfully applied to the analysis of several forms of referential text phenomena viz pronominal nominal and functional anaphora our methodological and empirical claims are substantiated by two evaluation studies in the first one we compare success rates for the resolution of pronominal anaphora that result from a grammaticalroledriven centering algorithm and from a functional centering algorithm the second study deals with a new costbased evaluation methodology for the assessment of centering data one which can be directly derived from and justified by the cognitive load premises of the centering model considering empirical evidence from a freewordorder language we propose a revision of the principles guiding the ordering of discourse entities in the forwardlooking center list within the centering modelwe claim that grammatical role criteria should be replaced by criteria that reflect the functional information structure of the utterancesthese new criteria are based on the distinction between hearerold and hearernew discourse entitieswe demonstrate that such a functional model of centering can be successfully applied to the analysis of several forms of referential text phenomena viz pronominal nominal and functional anaphoraour methodological and empirical claims are substantiated by two evaluation studiesin the first one we compare success rates for the resolution of pronominal anaphora that result from a grammaticalroledriven centering algorithm and from a functional centering algorithmthe second study deals with a new costbased evaluation methodology for the assessment of centering data one which can be directly derived from and justified by the cognitive load premises of the centering modelthe problem of establishing referential coherence in discourse can be rephrased as the problem of determining the proper antecedent of a given anaphoric expression in the current or the preceding utterance and the rendering of both as referentially identical this task can be approached in a very principled way by stating general constraints on the grammatical compatibility of the expressions involved linguists have devoted a lot of effort to identifying conclusive syntactic and semantic criteria to reach this goal eg for intrasentential anaphora within the binding theory part of the theory of government and binding or for intersentential anaphora within the context of the discourse representation theory unfortunately these frameworks fail to uniquely determine anaphoric antecedents in a variety of casesas a consequence referentially ambiguous interpretations have to be dealt with in those cases in which several alternatives fulfill all the required syntactic and semantic constraintsit seems that syntactic and semantic criteria constitute only necessary but by no means sufficient conditions for identifying the valid antecedent among several possible candidateshence one is left with a preferential choice problem that falls 
outside of the scope of those strict grammaticality constraints relating to the level of syntax or semantics onlyits solution requires considering patterns of language use and thus introduces the level of discourse context and further pragmatic factors as a complementary description levelcomputational linguists have recognized the need to account for referential ambiguities in discourse and have developed various theories centered around the notion of discourse focus in a seminal paper grosz and sidner wrapped up the results of their research and formulated a model in which three levels of discourse coherence are distinguishedattention intention and discourse segment structurewhile this paper gives a comprehensive picture of a complex yet not explicitly spelledout theory of discourse coherence the centering model marked a major step in clarifying the relationship between attentional states and discourse segment structuremore precisely the centering model accounts for the interactions between local coherence and preferential choices of referring expressionsit relates differences in coherence to varying demands on inferences as required by different types of referring expressions given a particular attentional state of the hearer in a discourse setting the claim is made then that the lower the inference load put on the hearer the more coherent the underlying discourse appearsthe centering model as formulated by grosz joshi and weinstein refines the structure of quotcentersquot of discourse which are conceived as the representational device for the attentional state at the local level of discoursethey distinguish two basic types of centers which can be assigned to each utterance youa single backwardlooking center cb and a partially ordered set of discourse entities the forwardlooking centers cfthe ordering on cf is relevant for determining the cbit can be viewed as a salience ranking that reflects the assumption that the higher the ranking of a discourse entity in cf the more likely it will be mentioned again in the immediately following utterancethus given an adequate ordering of the discourse entities in cf the costs of computations necessary to establish local coherence are minimizedgiven that the ordering on the cf list is crucial for determining the cb it is no surprise that there has been much discussion among researchers about the ranking criteria appropriate for different languagesin fact walker iida and cote hypothesize that the cf ranking criteria are the only languagedependent factors within the centering modelthough evidence for many additional criteria for the cf ranking have been brought forward in the literature to some extent consensus has emerged that grammatical roles play a major role in making ranking decisions our own work on the centering model brings in evidence from german a freewordorder language in which grammatical role information is far less predictive of the organization of centers than for fixedwordorder languages such as englishin establishing proper referential relations we found the functional information structure of the utterances to be much more relevantby this we mean indicators of whether or not a discourse entity in the current utterance refers to another discourse entity already introduced by previous utterances in the discourseborrowing terminology from prince an entity that does refer to another discourse entity already introduced is called discourseold or hearerold while an entity that does not refer to another discourse entity is called 
discoursenew or hearernewbased on evidence from empirical studies in which we considered german as well as english texts from different domains and genres we make three contributions to the centering approachthe first the introduction of functional notions of information structure into the centering model is purely methodological in nature and concerns the centering approach as a theory of local coherencethe second deals with an empirical issue in that we demonstrate how a functional model of centering can be successfully applied to the analysis of different forms of anaphoric text phenomena namely pronominal nominal and functional anaphorafinally we propose a new evaluation methodology for centering data in terms of a costbased evaluation approach that can be directly derived from and justified by the cognitive load premises of the centering modelat the methodological level we develop arguments that grammatical role criteria should be replaced by functional role criteria since they seem to more adequately account for the ordering of discourse entities in the cf listin section 4 we elaborate on particular information structure criteria underlying such a functional center orderingwe also make a second more general methodological claim for which we have gathered some preliminary though still not conclusive evidencebased on a reevaluation of centering analyses of some challenging language data that can be found in the literature on centering we will argue that exchanging grammatical for functional criteria might also be a reasonable strategy for fixedwordorder languageswhat makes this proposal so attractive is the obvious gain in the generality of the modelgiven a functional framework fixed and freewordorder languages might be accounted for by the same ordering principlesthe second major contribution of this paper is related to the unified treatment of different text coherence phenomenait consists of an equally balanced treatment of intersentential nominal anaphora and inferables the latter phenomenon is usually only sketchily dealt with in the centering literature eg by asserting that the entity in question quotis realized but not directly realizedquot furthermore the distinction between these two kinds of realization is not part of the centering mechanisms but delegated to the underlying semantic theorywe will develop arguments for how to discern inferable discourse entities and relate them properly to their antecedent at the center levelthe ordering constraints we supply account for all of the types of anaphora mentioned above including nominal anaphora this claim will be validated by a substantial body of empirical data in section 5our third contribution relates to the way the results of centeringbased anaphora resolution are usually evaluatedbasically we argue that rather than counting resolution rates for anaphora or comparing isolated transition types holding among head positions in the center listspreferred transition types stand for a high degree of local coherence while less preferred ones signal that the underlying discourse might lack coherenceone should consider adjacent transition pairs and annotate such pairs with the processing costs they incurthis way we define a dual theoryinternal metric of inference load by distinguishing between quotcheapquot and quotexpensivequot transition typesbased on this distinction some transition types receiving bad marks in isolation are ranked quotcheapquot when they occur in the appropriate context and vice versathe article is organized as 
follows in section 2 we introduce the different types of anaphora we consider subsequently viz pronominal nominal and functional anaphorawe then turn to the proposed modification of the centering modelafter a brief introduction into what we call the quotgrammaticalquot centering model in section 3 we turn in section 4 to our approach the functional model of centeringin section 5 we present the methodological framework and the empirical data from two evaluation studies we carried outin section 6 we relate our work to alternative approaches dealing with local text coherencein section 7 we discuss some remaining unsolved problemsin this paper we consider anaphora as a textual phenomenon only and deal with anaphoric relations that hold between adjacent utterances 2 text phenomena are a challenging issue for the design of a text parser for any textunderstanding system since recognition facilities that are imperfect or altogether lacking result in referentially incomplete invalid or incohesive text knowledge representation structures incomplete knowledge structures emerge when references to already established discourse entities are simply not recognized as in the case of conceptually neutral pronominal anaphora cospecifying with 316lt a particular notebook introduced in example invalid knowledge structures emerge when each entity that has a different denotation at the text surface is also treated as a formally distinct item at the level of text knowledge representation although they all refer literally to the same entitythese false referential descriptions result from unresolved nominal anaphora cospecifies with 316lt in finally incohesive or artificially fragmented knowledge structures emerge when entities that are linked by various conceptual relations at the knowledge level occur in a text such that an implicit reference to these relations can be made without the need for explicit signaling at the text surface levelcorresponding referential relations cannot be established at the text representation level since these inferables remain unsolved and respectivelythe linking conceptual relation between these two discourse elements has to be inferred in order to make it explicit at the level of text knowledge representation structures note an interesting asymmetric relationship between these three types of anaphorapronominal anaphora are constrained by morphosyntactic and grammatical agreement criteria between the pronoun and the antecedent and no conceptual constraints applynominal anaphora are only constrained by number compatibility between the anaphoric expression and the antecedent while at the conceptual level the anaphoric expression is related to its antecedent in terms of a conceptual generalization relationfinally no grammatical constraints apply to inferables while conceptual constraints typically require a nongeneralization relation to hold between the inferable and its antecedentof course contextual conceptual constraints are introduced for both nominal and pronominal anaphora by sortal requirements set up eg by the case roles of the main verblet us illustrate these different types of phenomena by considering the following text fragment the status of the rechargeable battery celligennom is to the userldat signalledthe status of the rechargeable battery cell is signalled to the user c ca30 minuten vor der entleerung beginnt der rechner 5 sekunden zu piepenapproximately 30 minutes before discharge starts the nmaanisc computer for 5 seconds to beepapproximately 30 minutes before 
discharge the computer beeps for 5 seconds d 5 minuten bevor er sich ausschaltet fangt die lowbatteryled an zu blinken5 minutes before itnnloag itself turns off begins the lowbatterylednom to flash5 minutes before it turns off the lowbatteryled begins to flashcommon to all the varieties of anaphora we discuss is the search for the proper antecedent in previous utterances the correct determination of which is considered to be the task of the centering mechanismthe kinds of anaphora we treat can be distinguished however in terms of the criteria being evaluated for referentialityin the case of inferables the missing conceptual link must be inferred in order to establish local coherence between the utterances involvedin the surface form of utterance the information that akkus rechargeable battery cell links up with 316lt is missing while due to obvious conceptual constraints it cannot link up with reservebatteriepack for examplethe underlying relation can only be made explicit if conceptual knowledge about the domain viz the relation partof between the concepts rechargebatterycell and 316lt is available in the case of nominal anaphors a conceptual specialization relation has to be determined between the specific antecedent and the more general anaphoric expression for example between 316lt and rechner computer in and respectivelyfinally the resolution of pronominal anaphors need not take conceptual constraints into account at all but is restricted to grammatical constraints as illustrated by the masculine gender of rechner computermascc and er wmasc in and respectivelycertainly the types of phenomena we discuss cover only a limited range of anaphorain particular we leave out the whole range of quantificational studies on anaphora deictic phenomena etc which significantly complicate matterswe return to these unresolved issues in section 7the centering model is intended to describe the relationship between local coherence and the use of referring expressionsthe model requires two constructs a single backwardlooking center and a list of forwardlooking centers as well as a few rules and constraints that govern the interpretation of centersit is assumed that discourses are composed of constituent segments each of which consists of a sequence of utteranceseach utterance li in a given discourse segment ds is assigned a list of forwardlooking centers cf and a unique backwardlooking center cbthe forwardlooking centers of you depend only on the discourse entities that constitute the ith utterance previous utterances provide no constraints on cfa ranking imposed on the elements of the cf reflects the assumption that the most highly ranked element of cf the preferred center cp will most likely be the cbthe most highly ranked element of cf that is finally realized in uii is the actual cbsince in this paper we will not discuss the topics of global coherence and discourse macro segmentation we assume a priori that any centering data structure is assigned an utterance in a given discourse segment and simplify the notation of centers to cb and cfgrosz joshi and weinstein state that the items in the cf list have to be ranked according to a number of factors including grammatical role text position and lexical semanticsas far as their discussion of concrete english discourse phenomena is concerned they nevertheless restrict their ranking criteria to those solely based on grammatical roles which we repeat in table 1the centering model in addition defines transition relations across pairs of adjacent utterances 
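Before the transition relations are spelled out, the Cb/Cf bookkeeping just described can be sketched in a few lines of Python: rank the discourse entities of each utterance by grammatical role (with text position breaking ties), take the preferred center Cp to be the highest-ranked element of Cf, and take the Cb of the current utterance to be the highest-ranked element of the previous utterance's Cf list that is realized in it. The role inventory, the cf_list and cb helpers, and the toy entities below are assumptions of this sketch, not part of the model's definition.

```python
# Minimal sketch of grammatical-role-based Cf ranking and Cb computation.
# Entity identity ("realized in") is assumed to be resolved beforehand.

ROLE_RANK = {"subject": 0, "object": 1, "object2": 2, "other": 3}

def cf_list(utterance):
    """utterance: list of (entity, grammatical_role) pairs in text order."""
    ranked = sorted((ROLE_RANK.get(role, 3), pos, entity)
                    for pos, (entity, role) in enumerate(utterance))
    return [entity for _, _, entity in ranked]

def cb(cf_prev, entities_current):
    """Backward-looking center: the highest-ranked element of Cf(U_{i-1})
    that is realized in U_i; None, e.g., for a segment-initial utterance."""
    for entity in cf_prev:
        if entity in entities_current:
            return entity
    return None

# Toy example with entity identifiers already resolved.
u1 = [("sentry", "subject")]
u2 = [("mike", "subject"), ("sentry", "object")]
cf1, cf2 = cf_list(u1), cf_list(u2)
print(cf1)                              # ['sentry']
print(cb(cf1, {e for e, _ in u2}))      # 'sentry'  (the Cb of U2)
print(cf2[0])                           # 'mike'    (the Cp of U2)
```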
these transitions differ from each other according to whether backwardlooking centers of successive utterances are identical or not and if they are identical whether they match the most highly ranked element of the current forwardlooking center list the cp or notgrosz joshi and weinstein also define two rules on center movement and realization if any element of cf is realized by a pronoun in you1 then the cb must be realized by a pronoun alsosequences of continuation are to be preferred over sequences of retaining and sequences of retaining are to be preferred over sequences of shiftingrule 1 states that no element in an utterance can be realized by a pronoun unless the backwardlooking center is realized by a pronoun toothis rule is intended to capture one function of the use of pronominal anaphorsa pronoun in the cb signals to the hearer that the speaker is continuing to refer to the same discourserule 2 should reflect the intuition that a pair of utterances that have the same theme is more coherent than another pair of utterances with more than one themethe theory claims above all that to the extent that a discourse adheres to these rules and constraints its local coherence will increase and the inference load placed upon the hearer will decreasethe basic unit for which the centering data structures are generated is the utterance yousince grosz joshi and weinstein and brennan friedman and pollard do not give a reasonable definition of utterance we follow kameyama method for dividing a sentence into several centerupdating units her intrasentential centering mechanisms operate at the clause levelwhile tensed clauses are defined as utterances on their own untensed clauses are processed with the main clause so that the cf list of the main clause contains the elements of the untensed embedded clausekameyama further distinguishes for tensed clauses between sequential and hierarchical centeringexcept for direct and reported speech nonreport complements and relative clauses all other types of tensed clauses build a chain of utterances at the same levelthough the centering model was not originally intended to be used as a blueprint for anaphora resolution several applications tackling this problem have made use ofthe model neverthelessone interpretation is due to brennan friedman and pollard who utilize rule 2 for computing preferences for antecedents of pronouns in this section we will specify a simple algorithm that uses the cf list directly for providing preferences for the antecedents of pronounsthe algorithm consists of two steps which are triggered independentlywe may illustrate this algorithm by referring to the text fragment in example 6 athe sentry was not deadtable 4 gives the centering analysis for this text fragment using the algorithm from table 37 since is the first sentence in this fragment it has no cbin and in the discourse entity sentry is referred to by the personal pronoun hesince we assume a cf ranking by grammatical roles in this example sentry is ranked highest in these sentences in the discourse entity mike is introduced by a proper name in subject positionthe pronoun him is resolved to the most highly ranked element of cf namely sentrysince mike occupies the subject position it is ranked higher in the cf than sentrytherefore the pronoun he in can be resolved correctly to mikethis example not only illustrates anaphora resolution using the basic algorithm from table 3 but also incorporates the application of rule 1 of the centering model contains the pronoun him which is 
the cb of this utterancein the cb is also realized as a pronoun while sentry is realized by the definite noun phrase the man which is allowed by rule 1the centering algorithm described by brennan friedman and pollard interprets the centering model in a certain way and applies it to the resolution of pronounsthe most obvious difference between grosz joshi and transition types according to bfpweinstein and brennan friedman and pollard is that the latter use two shift transitions instead of only one smoothshift8 requires the cb to equal cp while roughshift requires inequality brennan friedman and pollard also allow the cb to remain undefinedbrennan friedman and pollard extend the ordering constraints in cf in the following way quotwe rank the items in cf by obliqueness of grammatical relations of the subcategorized functions of the main verb that is first the subject object and object2 followed by other subcategorized functions and finally adjunctsquot in order to apply the centering model to pronoun resolution they use rule 2 in making predictions for pronominal reference and redefine the rules as follows if some element of cf is realized as a pronoun in you then so is cb figurative names were introduced by walker iida and cote bfpalgorithmtransition states are orderedcontinue is preferred to retain is preferred to smoothshift is preferred to roughshifttheir algorithm consists of three basic steps 9 in order to illustrate this algorithm we use example from above and supply the corresponding cbcf data in table 7let us focus on the interpretation of utterance where the centering data diverges when one compares the basic and the bfp algorithmsafter step 2 the algorithm has produced two readings which are rated by the corresponding transitions in step 3since smoothshift is preferred over roughshift the pronoun he is resolved to mike the highestranked element of cfalso rule 1 would be violated in the rejected readingthe crucial point underlying functional centering is to relate the ranking of the forwardlooking centers and the information structure of the corresponding utteranceshence a proper correspondence relation between the basic centering data structures and the relevant functional notions has to be established and formally rephrased in terms of the centering modelin this section we first discuss two studies in which the information structure of utterances is already integrated into the centering model using these proposals as a point of departure we shall develop our own proposalfunctional centering as far as the centering model is concerned the first account involving information structure criteria was given by kameyama and further refined by walker iida and cote in their study on the use of zero pronouns and topic markers in japanesethis led them to augment the grammatical ranking conditions for the forwardlooking centers by additional functional notionsa deeper consideration of information structure principles and their relation to the centering model has been proposed in two studies concerned with the analysis of german and turkish discourserambow was the first to apply the centering methodology to german aiming at the description of information structure aspects underlying scrambling and topicalizationas a side effect he used centering to define the utterance theme and rheme in the sense of the functional sentence perspective viewed from this perspective the themerhemehierarchy of utterance 11 is determined by the cfelements of ui that are contained in cf are less rhematic than those 
not contained in cfhe then concludes that the cb must be the theme of the current utterancerambow does not exploit the information structure of utterances to determine the cf ranking but formulates it on the basis of linear textual precedence among the relevant discourse entitiesin order to analyze turkish texts hoffman distinguishes between the information structure of utterances and centering since both constructs are assigned different functions for text understandinga hearer exploits the information structure of an utterance to update his discourse model and he applies the centering constraints in order to connect the current utterance to the previous discoursehoffman describes the information structure of an utterance in terms of topic and comment the comment is split again into focus and ground based on previous work about turkish hoffman argues that in this language the sentenceinitial position corresponds to the topic the position that immediately precedes the verb yields the focus and the remainder of the sentence is to be considered the groundfurthermore hoffman relates this notion of information structure of utterances to centering claiming that the topic corresponds to the cb in most caseswith the exception of segmentinitial utterances which do not have a cbhoffman does not say anything about the relation between information structure and the ranking of the cf listin her approach this ranking is achieved by thematic roles both rambow as well as hoffman argue for a correlation between the information structure of utterances and centeringboth of them find a correspondence between the cb and the theme or the topic of an utterancethey refrain however from establishing a strong link between the information structure and centering as we suggest in our model one that mirrors the influence of information structure in the way the forwardlooking centers are actually rankedgrosz joshi and weinstein admit that several factors may have an influence on the ranking of the cf but limit their exposition to the exploitation of grammatical roles onlywe diverge from this proposal and claim that at least for languages with relatively free word order the functional information structure of the utterance is crucial for the ranking of discourse entities in the cf listoriginally in strube and hahn we defined the cf ranking criteria in terms of contextboundednessin this paper we redefine the functional cf ranking criteria by making reference to prince work on the assumed familiarity of discourse entities and information status the term contextbound in strube and hahn corresponds to the term evoked used by princewe briefly list the major claims of our approach to centeringin the following sections we elaborate on these claims in particular the ranking of the forwardlooking centersin contrast to the bfp algorithm the model of functional centering requires neither a backwardlooking center nor transitions nor transition ranking criteria for anaphora resolutionfor text interpretation at least functional centering also makes no commitments to further constraints and rulesin this section we introduce the functional cf ranking criteriawe first describe a basic version which is valid for a wide range of text genres in which pronominal reference is the predominant text phenomenonthis is the type of discourse to which centering was mainly applied in previous approaches we then describe the extended version of the functional cf ranking constraintsthe two versions differ with respect to the incorporation of 
inferables in the second version and hence with respect to the requirements 10 in strube and hahn we assumed that the information status of a discourse entity has the main impact on its saliencein particular evoked discourse entities were ranked higher in the cf list than brandnew discourse entities we also restricted the category of the most salient discourse entities to evoked discourse entitiesin this article we extend this category to hearerold discourse entities which includes besides evoked discourse entities unused ones information status and familiarity relating to the availability of world knowledge which is needed to properly account for inferablesthe extended version assumes a detailed treatment of a particular subset of inferables socalled functional anaphora we claim that the extended version of ranking constraints is necessary to analyze texts from certain genres eg texts from technical or medical domainsin these areas pronouns are used rather infrequently while functional anaphors are the major text phenomena to achieve local coherence431 basic cf rankingusually the cf ranking is represented by an ordering relation on a single set of elements eg grammatical relations we use a layered representation for our criteriafor the basic cf ranking criteria we distinguish between two different sets of expressions hearerold discourse entities in li and hearernew discourse entities in you these sets can be further split into the elements of prince familiarity scalethe set of hearerold discourse entities consists of evoked and unused discourse entities while the set of hearernew discourse entities consists of brandnew discourse entitiesfor the basic cf ranking criteria it is sufficient to assign inferable containing inferable and anchored brandnew discourse entities to the set of hearernew discourse entities 11 see figure 2 for an illustration of prince familiarity scale and its relation to the two setsnote that the elements of each set are indistinguishable with respect to their information statusevoked and unused discourse entities for example have the same information status because they belong to the set of hearerold discourse entitiesso the basic cf ranking in figure 2 boils down to the preference of old discourse entities over new onesfor an operationalization of prince terms we state that evoked discourse entities are simply cospecifying expressions ie pronominal and nominal anaphora relative pronouns previously mentioned proper names etcunused discourse entities are proper names and titlesin texts brandnew proper names are usually accompanied by a relative clause or an appositive that relates them to the hearer knowledgethe corresponding discourse entity is evoked only after this elaborationwhenever these linguistic devices are missing we treat proper names as unusedin the following we give some examples of evoked unused and brandnew 11 quoting prince quotinferrables are like hearernew entities in that the hearer is not expected to already have in hisher head the entity in questionquot 12 for examples of brandnew proper names and how they are introduced see for example the beginning of articles in the quotobituariesquot section of the new york times discourse entities though in naturally occurring texts these phenomena rarely show up unadulteratedthe remaining categories will be explained subsequentlyexample 3 in example buildings is introduced as a discoursenew discourse entity which is brandnew in the definite np the buildings cospecifies the discourse entity from hence 
buildings in is evoked just as is they in certain proper names are assumed to be known by any hearertherefore these proper names need no further explanationwinnie madikizela mandela in example is unused ie it is discoursenew but heareroldother proper names have to be introduced because they are discoursenew and hearernewin example marianne kador is introduced by means of a lengthy appositive that relates the brandnew proper name to the knowledge of the hearerin particular the noun phrase the apartment buildings is discourseold a defiant winnie madikizela mandelayou testified for more than 10 hours today dismissing all evidence that quothe was an undervalued person all his lifequot said marianne kador a social worker for selthelp community services which operates the apartment buildings in queensin table 8 we define various sets which are used for the specification of the cf ranking criteria in table 9we distinguish between two different sets of discourse entities hearerold discourse entities and hearernew discourse entities for any two discourse entities and with x and y denoting the linguistic surface expression of those entities as they occur in the discourse and posx and posy indicating their respective text position posx 0 posy in table 9 we define the basic ordering constraints on elements in the forwardlooking centers cffor any utterance you the ordering of discourse entities in the cf that can be derived from the above definitions and the ordering constraints to are denoted by the relation ordering constraint characterizes the basic relation for the overall ranking of the elements in the cfaccordingly any hearerold expression in utterance li is given the highest preference as a potential antecedent for an anaphoric expression in u11any hearernew expression is ranked below hearerold expressionsordering constraint captures the ordering for the sets old or new when they contain elements of the same typein this case the elements of each set are ranked according to their text position432 extended cf rankingwhile the basic cf ranking criteria are sufficient for texts with a high proportion of pronouns and nominal anaphora it is necessary to refine the ranking criteria in order to deal with expository texts eg test reports discharge summariesthese texts usually contain few pronouns and are characterized by a large number of inferrables which are often the major glue in achieving local coherencein order to accommodate the centering model to texts from these genres we distinguish a third set of expressions mediated discourse entities in you on prince familiarity scale the set of hearerold discourse entities remains the same as before ie it consists of evoked and unused discourse entities while the set of hearernew discourse entities now consists only of brandnew discourse entitiesinferable containing inferable and anchored brandnew discourse entities which make up the set of mediated discourse entities have a status between hearerold and hearernew discourse entitiessee figure 3 for prince familiarity scale and its relation to the three setsagain the elements of this set are indistinguishable with respect to their information statusfor instance inferable and anchored brandnew discourse entities have the same information status because they belong to the set of mediated discourse entitieshence the extended cf ranking depicted in figure 3 will prefer old discourse entities over mediated ones and mediated ones will be preferred over new oneswe assume that the difference between containing 
inferables and anchored brandnew discourse entities is negligibleprince 1992 she abandoned the second termtherefore we conflate them into the category of anchored brandnew discourse entitiesthese discourse entities require that the anchor modifies a brandnew head and that the anchor is either an evoked or an unused discourse entityin the following we give examples of inferrables and anchored brandnew discourse entitiesin example 6 the relation between the definite np the family and the context has to be inferred therefore the family belongs to the category inferable it is marked by definiteness but it is not anaphoric since there is no anaphoric antecedentthough inferables are often marked by definiteness it is possible that they are indefinite like an uncle in example with respect to inferables there exist only a few computational treatments all of which are limited in scopewe here restrict inferables to the particular subset defined by hahn markert and strube which we call functional anaphora in the following we will limit our discussion of inferables to those which figure as functional anaphorsin table 10 we define the sets needed for the specification of the extended cf ranking criteria in table 11we distinguish between three different sets of discourse entities hearerold discourse entities mediated discourse entities and hearernew discourse entities note that the antecedent of a functional anaphor is included in the set of hearerold discourse entitiessets of discourse entities for the extended cf rankingde the set of discourse entities in the set of evoked discourse entities in 111 the set of unused discourse entities in ul faante the set of antecedents of functional anaphors in 11 fa the set of functional anaphors in li bna the set of anchored brandnew discourse entities in li extended functional ranking constraints on the cf listfor any two discourse entities and with x and y denoting the linguistic surface expression of those entities as they occur in the discourse and posx and posy indicating their respective text position posx posy in table 11 we define the extended functional ordering constraints on elements in the forwardlooking centers cfin the following for any utterance 111 the ordering of discourse entities in the cf that can be derived from the above definitions and the ordering constraints to are denoted by the relation quotquotordering constraint characterizes the basic relation for the overall ranking of the elements in the cfaccordingly any hearerold expression in utterance you is given the highest preference as a potential antecedent for an anaphoric or functional anaphoric expression in 1111any mediated expression is ranked just below hearerold expressionsany hearernew expression is ranked lowestordering constraint fixes the ordering when the sets old med or new contain elements of the same typein these cases the elements of each set are ranked according to their text positionin table 12 we show the analysis of text fragment using the basic algorithm see table 3 with the basic functional cf ranking constraints the fragment starts with the evoked discourse entity sentry in the pronouns he in and are evoked while signs and tunic are brandnewwe assume mike in to be evoked too mike is the leftmost evoked discourse entity in hence ranked highest in the cf and the most preferred antecedent for the pronoun he in in this section we discuss two evaluation experiments on naturally occurring datawe first compare the success rate of the functional centering algorithm with that 
of the bfp algorithmthis evaluation uses the basic cf ranking constraints from table 9we then introduce a new costbased evaluation method which we use for comparing the extended cf ranking constraints from table 11 with several other approaches511 datain order to compare the functional centering algorithm with the bfp algorithm we analyzed a sample of english and german textsthe test set consisted of the begirmings of three short stories by ernest hemingway three articles from the new york times 16 the first three chapters of a novel by uwe johnson the first two chapters of a short story by heiner muller and seven articles from the frankfurter allgemeine zeitung 19 by a smallscale discourse annotation toolwe used the following guidelines for our evaluation we did not assume any world knowledge as part of the anaphora resolution processonly agreement criteria and sortal constraints were appliedwe did not account for false positives and error chains but marked the latter we use kameyama specifications for dealing with complex sentences following walker a discourse segment is defined as a paragraph unless its first sentence has a pronoun in subject position or a pronoun whose syntactic features do not match the syntactic features of any of the preceding sentenceinternal noun phrasesalso at the beginning of a segment anaphora resolution is preferentially performed within the same utteranceaccording to the preference for intersentential candidates in the original centering model we defined the following anaphora resolution strategy since clauses are short in general step 2 of the algorithm only rarely applies513 resultsthe results of our evaluation are given in table 14the first row gives the number of third person pronouns and possessive pronouns in the datathe upper part of the table shows the results for the bfp algorithm the lower part those for the func algorithmoverall the data are consistently in favor of the funcc algorithm though no significance judgments can be made the overall error rate of each approach is given in the rows labeled as quotwrongquotwe also tried to determine the major sources of errors and were able to distinguish three different typesone class of errors relates to the algorithm strategyin the case of the bfp algorithm the corresponding row also contains the number of ambiguous cases generated by this algorithm a second class of errors results from error chains mainly caused by the strategy of each approach or by ambiguities in the bfp algorithma third error class is caused by the intersentential specifications eg the correct antecedent is not accessible because it is realized in an embedded clause finally other errors were mainly caused by split antecedents reference to events and cataphora514 interpretationwhile the rate of errors caused by the specifications for complex sentences and by other reasons is almost identical there is a remarkable difference between the algorithms with respect to strategic errors and error chainsstrategic errors occur whenever the preference given by the algorithm under consideration leads to an errormost of the strategic errors implied by the func algorithm also show up as errors for the bfp algorithmwe interpret this finding as an indication that these errors are caused by a lack of semantic or world knowledgethe remaining errors of the bfp algorithm are caused by the strictly local definition of its criteria and because the bfp algorithm cannot deal with some particular configurations leading to ambiguitiesthe func algorithm has 
fewer error chains not only because it yields fewer strategic errors but also because it is more robust with respect to real textsan utterance u1 for instance which intervenes between lti_i and 111 without any relation to ui_i does not affect the preference decisions in ui2 for func although it does affect them for the bfp algorithm since the latter cannot assign the cbalso error chains are sometimes shorter in the func analysesexample illustrates how the local restrictions as defined by the original centering model and the bfp algorithm result in errors and lead to rather lengthy error chains the discourse entity sentence which is cospecified by the pronoun er in is the cbtherefore it is the most preferred antecedent for the pronoun ihn in which causes a strategic errorthis error in turn is the reason for a consequent error in because there are no semantic cues that enforce the correct interpretation ie the coreferentiality between ihn and giulianithe possible interruption of the error chain indicated by the alternative interpretation in is ruled out however by the preference for retain over roughshift transitions ader satz mit dem ruth messinger eine der fernsehdebatten i am burgermeisterwahlkampf in new york eroffnete wird der einzige sein der von ihr in erinnerung bleibt opened will the only one be which of her in memory remainsthe sentence with which ruth messinger opened one of the tv debates will be the only one which will be recollected of her bam nahezu sicheren wahlsieg des amtsinhabers rudolph giuliani am dienstag wird er nichts andernof the almost certain victory in the election of the officeholder rudolph giulianimascandaisucnct on tuesday will itl smuabjcect nothing alterof the officeholder rudolph giuliani almost certain victory in the election on tuesday it will alter nothing c alle zeitungen der stadt unterstiitzen ihnhe is supported by all newspapers of the cityhe is backed up by the unionsthe nonlocal definition of hearerold discourse entities enables the func algorithm to compute the correct antecedent for the pronoun ihn in preventing it from running into an error chain giuliani who was mentioned earlier in the text is the leftmost evoked discourse entity in and therefore the most preferred antecedent for the pronoun in though there is a pronoun of the same gender in we encountered problems with kameyama specifications for complex sentencesthe differences between clauses that are accessible from a higher syntactic level and clauses that are not could not be verified by our analysesalso her approach is sometimes too coarsegrained and sometimes too finegrained521 datathe test set for our second evaluation experiment consisted of three different text genres 15 product reviews from the information technology domain one article from the german news magazine der spiegel and the first two chapters of a short story by the german writer heiner mullertable 17 summarizes the total number of nominal anaphors functional anaphors utterances and words in the test set522 method given these sample texts we compared three approaches to the ranking of the cf a model whose ordering principles are based on grammatical role indicators only an quotintermediatequot model which can be considered a quotnaivequot approach to freewordorder languages and the functional model based on the information structure constraints stated in table 11for reasons discussed below slightly modified versions of the naive and the grammatical approaches will also be consideredthey are characterized by the 
additional constraint that antecedents of functional anaphors are ranked higher than the functional anaphors themselvesas in section 51 the evaluation was carried out manually by the authorssince most of the anaphors in these texts are nominal anaphors the resolution of which is much more restricted than that of pronominal anaphors the success rate for the whole anaphora resolution process is not distinctive enough for a proper evaluation of the functional constraintsthe reason for this lies in the fact that nominal anaphors are far more constrained by conceptual criteria than pronominal onesthus the chance of properly resolving a nominal anaphor even when ranked at a lower position in the center lists is greater than for pronominal anaphorsby shifting our evaluation criteria away from resolution success data to structural conditions reflecting the proper ordering of center lists these criteria are intended to compensate for the high proportion of nominal anaphora in our sampletable 5 enumerates the types of centering transitions we consider of centering transitions between the utterances in the three test setsthe first column contains those generated by the naive approach we simply ranked the elements of cif according to their text positionwhile it is usually assumed that the functional anaphor is ranked above its antecedent we assume the oppositethe second column contains the results of this modification with respect to the naive approachin the third column of table 18 we give the numbers of transitions generated by the grammatical constraints stated by grosz joshi and weinstein the fourth column supplies the results of the same modification as was used for the naive approach namely antecedents of functional anaphors are ranked higher than the corresponding anaphoric expressionsthe fifth column shows the results generated by the functional constraints from table 11524 interpretation the centering model assumes a preference order among transition typescontinue ranks above retain and retain ranks above shiftthis preference order reflects the presumed inference load put on the hearer to coherently decode a discoursesince the functional approach generates more continue transitions we interpret this as preliminary evidence that this approach provides for a more efficient processing than its competitorsin particular the observation of a predominance of continues holds irrespective of the various text genres we considered for functional centering and to a lesser degree for the modified grammatical ranking constraints525 method the arguments we have given so far do not seem to be entirely convincingcounting single occurrences of transition types in general does not reveal the entire validity of the center listsconsidering adjacent transition pairs as an indicator of validity should give a more reliable picture since depending on the text genre considered certain sequences of transition types may be entirely plausible though they include transitions which when viewed in isolation seem to imply considerable inferencing load for instance a continue transition that follows a continue transition is a sequence that requires the lowest processing costsbut a continue transition that follows a retain transition implies higher processing costs than a smoothshift transition following a retain transitionthis is due to the fact that a retain transition ideally predicts a smoothshift in the following utterancehence we claim that no one particular centering transition should be preferred over anotherinstead we 
advocate the idea that certain centering transition pairs are to be preferred over others. following this line of argumentation, we propose here to classify all occurrences of centering transition pairs with respect to the "costs" they imply. the cost-based evaluation of different cf orderings refers to evaluation criteria that form an intrinsic part of the centering model. transition pairs hold for three immediately successive utterances. we distinguish between two types of transition pairs, cheap ones and expensive ones: a transition pair is cheap if the backward-looking center of the current utterance is correctly predicted by the preferred center of the immediately preceding utterance, i.e., cb(ui) = cp(ui-1); it is expensive otherwise. in particular, chains of the retain transition in passages where the cb does not change show that the grammatical ordering constraints for the forward-looking centers are not appropriate.

5.2.6 results. the numbers of centering transition pairs generated by the different approaches are shown in table 19. in general, the functional approach reveals the best results, while the naive and the grammatical approaches work reasonably well for the literary text but exhibit a remarkably poorer performance for the texts from the IT domain and, to a lesser degree, from the news magazine. the results for the latter approaches improve only slightly with the modification of ranking the antecedent of a functional anaphor above the functional anaphor itself; in any case, they do not compare to the results of the functional approach. our use of the centering transitions led us to the conclusion that continue and smooth-shift are not completely specified by grosz, joshi, and weinstein and brennan, friedman, and pollard. according to brennan, friedman, and pollard's definition, it is possible that a transition is labeled smooth-shift even if cb(ui) ≠ cp(ui-1). such a shift is less smooth because it contradicts the intuition that a smooth-shift fulfills what a retain predicted. the same applies to a continue with this characteristic. hence we propose to extend the set of transitions as shown in table 20: the definitions of continue and smooth-shift are extended by the condition that cb(ui) = cp(ui-1), while exp-continue and exp-smooth-shift require the opposite; retain and rough-shift, which fulfill cb(ui) ≠ cp(ui), are taken over without further extensions. table 21 contains a complete overview of the transition pairs; only those whose second transition fulfills the criterion cb(ui) = cp(ui-1) are labeled as "cheap".

table 21. costs for transition pairs (first transition in the rows, second transition in the columns):

                cont    exp-cont  ret    smooth-s  exp-smooth-s  rough-s
  cont          cheap   exp       cheap  cheap     exp           exp
  exp-cont      exp     exp       exp    exp       exp           exp
  ret           exp     exp       exp    cheap     exp           exp
  smooth-s      cheap   exp       exp    exp       exp           exp
  exp-smooth-s  exp     exp       exp    exp       exp           exp
  rough-s       exp     exp       exp    cheap     exp           exp

grosz, joshi, and weinstein define rule 2 of the centering model on the basis of sequences of transitions: sequences of continue transitions are preferred over sequences of retain transitions, which are preferred over sequences of shift transitions. brennan, friedman, and pollard utilize this rule for anaphora resolution but restrict it to single transitions. based on the preceding discussion of cheap and expensive transition pairs, we propose to redefine rule 2 in terms of the costs of transition types. rule 2 then reads as follows. rule 2'': cheap transition pairs are preferred over expensive ones. we believe that this definition of rule 2 allows for a far better assessment of referential coherence in discourse than a definition in terms of sequences of transitions. for anaphora resolution, we interpret rule 2'' such that the preference for antecedents of anaphors in ui can be derived directly from the cf: the higher a discourse entity is ranked in the cf, the more likely it is the antecedent of a pronoun. we see the redefinition of rule 2 as the
theoretical basis for a centering algorithm for pronoun resolution that simply uses the cf as a preference ranking device like the basic centering algorithm shown in table 3in this algorithm the metaphor of costs translates into the number of elements of the cf that have to be tested until the correct antecedent is foundif the cp of the previous utterance is the correct one then the costs are indeed very lowwe were also interested in finding out whether the functional criteria we propose might explain the linguistic data in a more satisfactory way than the grammaticalrolebased criteria discussed so farso we screened sample data from the literature which were already annotated by centering analyses we achieved consistent results for the grammatical and the functional approach for all the examples contained in grosz joshi and weinstein but found diverging analyses for some examples discussed by brennan friedman and pollard while the retainshift combination in examples and did not indicate a difference between the approaches for the retaincontinue combination in examples and the two approaches led to different results a brennan drives an alfa romeowithin the functional approach the proper name friedman is unused and therefore the leftmost hearerold discourse entity of hence friedman is the most preferred antecedent for the pronoun she in and but is subjecthood really the decisive factorwhen we replace friedman with a hearernew discourse entity eg a professional driver as in the pronoun she is resolved to brennan because of the preference for continue over smoothshiftin she is resolved to driver because smoothshift is preferred over roughshift 10ca professional driver races her on weekendswithin the functional approach the evoked phrase her in is ranked higher than the brandnew phrase a professional drivertherefore the preference changes between example and in and the pronoun she is resolved to brennan the discourse entity denoted by her we find the analyses of functional centering to match our intuitions about the underlying referential relations more closely than those that are computed by grammatically based centering approacheshence in the light of this still preliminary evidence we answer the question we posed at the beginning of this subsection in the affirmativefunctional centering indeed explains the data in a more satisfying manner than other wellknown centering principlesto summarize the results of our empirical evaluation we claim first that our proposal based on functional criteria leads to substantially improved andwith respect to the inference load placed on the text understander whether human or machinemore plausible results for languages with free word order than the structural constraints given by grosz joshi and weinstein and those underlying the naive approachwe base these observations on an evaluation study that considers transition pairs in terms of the inference load specific pairs implysecond we have gathered preliminary evidence still far from conclusive that the functional constraints on centering seem to explain linguistic data more satisfactorily than the common grammaroriented constraintshence we hypothesize that these functional constraints might constitute a general framework for treating free and fixedwordorder languages by the same methodologythis claim without doubt has to be further substantiated by additional crosslinguistic empirical studiesthe costbased evaluation we focused on in this section refers to evaluation criteria that form an intrinsic part of the 
centering modelas a consequence we have redefined rule 2 of the centering constraints appropriatelywe replaced the characterization of a preference for sequences of continue over sequences of retain and similarly sequences of retain over sequences of shift by one in which cheap transitions are to be preferred over expensive onesapproaches to anaphora resolution based on focus devices partly use the information status of discourse entities to determine the current discourse focushowever a common area of criticism of these approaches is the diversity of data structures they requirethese data structures are likely to hide the underlying linguistic regularities because they promote the mix of preference and data structure considerations in the focusing algorithmsas an example sidner distinguishes between an actor focus and a discourse focus as well as corresponding lists vizpotential actor focus list and potential discourse focus listsufi and mccoy in their raftrapr approach use grammatical roles for ordering the focus lists and make a distinction between subject focus current focus and corresponding listsboth focusing algorithms prefer an element that represents the focus to the elements in the list when the anaphoric expression under consideration is not the agent or the subject relating these approaches to our proposal they already exhibit a weak preference for a single hearerold discourse elementdahl and ball describing the anaphora resolution module of the pundit system improve the focusing mechanism by simplifying its underlying data structuresthus their proposal is more closely related to the centering model than any other focusing mechanismfurthermore if there is a pronoun in the sentence for which the focus list is built the corresponding evoked discourse entity is shifted to the front of the listthe following elements of the focus list are ordered by grammatical roles againhence their approach still relies upon grammatical information for the ordering of the centering list while we use only the functional information structure as the guiding principlegiven its embedding in a cognitive theory of inference loads imposed on the hearer and even more importantly its fundamental role in a more comprehensive theory of discourse understanding based on linguistic attentional and intentional layers the centering model can be considered the first principled attempt to deal with preference orders for plausible antecedent selection for anaphorsits predecessors were entirely heuristic approaches to anaphora resolutionthese were concerned with various criteriabeyond strictly grammatical constraints such as agreementfor the optimization of the referent selection process based on preferential choicesan elaborate description of several of these preference criteria is supplied by carbonell and brown who discuss among others heuristics involving case role filling semantic and pragmatic alignment syntactic parallelism syntactic topicalization and intersentential recencygiven such a wealth of criteria one may either try to order them a priori in terms of importance oras was proposed by the majority of researchers in this field define several scoring functions that compute flexible orderings on the flythese combine the variety of available evidence each one usually annotated by a specific weight factor and finally map the weights to a single salience score these heuristics helped to improve the performance of discourseunderstanding systems through significant reductions of the available searchspace for 
antecedentstheir major drawback is that they require a great deal of skilled handcrafting that unfortunately usually does not scale in broader application domainshence proposals were made to replace these highlevel quotsymbolicquot categories by statistically interpreted occurrence patterns derived from large text corpora preferences then reflect patterns of statistically significant lexical usage rather than introspective abstractions of linguistic patterns such as syntactic parallelism or pragmatic alignmentamong the heuristic approaches to anaphora resolution those which consider the identification of heuristics a machine learning problem are particularly interesting since their heuristics dynamically adapt to the textual datafurthermore ml procedures operate on incomplete parses which distinguishes them from the requirements of perfect information and high data fidelity imposed by almost any other anaphora resolution schemeconnolly burger and day treat anaphora resolution as an ml classification problem and compare seven classifier approaches with the solution quality of a naive handcrafted algorithm whose heuristics incorporate the wellknown agreement and recency indicatorsaone and bennett outline an approach where they consider more than 60 features automatically obtained from the machinery of the host natural language processing system the learner is embedded inthe features under consideration include lexical ones like categories syntactic ones like grammatical roles semantic ones like semantic classes and text positional ones eg the distance between anaphor and antecedentthese features are packed in feature vectorsfor each pair of an anaphor and its possible antecedentand used to train a decision tree employing quinlan c45 algorithm or a whole battery of alternative classifiers in which hybrid variants yield the highest scores though still not fully worked out it is interesting to note that in both studies mlderived heuristics tend to outperform those that were carefully developed by human experts this indicates at least that heuristically based methods using simple combinations of features benefit from being exposed to and having to adapt to training datamlbased mechanisms might constitute an interesting perspective for the further tuning of ordering criteria for the forwardlooking centersthese mixed heuristic approaches using multidimensional metrics for ranking antecedent candidates diverge from the assumption that underlies the centering model that a single type of criterionthe attentional state and its representation in terms of the backward and forwardlooking centersis crucial for referent selectionby incorporating functional considerations in terms of the information structure of utterances into the centering model we actually enrich the types of knowledge that go into centered anaphora resolution decisions ie we extend the quotdimensionalityquot of the centering model toobut unlike the numerical scoring approaches our combination remains at the symbolic computation level preserves the modularity of criteria and in particular is linguistically justifiedalthough functional centering is not a complete theory of preferential anaphora resolution one should clearly stress the different goals behind heuristicsbased systems such as the ones just discussed and the model of centeringheuristic approaches combine introspectively acquired descriptive evidence and attempt to optimize reference resolution performance by proper evidence quotengineeringquotthis is often done in an 
admittedly ad hoc way requiring tricky retuning when new evidence is added on the other hand many of these systems work in a realworld environment in which noisy data and incomplete sometimes even faulty analysis results have to be accounted forthe centering model differs from these considerations in that it aims at unfolding a unified theory of discourse coherence at the linguistic attentional and intentional level hence the search for a more principled theorybased solution but also the need for perfect linguistic analyses in terms of parsing and semantic interpretationin this paper we provided a novel account for ordering the forwardlooking center list a major construct of the centering modelthe new formulation is entirely based on functional notions grounded in the information structure of utterances in a discoursewe motivated our proposal by the constraints that hold for a freewordorder language such as german and derived our results from empirical studies of realworld textswe also augmented the ordering criteria of the forwardlooking center list such that it accounts not only for nominal anaphora but also for inferables an issue that up to now has only been sketchily dealt with in the centering frameworkthe extensions we proposed were validated by the empirical analysis of various texts of considerable length selected from different domains and genresthe quotevaluation metricquot we used refers to a new costbased model of interpreting the validity of centering datathe distinction between cognitively cheap and expensive transition pairs led us to replace rule 2 from the original model by a formulation that explicitly incorporates this costoriented distinctiona resolution module for nominal anaphora and one for functional anaphora based on this functional centering model has been implemented as part of parsetalk a comprehensive text parser for german in our groupall these modules are fully operational and integrated within the textunderstanding backbone of syndik ate a largescale text knowledge acquisition system for the two realworld domains of information technology and medicine despite the progress made so far many research problems remain open for further consideration in the centering frameworkthe following list mentions only the most pertinent issues that have come to our attention and complements the list given by grosz joshi and weinstein 1the centering model is rather agnostic about the intricacies of complex sentences such as relative clauses subordinate clauses coordinations and complex noun phrasesthe problem caused by these structures for the centering model is how to decompose a complex sentence into centerupdating units and how to process complex utterances consisting of multiple clausesa first proposal is due to kameyama who breaks a complex sentence into a hierarchy of centerupdating unitsfurthermore she distinguishes several types of constructions in order to decide which part of the sentence is relevant for the resolution of an intersentential anaphor in the following sentencestrube and suri and mccoy describe similar approaches and provide algorithms for the interaction of the resolution of inter and intrasentential anaphora but the topic has certainly not been dealt with exhaustivelythe problem of complex nps was pointed out by walker and prince since the grammatical functions in a sentence may be realized by a complex np it is not clear how to rank these phrases in the cf listwalker and prince propose a quotworking hypothesisquot based on the surface orderstrube 
provides a complete specification for dealing with complex sentences but this approach departs significantly from the centering model2it seems that there exist only a few fully operational implementations of centeringbased algorithms since the interaction of the algorithm with global and local ambiguities generated by a sentence parser has not received much attention until nowa first proposal for how to deal with center ambiguity in an incremental text parser has been made by hahn and strube 3the centering model covers the standard cases of anaphora ie pronominal and nominal anaphora and even functional anaphora based on the proposal we have developed in this articleit does not however take into account several quothardquot issues such as plural anaphora generic definite noun phrases propositional anaphora and deictic forms these shortcomings might be traced back to the fact that the centering model up to now did not consider the role of the verb of the utterance under scrutinyother cases such as vp anaphora temporal anaphora have already been examined within the centering modelthe particular phenomenon of paycheck anaphora is described by hardt though he uses only a rather simplified centering model for this workother cases are only dealt with in the focusing framework such as propositional anaphora 4evaluations of the centering model have so far only been carried out manuallythis is clearly no longer rewarding so appropriate computational support environments have to be providedwhat we have in mind is a kind of discourse structure bank and associated workbenches comparable to grammar workbenches and parse treebanksaone and bennett for example report on a guibased discourse tagging tool that allows a user to link an anaphor with its antecedent and specify the type of the anaphor the tagged result can be written out to an sgmlmarked filearguing for the need for discourse taggers this also implies the development of a discourse structure interlingua for describing discourse structures in a common format in order to ease nonproblematic exchange and worldwide distribution of discourse structure data setssuch an environment would provide excellent conditions for further testing for example of our assumption that the information structure constraints we suggest might apply in a universal mannerin addition an explicit relation to basic notions from speech act theory is also missing though it should be considered vital for the global coherence of discourse in general it might become increasingly necessary to integrate very deep forms of reasoning perhaps even nonmonotonic or abductive inference mechanisms into the anaphora resolution processthis might become a sheer necessity when incrementality of processing receives a higher level of attention in the centering communitywe would like to thank our colleagues from the computational linguistics group in freiburg and at the university of pennsylvania for fruitful discussions in particular norbert broker miriam eckert aravind joshi manfred klenner nobo komagata katja markert peter neuhaus ellen prince rashmi prasad owen rambow susanne schacht and bonnie webberwe also owe special thanks to the four reviewers whose challenges and suggestions have considerably improved the presentation of our ideas about functional centering in this articlethe first author was partially funded by lgfg badenwurttemberg a postdoctoral grant from dfg and a postdoctoral fellowship award from the institute for research in cognitive science at the university of pennsylvania
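Editor's illustration. The article's ranking of the forward-looking centers (hearer-old before mediated before hearer-new, ties broken by text position) lends itself to a compact operational sketch. The following Python fragment is a reconstruction for illustration only, not the authors' implementation: the data layout, the status labels as strings, and all function names are assumptions introduced here.

```python
# Illustrative sketch (not the authors' code) of the extended functional
# Cf ranking: hearer-old entities outrank mediated ones, which outrank
# hearer-new ones; within a class, ties are broken by text position.
from dataclasses import dataclass

HEARER_OLD = {"evoked", "unused", "fa_antecedent"}            # OLD
MEDIATED = {"functional_anaphor", "inferrable",
            "containing_inferrable", "anchored_brand_new"}    # MED
# anything else (plain brand-new) counts as hearer-new        # NEW


@dataclass
class DiscourseEntity:
    surface: str    # linguistic surface expression
    position: int   # text position of the expression in the utterance
    status: str     # one of the labels above, or "brand_new"


def information_status_class(entity: DiscourseEntity) -> int:
    """Map an entity to its familiarity class: 0 = OLD, 1 = MED, 2 = NEW."""
    if entity.status in HEARER_OLD:
        return 0
    if entity.status in MEDIATED:
        return 1
    return 2


def rank_cf(entities):
    """Order the forward-looking centers of an utterance:
    OLD before MED before NEW; ties resolved by linear text position.
    Treating MEDIATED as NEW yields the basic (two-way) ranking."""
    return sorted(entities, key=lambda e: (information_status_class(e), e.position))


# Example: an evoked pronoun outranks an earlier brand-new noun phrase.
cf = rank_cf([
    DiscourseEntity("a professional driver", 1, "brand_new"),
    DiscourseEntity("her", 2, "evoked"),
])
assert [e.surface for e in cf] == ["her", "a professional driver"]
```

The sorting key mirrors the layered formulation of the constraints: the information-status class decides the coarse order, and text position only arbitrates within a class.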
J99-3001
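Editor's illustration. The cost-based evaluation described in the article can likewise be sketched in a few lines: transitions between adjacent utterances are labelled with the extended taxonomy, and a transition pair counts as cheap exactly when its second transition satisfies cb(ui) = cp(ui-1). The Utterance record and every name below are assumptions made for this example; the article defines no such interface.

```python
# Illustrative sketch of the extended transition taxonomy and the
# cheap/expensive classification of transition pairs discussed above.
# Utterance is an invented record holding the backward-looking center (cb)
# and the preferred center (cp) of an utterance.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Utterance:
    cb: Optional[str]   # backward-looking center (None for segment-initial)
    cp: str             # preferred center = highest-ranked element of Cf


def transition(prev: Utterance, cur: Utterance) -> str:
    """Extended transition between prev (U_{i-1}) and cur (U_i)."""
    same_cb = prev.cb is None or cur.cb == prev.cb
    predicted = cur.cb == prev.cp          # Cb(U_i) = Cp(U_{i-1})?
    if cur.cb == cur.cp:
        if same_cb:
            return "continue" if predicted else "exp-continue"
        return "smooth-shift" if predicted else "exp-smooth-shift"
    return "retain" if same_cb else "rough-shift"


def pair_cost(mid: Utterance, last: Utterance) -> str:
    """Cost of a transition pair over three successive utterances:
    cheap iff its second transition fulfills Cb(U_i) = Cp(U_{i-1})."""
    return "cheap" if last.cb == mid.cp else "expensive"


# Example: a retain followed by a smooth-shift is cheap, since the retain's
# prediction (the current Cp becomes the next Cb) is fulfilled.
u1 = Utterance(cb="sentence", cp="sentence")
u2 = Utterance(cb="sentence", cp="giuliani")   # retain: Cb kept, Cp differs
u3 = Utterance(cb="giuliani", cp="giuliani")   # smooth-shift: Cb = old Cp
assert transition(u1, u2) == "retain"
assert transition(u2, u3) == "smooth-shift"
assert pair_cost(u2, u3) == "cheap"
```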
functional centering grounding referential coherence in information structureconsidering empirical evidence from a freewordorder language we propose a revision of the principles guiding the ordering of discourse entities in the forwardlooking center list within the centering modelwe claim that grammatical role criteria should be replaced by criteria that reflect the functional information structure of the utterancesthese new criteria are based on the distinction between hearerold and hearernew discourse entitieswe demonstrate that such a functional model of centering can be successfully applied to the analysis of several forms of referential text phenomena viz pronominal nominal and functional anaphoraour methodological and empirical claims are substantiated by two evaluation studies in the first one we compare success rates for the resolution of pronominal anaphora that result from a grammatical roledriven centering algorithm and from a functional centering algorithmthe second study deals with a new costbased evaluation methodology for the assessment of centering data one which can be directly derived from and justified by the cognitive load premises of the centering modelwe introduce functional centering a variant of centering theory which utilizes information status distinctions between hearerold and hearernew entities
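Editor's illustration. To make concrete how the ranked Cf serves as a preference device in the resolution experiments summarized above, here is a minimal sketch, again not the authors' implementation: candidates from the previous utterance's Cf are tested in ranking order, an intrasentential fallback comes last, and the number of candidates tested before the antecedent is found plays the role of the resolution cost. agreement() is a stand-in for the agreement and sortal checks mentioned in the evaluation guidelines; its attribute names are invented here.

```python
# Minimal sketch (not from the article) of a functional-centering
# resolution step: the ranked Cf of the previous utterance is used
# directly as the preference list for a pronoun in the current utterance.
from collections import namedtuple

Entity = namedtuple("Entity", "surface gender number")


def agreement(pronoun, candidate) -> bool:
    # placeholder check: only syntactic agreement, no world knowledge
    return (pronoun.gender == candidate.gender
            and pronoun.number == candidate.number)


def resolve_pronoun(pronoun, cf_previous, cf_current_so_far=()):
    """Test antecedent candidates in ranking order: first the forward-looking
    centers of the previous utterance, then entities already realized in the
    current utterance (the intrasentential fallback, which rarely applies).
    Returns the chosen antecedent and how many candidates were tested."""
    candidates = list(cf_previous) + list(cf_current_so_far)
    for tested, candidate in enumerate(candidates, start=1):
        if agreement(pronoun, candidate):
            return candidate, tested
    return None, len(candidates)


# Example echoing the text fragment analysed above: the evoked entity
# ranked highest in the previous Cf is the preferred antecedent for "he".
he = Entity("he", "masc", "sg")
cf_prev = [Entity("mike", "masc", "sg"), Entity("signs", "neut", "pl")]
antecedent, cost = resolve_pronoun(he, cf_prev)
assert antecedent.surface == "mike" and cost == 1
```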
semiring parsing we synthesize work on parsing algorithms deductive parsing and the theory of algebra applied to formal languages into a general system for describing parsers each parser performs abstract computations using the operations of a semiring the system allows a single simple representation to be used for describing parsers that compute recognition derivation forests viterbi nbest inside values and other values simply by substituting the operations of different semirings we also show how to use the same representation interpreted differently to compute outside values the system can be used to describe a wide variety of parsers including earley algorithm tree adjoining grammar parsing graham harrison ruzzo parsing and prefix value computation we synthesize work on parsing algorithms deductive parsing and the theory of algebra applied to formal languages into a general system for describing parserseach parser performs abstract computations using the operations of a semiringthe system allows a single simple representation to be used for describing parsers that compute recognition derivation forests viterbi nbest inside values and other values simply by substituting the operations of different semiringswe also show how to use the same representation interpreted differently to compute outside valuesthe system can be used to describe a wide variety of parsers including earley algorithm tree adjoining grammar parsing graham harrison ruzzo parsing and prefix value computationfor a given grammar and string there are many interesting quantities we can computewe can determine whether the string is generated by the grammar we can enumerate all of the derivations of the string if the grammar is probabilistic we can compute the inside and outside probabilities of components of the stringtraditionally a different parser description has been needed to compute each of these valuesfor some parsers such as cky parsers all of these algorithms strongly resemble each otherfor other parsers such as earley parsers the algorithms for computing each value are somewhat different and a fair amount of work can be required to construct each onewe present a formalism for describing parsers such that a single simple description can be used to generate parsers that compute all of these quantities and othersthis will be especially useful for finding parsers for outside values and for parsers that can handle general grammars like earleystyle parsersalthough our description format is not limited to contextfree grammars we will begin by considering parsers for this common formalismthe input string will be denoted w1 w2 wnwe will refer to the complete string as the sentencea cfg g is a 4tuple where n is the set of nonterminals including the start symbol s e is the set of terminal symbols and r is the set of rules each of the form a a for a c n and a e we will use the symbol for immediate derivation and for its reflexive transitive closurewe will illustrate the similarity of parsers for computing different values using the cky algorithm as an examplewe can write this algorithm in its iterative form as shown in figure 1here we explicitly construct a boolean chart chart1nlini 1n 1element char a j contains true if and only if a we w1_1the algorithm consists of a first set of loops to handle the singleton productions a second set of loops to handle the binary productions and a return of the start symbol chart entrynext we consider probabilistic grammars in which we associate a probability with every rule pthese 
probabilities can be used to associate a probability for 1 2 to n length shortest to longest for s 1 to n 1 start position cky inside algorithm with a particular derivation equal to the product of the rule probabilities used in the derivation or to associate a probability with a set of derivations a w equal to the sum of the probabilities of the individual derivationswe call this latter probability the inside probability of iajwe can rewrite the cky algorithm to compute the inside probabilities as shown in figure 2 notice how similar the inside algorithm is to the recognition algorithm essentially all that has been done is to substitute for v x for a and p and p for truefor many parsing algorithms this or a similarly simple modification is all that is needed to create a probabilistic version of the algorithmon the other hand a simple substitution is not always sufficientto give a trivial example if in the cky recognition algorithm we had written charts a s1 charts a s1 v charts b st a chartst c s1 instead of the less natural charts a s 1 charts a s 1 v chartsb st a chartstc s 1 a true larger changes would be necessary to create the inside algorithmbesides recognition four other quantities are commonly computed by parsing algorithms derivation forests viterbi scores number of parses and outside probabilitiesthe first quantity a derivation forest is a data structure that allows one to efficiently compute the set of legal derivations of the input stringthe derivation forest is typically found by modifying the recognition algorithm to keep track of quotback pointersquot for each cell of how it was producedthe second quantity often computed is the viterbi score the probability of the most probable derivation of the sentencethis can typically be computed by substituting x for a and max for v less commonly computed is the total number of parses of the sentence which like the inside values can be computed using multiplication and addition unlike for the inside values the probabilities of the rules are not multiplied into the scoresthere is one last commonly computed quantity the outside probabilities which we will describe later in section 4one of the key points of this paper is that all five of these commonly computed quantities can be described as elements of complete semirings the relationship between grammars and semirings was discovered by chomsky and schtitzenberger and for parsing with the cky algorithm dates back to teitelbaum a complete semiring is a set of values over which a multiplicative operator and a commutative additive operator have been defined and for which infinite summations are definedfor parsing algorithms satisfying certain conditions the multiplicative and additive operations of any complete semiring can be used in place of a and v and correct values will be returnedwe will give a simple normal form for describing parsers then precisely define complete semirings and the conditions for correctnesswe now describe our normal form for parsers which is very similar to that used by shieber schabes and pereira and by sikkel this work can be thought of as a generalization from their work in the boolean semiring to semirings in generalin most parsers there is at least one chart of some formin our normal form we will use a corresponding equivalent concept itemsrather than for instance a chart element chart i a j we will use an item i afurthermore rather than use explicit procedural descriptions such as charts a s 1 charts a s 1 v chartsb s t a chartst c 91 a true we will use 
inference rules such as the meaning of an inference rule is that if the top line is all true then we can conclude the bottom linefor instance this example inference rule can be read as saying that if a because and b w wk_i and c wk w_i then a the general form for an inference rule will be where if the conditions a1 ak are all true then we infer that b is also truethe a can be either items or rules such as rwe write r rather than a because to indicate that we could be interested in a value associated with the rule such as the probability of the rule if we were computing inside probabilitiesif an a is in the form r we call it a ruleall of the a must be rules or items when we wish to refer to both rules and items we use the word termswe now give an example of an itembased description and its semanticsfigure 3 gives a description of a ckystyle parserfor this example we will use the inside semiring whose additive operator is addition and whose multiplicative operator is multiplicationwe use the input string xxx to the following grammar the effect of the unary rule will exactly parallel the first set of loops in the cky inside algorithmwe will instantiate the free variables of the unary rule in every possible wayfor instance we instantiate the free variable i with the value 1 and the free variable a with the nontermirtal xsince w1 x the instantiated rule is then because the value of the top line of the instantiated unary rule r has value 08 we deduce that the bottom line 1 x 2 has value 08we instantiate the rule in two other ways and compute the following chart values the effect of the binary rule will parallel the second set of loops for the cky inside algorithmconsider the instantiation i 1 k 2 j 3 a x b x c x we use the multiplicative operator of the semiring of interest to multiply together the values of the top line deducing that i x 3 02 x 08 x 08 0128similarly there are two more ways to instantiate the conditions of the binary rule the first has the value 1 x 08 x 0128 01024 and the second also has the value 01024when there is more than one way to derive a value for an item we use the additive operator of the semiring to sum them upthus 1 s 4 02048since 1 s 4 is the goal item for the cky parser we know that the inside value for xxx is 02048the goal item exactly parallels the return statement of the cky inside algorithmmany parsers are much more complicated than the cky parser and we will need to expand our notation a bit to describe themearley algorithm exhibits most of the complexities we wish to discussearley algorithm is often described as a bottomup parser with topdown filteringin a probabilistic framework the bottomup sections compute probabilities while the topdown filtering nonprobabilistically removes items that cannot be derivedto capture these differences we expand our notation for deduction rules to the following ci ci are side conditions interpreted nonprobabilistically while a1 ak are main conditions with values in whichever semiring we are usingwhile the values of all main conditions are multiplied together to yield the value for the item under the line the side conditions are interpreted in a boolean manner if all of them are nonzero the rule can be used but if any of them are zero it cannot beother than for checking whether they are zero or nonzero their values are ignoredfigure 4 gives an itembased description of earley parserwe assume the addition of a distinguished nonterminal s with a single rule s s an item of the form i a a 3j asserts that a a3 4 w the prediction 
rule includes a side condition making it a good examplethe rule is through the prediction rule earley algorithm guarantees that an item of the form jb y can only be produced if s w1 wi_ibs for some b this topdown filtering leads to significantly more efficient parsing for some grammars than the cky algorithmthe prediction rule combines side and main conditionsthe side condition i a a bo j provides the topdown filtering ensuring that only items that might be used later by the completion rule can be predicted while the main condition r provides the probability of the relevant rulethe side condition is interpreted in a boolean fashion while the main condition actual probability is usedunlike the cky algorithm barley algorithm can handle grammars with epsilon unary and nary branching rulesin some cases this can significantly complicate parsingfor instance given unary rules a b and b a a cycle existsthis kind of cycle may allow an infinite number of different derivations requiring an infinite summation to compute the inside probabilitiesthe ability of itembased parsers to handle these infinite loops with relative ease is a major attractionthis paper will simplify the development of new parsers in three important waysfirst it will simplify specification of parsers the itembased description is simpler than a procedural descriptionsecond it will make it easier to generalize parsers across tasks a single itembased description can be used to compute values for a variety of applications simply by changing semiringsthis will be especially advantageous for parsers that can handle loops resulting from rules like a a and computations resulting from productions both of which typically lead to infinite sumsin these cases the procedure for computing an infinite sum differs from semiring to semiring and the fact that we can specify that a parser computes an infinite sum separately from its method of computing that sum will be very helpfulthe third use of these techniques is for computing outside probabilities values related to the inside probabilities that we will define laterunlike the other quantities we wish to compute outside probabilities cannot be computed by simply substituting a different semiring into either an iterative or itembased descriptioninstead we will show how to compute the outside probabilities using a modified interpreter of the same itembased description used for computing the other valuesin the next section we describe the basics of semiring parsingin section 3 we derive formulas for computing most of the values in semiring parsers except outside values and then in section 4 show how to compute outside values as well hi section 5 we give an algorithm for interpreting an itembased description followed in section 6 by examples of using semiring parsers to solve a variety of problemssection 7 discusses previous work and section 8 concludes the paperin this section we first describe the inputs to a semiring parser a semiring an itembased description and a grammarnext we give the conditions under which a semiring parser gives correct resultsat the end of this section we discuss three especially complicated and interesting semiringsin this subsection we define and discuss semirings a semiring has two operations ed and 0 that intuitively have most of the properties of the conventional and x operations on the positive integersin particular we require the following properties 0 is associative and commutative 0 is associative and distributes over edif 0 is commutative we will say that the 
semiring is commutativewe assume an additive identity element which we write as 0 and a multiplicative identity element which we write as 1both addition and multiplication can be defined over finite sets of elements if the set is empty then the value is the respective identity element 0 or 1we also assume that x x 0 for all xin other words a semiring is just like a ring except that the additive operator need not have an inversewe will write to indicate a semiring over the set a with additive operator 0 multiplicative operator 0 additive identity 0 and multiplicative identity 1for parsers with loops ie those in which an item can be used to derive itself we will also require that sums of an infinite number of elements be well definedin particular we will require that the semirings be complete this means that sums of an infinite number of elements should be associative and commutative just like finite sums and that multiplication should distribute over infinite sums just as it does over finite onesall of the semirings we will deal with in this paper are complete2 all of the semirings we discuss here are also cocontinuousintuitively this means that if any partial sum of an infinite sequence is less than or equal to some value recognition string probability prob of best derivation number of derivations set of derivations best derivation best n derivations then the infinite sum is also less than or equal to that value3 this important property makes it easy to compute or at least approximate infinite sumsthere will be several especially useful semirings in this paper which are defined in figure 5we will write rb to indicate the set of real numbers from a to b inclusive with similar notation for the natural numbers n we will write e to indicate the set of all derivations in some canonical form and 2e to indicate the set of all sets of derivations in canonical formthere are three derivation semirings the derivation forest semiring the viterbiderivation semiring and the viterbinbest semiringthe operators used in the derivation semirings will be described later in section 25the inside semiring includes all nonnegative real numbers to be closed under addition and includes infinity to be closed under infinite sums while the viterbi semiring contains only numbers up to 1 since under max this still leads to closurethe three derivation forest semirings can be used to find especially important values the derivation forest semiring computes all derivations of a sentence the viterbiderivation semiring computes the most probable derivation and the viterbinbest semiring computes the n most probable derivationsa derivation is simply a list of rules from the grammarfrom a derivation a parse tree can be derived so the derivation forest semiring is analogous to conventional parse forestsunlike the other semirings all three of these semirings are noncommutativethe additive operation of these semirings is essentially union or maximum while the multiplicative operation is essentially concatenationthese semirings are described in more detail in section 25a semiring parser requires an itembased description of the parsing algorithm in the form given earlierso far we have skipped one important detail of semiring parsingin a simple recognition system as used in deduction systems all that matters is whether an item can be deduced or notthus in these simple systems the order of processing items is relatively unimportant as long as some simple constraints are meton the other hand for a semiring such as the inside semiring 
there are important ordering constraints we cannot compute the inside value of an item until the inside values of all of its children have been computedthus we need to impose an ordering on the items in such a way that no item precedes any item on which it dependswe will assign each item x to a quotbucketquot b writing bucket b and saying that item x is associated with bwe order the buckets in such a way that if item y depends on item x then bucket b and b a rules243 value of item derivationthe value of an item derivation d v is the product of the value of its rules r in the same order that they appear in the item derivation treesince rules occur only in the leaves of item derivation trees the order is precisely determinedfor an item derivation tree d with rule values d1 d2 d1 as its leaves alternatively we can write this equation recursively as r if d is a rule v oki 1 v if v _ continuing our example the value of the item derivation tree of figure 6 is the same as the value of the first grammar derivationlet inner represent the set of all item derivation trees headed by an item xthen the value of x is the sum of all the values of all item derivation trees headed by xformally the value of a sentence is just the value of the goal item v244 isovalued derivationsin certain cases a particular grammar derivation and a particular item derivation will have the same value for any semiring and any rule value function r in this case we say that the two derivations are isovaluedin particular if and only if the same rules occur in the same order in both derivations then their values will always be the same and they are isovaluedin figure 6 the grammar derivation and item derivation meet this conditionin some cases a grammar derivation and an item derivation will have the same value for any commutative semiring and any rule value functionin this case we say that the derivations are commutatively isovaluedfinishing our example the value of the goal item given our example sentence is just the sum of the values of the two itembased derivations 245 conditions for correctnesswe can now specify the conditions for an itembased description to be correctgiven an itembased description i if for every grammar g there exists a onetoone correspondence between the item derivations using i and the grammar derivations and the corresponding derivations are isovalued then for every complete semiring the value of a given input w1 wn is the same according to the grammar as the value of the goal itemthe proof is very simple essentially each term in each sum occurs in the otherby hypothesis for a given input there are grammar derivations e1 ek for a cfg constitute the primitive elements of the semiringthe additive operator you produces a union of derivations and the multiplicative operator produces the concatenation one derivation concatenated with the nextthe concatenation operation 0 is defined on both derivations and sets of derivations when applied to a set of derivations it produces the set of pairwise concatenationsthe additive identity is simply the empty set 0 union with the empty set is an identity operationthe multiplicative identity is the set containing the empty derivation concatenation with the empty derivation is an identity operationderivations need not be completefor instance for cfgs is a valid element as is in fact is a valid element although it could not occur in a valid grammar derivation or in a correctly functioning parseran example of concatenation potentially derivation forests are sets of infinitely 
many itemshowever it is still possible to store them using finitesized representationselsewhere we show how to implement derivation forests efficiently using pointers in a manner analogous to the typical implementation of parse forests and also similar to the work of billot and lang using these techniques both union and concatenation can be implemented in constant time and even infinite unions will be reasonably efficient probable derivation of the sentence given a probabilistic grammarelements of this semiring are a pair a real number v and a derivation forest e ie the set of derivations with score v we define max the additive operator as in typical practical viterbi parsers when two derivations have the same value one of the derivations is arbitrarily chosenin practice this is usually a fine solution and one that could be used in a realworld implementation of the ideas in this paper but from a theoretical viewpoint the arbitrary choice destroys the associative property of the additive operator maxto preserve associativity we keep derivation forests of all elements that tie for bestthe definition for max is only defined for two elementssince the operator is associative it is clear how to define max for any finite number of elements but we also v need infinite summations to be definitedwe use the supremum sup the supremum of a set is the smallest value at least as large as all elements of the set that is it is a maximum that is defined in the infinite casewe can now define max for the case of infinite sumslet vit where e d represents the concatenation of the two derivation forests best semiring which is used for constructing nbest listsintuitively the value of a string using this semiring will be the n most likely derivations of that string in practice this is actually how a viterbinbest semiring would typically be implementedfrom a theoretical viewpoint however this implementation is inadequate since we must also define infinite sums and be sure that the distributive property holdselsewhere we give a mathematically precise definition of the semiring that handles these casesrecall that the value of an item x is just v deinner11 the sum of the values of all derivation trees headed by xthis definition may require summing over exponentially many or even infinitely many termsin this section we give relatively efficient formulas for computing the values of itemsthere are three cases that must be handledfirst is the base case when x is a rulein this case inner is trivially the set containing the single derivation tree xthus v gdemner 17 the second and third cases occur when x is an itemrecall that each item is associated with a bucket and that the buckets are orderedeach item x is either associated with a nonlooping bucket in which case its value depends only on the values of items in earlier buckets or with a looping bucket in which case its value depends potentially on the values of other items in the same bucketin the case when the item is associated with a nonlooping bucket if we compute items in the same order as their buckets we can assume that the values of items al ak contributing to the value of item b are knownwe give a formula for computing the value of item b that depends only on the values of items in earlier bucketsfor the final case in which x is associated with a looping bucket infinite loops may occur when the value of two items in the same bucket are mutually dependent or an item depends on its own valuethese infinite loops may require computation of infinite sumsstill we can 
express these infinite sums in a relatively simple form allowing them to be efficiently computed or approximatedif an item x is not in a looping bucket then let us expand our notion of inner to include deduction rules inner is the set of all derivation trees of the form for any item derivation tree that is not a simple rule there is some al akb such that d e innerthus for any item x substituting this back into equation 6 we get completing the proof0 now we address the case in which x is an item in a looping bucketthis case requires computation of an infinite sumwe will write out this infinite sum and discuss how to compute it exactly in all cases except for one where we approximate itconsider the derivable items x1 xn in some looping bucket bif we build up derivation trees incrementally when we begin processing bucket b only those trees with no items from bucket b will be available what we will call zeroth generation derivation treeswe can put these zeroth generation trees together to form first generation trees headed by elements in bwe can combine these first generation trees with each other and with zeroth generation trees to form second generation trees and so onformally we define the generation of a derivation tree headed by x in bucket b to be the largest number of items in b we can encounter on a path from the root to a leafconsider the set of all trees of generation at most g headed by xcall this set inner 1 the proof parallels that of theorem 2 a formula for v b and b ain this case equation 8 forms a set of linear equations that can be solved by matrix inversionin the more general case as is likely to happen with epsilon rules we get a set of nonlinear equations and must solve them by approximation techniques such as simply computing successive generations for many iterationsstolcke provides an excellent discussion of these cases including a discussion of sparse matrix inversion useful for speeding up some computations5 note that even in the case where we can only use approximation techniques this algorithm is relatively efficientby assumption in this case there is at least one deduction rule with two items in the current generation thus the number of deduction trees over which we are summing grows exponentially with the number of generations a linear amount of computation yields the sum of the values of exponentially many treesthe previous section showed how to compute several of the most commonly used values for parsers including boolean inside viterbi counting and derivation forest values among othersnoticeably absent from the list are the outside probabilities which we define belowin general computing outside probabilities is significantly more complicated than computing inside probabilitiesin this section we show how to compute outside probabilities from the same itembased descriptions used for computing inside valuesoutside probabilities have many uses including for reestimating grammar probabilities for improving parser performance on some criteria for speeding parsing in some formalisms such as dataoriented parsing and for good thresholding algorithms we will show that by substituting other semirings we can get values analogous to the outside probabilities for any commutative semiring elsewhere we have shown that we can get similar values for many noncommutative semirings as wellwe will refer to these analogous quantities as reverse valuesfor instance the quantity analogous to the outside value for the viterbi semiring will be called the reverse viterbi valuenotice that 
the inside semiring values of a hidden markov model correspond to the forward values of hmms and the reverse inside values of an hmm correspond to the backwards valuescompare the outside algorithm given in figure 7 to the inside algorithm of figure 2notice that while the inside and recognition algorithms are very similar the outside algorithm is quite a bit differentin particular while the inside and recognition algorithms looped over items from shortest to longest the outside algorithm loops over items in the reverse order from longest to shortestalso compare the inside algorithm main loop formula to the outside algorithm main loop formulawhile there is clearly a relationship between the two equations the exact pattern of the relationship is not obviousnotice that the outside formula is about twice as complicated as the inside formulathis doubled complexity is typical of outside formulas and partially explains why the itembased description format is so useful descriptions for the simpler inside values can be developed with relative ease and then automatically used to compute the twiceascomplicated outside valuesitem derivation tree of goal and outer tree of bfor a contextfree grammar using the cky parser of figure 3 recall that the inside probability for an item i a j is pthe outside probability for the same item is pthus the outside probability has the property that when multiplied by the inside probability it gives the probability that the start symbol generates the sentence using the given item p this probability equals the sum of the probabilities of all derivations using the given itemformally letting p represent the probability of a particular derivation and c represent the number of occurrences of item i x j in derivation d the reverse values in general have an analogous meaninglet c represent the number of occurrences of item x in item derivation tree d then for an item x the reverse value z should have the property notice that we have multiplied an element of the semiring v by an integer cthis multiplication is meant to indicate repeated addition using the additive operator of the semiringthus for instance in the viterbi semiring multiplying by a count other than 0 has no effect since x x max x while in the inside semiring it corresponds to actual multiplicationthis value represents the sum of the values of all derivation trees that the item x occurs in if an item x occurs more than once in a derivation tree d then the value of d is counted more than onceto formally define the reverse value of an item x we must first define the outer trees outerconsider an item derivation tree of the goal item containing one or more instances of item xremove one of these instances of x and its children too leaving a gap in its placethis tree is an outer tree of xfigure 8 shows an item derivation tree of the goal item including a subderivation of an item b derived from terms a1 akit also shows an outer tree of b with b and its children removed the spot b was removed from is labeled parse regular grammars and tend to be less usefulthus in most parsers of interest k 1 and the complexity of outside equations when the sum is written out is at least doubledfor an outer tree d e outer we define its value z to be the product of the values of all rules in d ored r then the reverse value of an item can be formally defined as next we argue that this last expression equals the expression on the righthand side of equation 9 edd vcpxfor an item x any outer part of an item derivation tree for x can be combined 
with any inner part to form a complete item derivation treethat is any 0 e outer and any i e inner can be combined to form an item derivation tree d containing x and any item derivation tree d containing x can be decomposed into such outer and inner treesthus the list of all combinations of outer and inner trees corresponds exactly to the list of all item derivation trees containing xin fact for an item derivation tree d containing c instances of x there are c ways to form d from combinations of outer and inner treesalso notice that for d combined from 0 and i completing the proof0 there is a simple recursive formula for efficiently computing reverse valuesrecall that the basic equation for computing forward values not involved in loops was at this point for conciseness we introduce a nonstandard notationwe will soon be using many sequences of the form 1 2 j2j1j1j 2 k 1kwe denote such sequences by 1 k by extension we will also write f to indicate a sequence of the form f f ff f ff f now we can give a simple formula for computing reverse values z not involved in loops theorem 5 for items x e b where b is nonlooping the simple case is when x is the goal itemsince an outer tree of the goal item is a derivation of the goal item with the goal item and its children removed and since we assumed in section 22 that the goal item can only appear in the root of a derivation tree the outer trees of the goal item are all emptythus as mentioned in section 21 the value of the empty product is the multiplicative identitynow we consider the general casewe need to expand our concept of outer to include deduction rules where outer is an item derivation tree of the goal item with one subtree removed a subtree headed by al whose parent is b and whose siblings are headed by al aknotice that for every outer tree d e outer there is exactly one al ak and b such that x aj and d e outer this corresponds to the deduction rule used at the spot in the tree where the subtree headed by x was deletedfigure 9 illustrates the idea of putting together an outer tree of b with inner trees for al 7 ak to form an outer tree of x al using this observation akb st al ak a xaj deouter akcombining an outer tree with inner trees to form an outer treenow consider all of the outer trees outerfor each item derivation tree dai e inner and for each outer tree db e outer there will be one outer tree in the set outer similarly each tree in outer can be decomposed into an outer tree in outer and derivation trees for for some parsers this technique has optimal time complexity although poor space complexity in particular for the cky algorithm the time complexity is optimal but since it requires computing and storing all possible 0 dependencies between the items it takes significantly more space than the 0 space required in the best implementationin general the bruteforce technique raises the space complexity to be the same as the time complexityfurthermore for some algorithms such as earley algorithm there could be a significant time complexity added as wellin particular earley algorithm may not need to examine all possible itemsfor certain grammars earley algorithm examines only a linear number of items and a linear number of dependencies even though there are 0 possible items and 0 possible dependenciesthus the bruteforce approach would require 0 time and space instead of 0 time and space for these grammarsthe next approach to finding the bucketing solves the time complexity problemin this approach we first parse in the boolean semiring using 
the agenda parser described by shieber schabes and pereira and then we perform a topological sortthe techniques that shieber schabes and pereira use work well for the boolean semiring where items only have value true or false but cannot be used directly for for current first bucket to last bucket if current is a looping bucket other semiringsfor other semirings we need to make sure that the values of items are not computed until after the values of all items they depend on are computedhowever we can use the algorithm of shieber schabes and pereira to compute all of the items that are derivable and to store all of the dependencies between the itemsthen we perform a topological sort on the itemsthe time complexity of both the agenda parser and the topological sort will be proportional to the number of dependencies which will be proportional to the optimal time complexityunfortunately we still have the space complexity problem since again the space used will be proportional to the number of dependencies rather than to the number of itemsthe third approach to bucketing is to create algorithmspecific bucketing code this results in parsers with both optimal time and optimal space complexityfor instance in a ckystyle parser we can simply create one bucket for each length and place each item into the bucket for its lengthfor some algorithms such as earley algorithm specialpurpose code for bucketing might have to be combined with code to make sure all and only derivable items are considered in order to achieve optimal performanceonce we have the bucketing the parsing step is fairly simplethe basic algorithm appears in figure 10we simply loop over each item in each bucketthere are two types of buckets looping buckets and nonlooping bucketsif the current bucket is a looping bucket we compute the infinite sum needed to determine the bucket values in a working system we substitute semiringspecific code for this section as described in section 32if the bucket is not a looping bucket we simply compute all of the possible instantiations that could contribute to the values of items in that bucketfinally we return the value of the goal itemthe reverse semiring parser interpreter is very similar to the forward semiring parser interpreterthe differences are that in the reverse semiring parser interpreter we traverse the buckets in reverse order and we use the formulas for the reverse values rather than the forward valueselsewhere we give a simple inductive proof to show that both interpreters compute the correct valuesthere are two other implementation issuesfirst for some parsers it will be possible to discard some itemsthat is some items serve the role of temporary variables and can be discarded after they are no longer needed especially if only the forward values are going to be computedalso some items do not depend on the input string but only on the rule value function of the grammarthe values of these items can be precomputedin this section we survey other results that are described in more detail elsewhere including examples of formalisms that can be parsed using itembased descriptions and other uses for the technique of semiring parsingnondeterministic finitestate automata and hmms turn out to be examples of the same underlying formalism whose values are simply computed in different semiringsother semirings lead to other interesting valuesfor hmms notice that the forward values are simply the forward inside values the backward values are the reverse values of the inside semiring and viterbi values are 
the forward values of the viterbi semiringfor nfas we can use the boolean semiring to determine whether a string is in the language of an nfa we can use the counting semiring to determine how many state sequences there are in the nfa for a given string and we can use the derivation forest semiring to get a compact representation of all state sequences in an nfa for an input stringa single itembased description can be used to find all of these valuesfor language modeling it may be useful to compute the prefix probability of a stringthat is given a string wn we may wish to know the total probability of all sentences beginning with that string where 01 vk represent words that could possibly follow w1 wnjelinek and lafferty and stolcke both give algorithms for computing these prefix probabilitieselsewhere we show how to produce an itembased description of a prefix parserthere are two main advantages to using an itembased description ease of derivation and reusabilityfirst the conventional derivations are somewhat complex requiring a fair amount of insidesemiringspecific mathematicsin contrast using itembased descriptions we only need to derive a parser that has the property that there is one item derivation for each grammar derivation that would produce the prefixthe value of any prefix given the parser will then automatically be the sum of all grammar derivations that include that prefixthe other advantage is that the same description can be used to compute many values not just the prefix probabilityfor instance we can use this description with the viterbiderivation semiring to find the most likely derivation that includes this prefixwith this most likely derivation we could begin interpretation of a sentence even before the sentence was finished being spoken to a speech recognition systemwe could even use the viterbinbest semiring to find the n most likely derivations that include this prefix if we wanted to take into account ambiguities present in parses of the prefixthere has been quite a bit of previous work on the intersection of formal language theory and algebra as described by kuich among othersthis previous work has made heavy use of the fact that there is a strong correspondence between algebraic equations in certain noncommutative semirings and cfgsthis correspondence has made it possible to manipulate algebraic systems rather than grammar systems simplifying many operationson the other hand there is an inherent limit to such an approach namely a limit to contextfree systemsit is then perhaps slightly surprising that we can avoid these limitations and create itembased descriptions of parsers for weakly contextsensitive grammars such as tree adjoining grammars we avoid the limitations of previous approaches using two techniquesone technique is to compute derivation trees rather than parse trees for tagscomputing derivation trees for tags is significantly easier than computing parse trees since the derivation trees are contextfreethe other trick we use is to create a set of equations for each grammar and string length rather than creating a set of equations for each grammar as earlier formulations didbecause the number of equations grows with the string length with our technique we can recognize strings in weakly contextsensitive languagesgoodman gives a further explication of this subject including an itembased description for a simple tag parserour goal in this section has been to show that itembased descriptions can be used to simply describe almost all parsers of interestone 
parsing algorithm that would seem particularly difficult to describe is tomita graphstructuredstack lr parsing algorithmthis algorithm at first glance bears little resemblance to other parsing algorithmsdespite this lack of similarity sikkel gives an itembased description for a tomitastyle parser for the boolean semiring which is also more efficient than tomita algorithmsikkel parser can be easily converted to our format where it can be used for wcontinuous semirings in generalgraham harrison and ruzzo describe a parser similar to earley but with several speedups that lead to significant improvementsessentially there are three improvements in the ghr parserfirst epsilon productions are precomputed second unary productions are precomputed and finally completion is separated into two steps allowing better dynamic programminggoodman gives a full itembased description of a ghr parserthe forward values of many of the items in our parser related to unary and epsilon productions can be computed offline once per grammar which is an idea due to stolcke since reverse values require entire strings the reverse values of these items cannot be computed until the input string is knownbecause we use a single itembased description for precomputed items and nonprecomputed items and for forward and reverse values this combination of offline and online computation is easily and compactly specifiedwe can apply the same techniques to grammar transformations that we have so far applied to parsingconsider a grammar transformation such as the chomsky normal form grammar transformation which takes a grammar with epsilon unary and nary branching productions and converts it into one in which all productions are of the form a because or a afor any sentence w1 wn its value under the original grammar in the boolean semiring is the same as its value under a transformed grammartherefore we say that this grammar transformation is value preserving under the boolean semiringwe can generalize this concept of value preserving to other semiringselsewhere we show that using essentially the same itembased descriptions we have used for parsing we can specify grammar transformationsthe concept of value preserving grammar transformation is already known in the intersection of formal language theory and algebra our contribution is to show that these value preserving transformations can be written as simple itembased descriptions allowing the same computational machinery to be used for grammar transformations as is used for parsing and to some extent showing the relationship between certain grammar transformations and certain parsers such as that of graham harrison and ruzzo this uniform method of specifying grammar transformations is similar to but clearer than similar techniques used with covering grammars the previous work in this area is extensive including work in deductive parsing work in statistical parsing and work in the combination of formal language theory and algebrathis paper can be thought of as synthetic combining the work in all three areas although in the course of synthesis several general formulas have been found most notably the general formula for reverse valuesa comprehensive examination of all three areas is beyond the scope of this paper but we can touch on a few significant areas of eachfirst there is the work in deductive parsingthis work in some sense dates back to earley in which the use of items in parsers is introducedmore recent work demonstrates how to use deduction engines for parsingfinally both 
shieber schabes and pereira and sikkel have shown how to specify parsers in a simple interpretable itembased formatthis format is roughly the format we have used here although there are differences due to the fact that their work was strictly in the boolean semiringwork in statistical parsing has also greatly influenced this workwe can trace this work back to research in hmms by baum and his colleagues in particular the work of baum developed the concept of backward probabilities as well as many of the techniques for computing in the inside semiringviterbi developed corresponding algorithms for computing in the viterbi semiringbaker extended the work of baum and his colleagues to pcfgs including to computation of the outside values baker work is described by lan i and young baker work was only for pcfgs in cnf avoiding the need to compute infinite summationsjelinek and lafferty showed how to compute some of the infinite summations in the inside semiring those needed to compute the prefix probabilities of pcfgs in cnfstolcke showed how to use the same techniques to compute inside probabilities for earley parsing dealing with the difficult problems of unary transitions and the more difficult problems of epsilon transitionshe thus solved all of the important problems encountered in using an itembased parser to compute the inside and outside values he also showed how to compute the forward viterbi valuesthe final area of work is in formal language theory and algebraalthough it is not widely known there has been quite a bit of work showing how to use formal power series to elegantly derive results in formal language theory dating back to chomsky and schiitzenberger the major classic results can be derived in this framework but with the added benefit that they apply to all commutative wcontinuous semiringsthe most accessible introduction to this literature we have found is by kuich there are also books by salomaa and soittola and kuich and salomaa one piece of work deserves special mentionteitelbaum showed that any semiring could be used in the cky algorithm laying the foundation for much of the work that followedin summary this paper synthesizes work from several different related fields including deductive parsing statistical parsing and formal language theory we emulate and expand on the earlier synthesis of teitelbaumthe synthesis here is powerful by generalizing and integrating many results we make the computation of a much wider variety of values possiblethere has also been recent similar work by tendeau tendeau gives an earleylike algorithm that can be adapted to work with complete semirings satisfying certain conditionsunlike our version of earley algorithm tendeau version requires time o where l is the length of the longest righthand side as opposed to 0 for the classic version and for our descriptionwhile one could split righthand sides of rules to make them binary branching speeding tendeau version up this would then change values in the derivation semiringstendeau introduces a parse forest semiring similar to our derivation forest semiring in that it encodes a parse forest succinctlyto implement this semiring tendeau version of rule value functions take as their input not only a nonterminal but also the span that it covers this is somewhat less elegant than our versiontendeau gives a generic description for dynamic programming algorithmshis description is very similar to our itembased descriptions except that it does not include side conditionsthus algorithms such as earley algorithm 
cannot be described in tendeau formalism in a way that captures their efficiencythere are some similarities between our work and the work of koller mcallester and pfeffer who create a general formalism for handling stochastic programs that makes it easy to compute inside and outside probabilitieswhile their formalism is more general than itembased descriptions in that it is a good way to express any stochastic program it is also less compact than ours for expressing most dynamic programming algorithmsour formalism also has advantages for approximating infinite sums which we can do efficiently and in some cases exactlyit would be interesting to try to extend itembased descriptions to capture some of the formalisms covered by koller mcallester and pfeffer including bayes netsin this paper we have given a simple itembased description format that can be used to describe a very wide variety of parsersthese parsers include the cky algorithm earley algorithm prefix probability computation a tag parsing algorithm graham harrison ruzzo parsing and hmm computationswe have shown that this description format makes it easy to find parsers that compute values in any wcontinuous semiringthe same description can be used to find reverse values in commutative wcontinuous semirings and in many noncommutative ones as wellthis description format can also be used to describe grammar transformations including transformations to cnf and gnf which preserve values in any commutative wcontinuous semiringwhile theoretical in nature this paper is of some practical valuethere are three reasons the results of this paper would be used in practice first these techniques make computation of the outside values simple and mechanical second these techniques make it easy to show that a parser will work in any wcontinuous semiring and third these techniques isolate computation of infinite sums in a given semiring from the parser specification processperhaps the most useful application of these results is in finding formulas for outside valuesfor parsers such as cky parsers finding outside formulas is not particularly burdensome but for complicated parsers such as tag parsers ghr parsers and others it can require a fair amount of thought to find these equations through conventional reasoningwith these techniques the formulas can be found in a simple mechanical waythe second advantage comes from clarifying the conditions under which a parser can be converted from computing values in the boolean semiring to computing values in any wcontinuous semiringwe should note that because in the boolean semiring infinite summations can be computed trivially and because repeatedly adding a term does not change results it is not uncommon for parsers that work in the boolean semiring to require significant modification for other semiringsfor parsers like cky parsers verifying that the parser will work in any semiring is trivial but for other parsers the conditions are more complexwith the techniques in this paper all that is necessary is to show that there is a onetoone correspondence between item derivations and grammar derivationsonce that has been shown any wcontinuous semiring can be usedthe third use of this paper is to separate the computation of infinite sums from the main parsing processinfinite sums can come from several different phenomena such as loops from productions of the form a a productions involving c and left recursionin traditional procedural specifications the solution to these difficult problems is intermixed with the 
parser specification and makes the parser specific to semirings using the same techniques for solving the summationsit is important to notice that the algorithms for solving these infinite summations vary fairly widely depending on the semiringon the one hand boolean infinite summations are nearly trivial to computefor other semirings such as the counting semiring or derivation forest semiring more complicated computations are required including the detection of loopsfinally for the inside semiring in most cases only approximate techniques can be used although in some cases matrix inversion can be usedthus the actual parsing algorithm if specified procedurally can vary quite a bit depending on the semiringon the other hand using our techniques makes infinite sums easier to deal with in two waysfirst these difficult problems are separated out relegated conceptually to the parser interpreter where they can be ignored by the constructor of the parsing algorithmsecond because they are separated out they can be solved once rather than again and againboth of these advantages make it significantly easier to construct parserseven in the case where for efficiency loops are precomputed offline as in ghr parsing the same itembased representation and interpreter can be usedin summary the techniques of this paper will make it easier to compute outside values easier to construct parsers that work for any wcontinuous semiring and easier to compute infinite sums in those semiringsin 1973 teitelbaum wrote we have pointed out the relevance of the theory of algebraic power series in noncommuting variables in order to minimize further piecemeal rediscovery many of the techniques needed to parse in specific semirings continue to be rediscovered and outside formulas are derived without observation of the basic formulas given herewe hope this paper will bring about teitelbaum wish
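To make the semiring abstraction concrete, the following is a minimal Python sketch, not the paper's item-based interpreter, of a CKY-style chart computation parameterized by a semiring: substituting the Boolean, counting, inside, or Viterbi semiring changes what value the same chart returns. The function and variable names, the toy grammar, and its rule probabilities are invented for illustration, and only binary and lexical rules are handled, so no looping buckets or infinite sums arise.

from collections import defaultdict

# Each semiring is a tuple (zero, one, plus, times); "one" is unused below but
# kept so the tuple matches the definition of a semiring given above.
BOOLEAN  = (False, True, lambda a, b: a or b, lambda a, b: a and b)
COUNTING = (0,     1,    lambda a, b: a + b,  lambda a, b: a * b)
INSIDE   = (0.0,   1.0,  lambda a, b: a + b,  lambda a, b: a * b)
VITERBI  = (0.0,   1.0,  max,                 lambda a, b: a * b)

def cky_value(words, unary, binary, semiring):
    # Value of the goal item [0, S, n] under the given semiring.
    # unary  maps (A, word) -> rule value R(A -> word)
    # binary maps (A, B, C) -> rule value R(A -> B C)
    # Rule values must already live in the semiring (True/False, counts, probabilities).
    zero, one, plus, times = semiring
    n = len(words)
    chart = defaultdict(lambda: zero)                  # chart[(i, A, j)] = value of item
    for i, w in enumerate(words):                      # lexical items
        for (A, word), r in unary.items():
            if word == w:
                chart[(i, A, i + 1)] = plus(chart[(i, A, i + 1)], r)
    for span in range(2, n + 1):                       # larger items, shortest first,
        for i in range(n - span + 1):                  # so no looping buckets are needed
            j = i + span
            for k in range(i + 1, j):                  # split point
                for (A, B, C), r in binary.items():
                    contrib = times(r, times(chart[(i, B, k)], chart[(k, C, j)]))
                    chart[(i, A, j)] = plus(chart[(i, A, j)], contrib)
    return chart[(0, 'S', n)]

# Toy grammar in Chomsky normal form; probabilities are invented.
binary = {('S', 'NP', 'VP'): 1.0, ('VP', 'V', 'NP'): 1.0}
unary  = {('NP', 'she'): 0.4, ('NP', 'fish'): 0.6, ('V', 'eats'): 1.0}
sent   = ['she', 'eats', 'fish']

print(cky_value(sent, unary, binary, INSIDE))    # inside value: 0.24
print(cky_value(sent, unary, binary, VITERBI))   # best-derivation value: 0.24
print(cky_value(sent, {k: True for k in unary}, {k: True for k in binary}, BOOLEAN))   # recognition: True
print(cky_value(sent, {k: 1 for k in unary},    {k: 1 for k in binary},    COUNTING))  # number of derivations: 1

Because only the plus and times operations are semiring-specific, the same function could in principle be pointed at the derivation-forest or Viterbi-n-best semirings as well, provided their operations are supplied in the same (zero, one, plus, times) form.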
J99-4004
Semiring parsing. We synthesize work on parsing algorithms, deductive parsing, and the theory of algebra applied to formal languages into a general system for describing parsers. Each parser performs abstract computations using the operations of a semiring. The system allows a single, simple representation to be used for describing parsers that compute recognition, derivation forests, Viterbi, n-best, inside values, and other values, simply by substituting the operations of different semirings. We also show how to use the same representation, interpreted differently, to compute outside values. The system can be used to describe a wide variety of parsers, including Earley's algorithm, tree-adjoining grammar parsing, Graham-Harrison-Ruzzo parsing, and prefix value computation. We show how a parsing logic can be combined with various semirings to compute different kinds of information about the input. We augment such logic programs with semiring weights, giving an algebraic explanation for the intuitive connections among classes of algorithms with the same logical structure.
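The derivation semirings can be illustrated by plugging a different operation table into the cky_value sketch given earlier (the grammar, sentence, and function names are reused from that sketch and remain assumptions). In this hedged version an element is a (score, rule-sequence) pair: the additive operation keeps the better-scoring derivation and the multiplicative operation multiplies scores while concatenating rule sequences. Ties are broken arbitrarily here, whereas the paper keeps a forest of all tied derivations so that the additive operator stays associative.

def vd_plus(a, b):
    return a if a[0] >= b[0] else b                    # keep the better-scoring derivation

def vd_times(a, b):
    return (a[0] * b[0], a[1] + b[1])                  # multiply scores, concatenate rules

VITERBI_DERIV = ((0.0, ()), (1.0, ()), vd_plus, vd_times)

# Rule values now carry the rule itself as a one-element derivation.
vd_unary  = {k: (p, ('%s -> %s' % k,))    for k, p in unary.items()}
vd_binary = {k: (p, ('%s -> %s %s' % k,)) for k, p in binary.items()}

score, derivation = cky_value(sent, vd_unary, vd_binary, VITERBI_DERIV)
print(score)        # 0.24
print(derivation)   # rules of the best derivation, outermost first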
decoding complexity in wordreplacement translation models statistical machine translation is a relatively new approach to the longstanding problem of translating human languages by computer current statistical techniques uncover translation rules from bilingual training texts and use those rules to translate new texts the general architecture is the sourcechannel model an english string is statistically generated then statistically transformed into french in order to translate a french string we look for the most likely english source we show that for the simplest form of statistical models this problem is npcomplete ie probably exponential in the length of the observed sentence we trace this complexity to factors not present in other decoding problems statistical machine translation is a relatively new approach to the longstanding problem of translating human languages by computercurrent statistical techniques uncover translation rules from bilingual training texts and use those rules to translate new textsthe general architecture is the sourcechannel model an english string is statistically generated then statistically transformed into french in order to translate a french string we look for the most likely english sourcewe show that for the simplest form of statistical models this problem is npcomplete ie probably exponential in the length of the observed sentencewe trace this complexity to factors not present in other decoding problemsstatistical models are widely used in attacking natural language problemsthe sourcechannel framework is especially popular finding applications in partofspeech tagging accent restoration transliteration speech recognition and many other areasin this framework we build an underspecified model of how certain structures are generated and transformedwe then instantiate the model through training on a database of sample structures and transformationsrecently brown et al built a sourcechannel model of translation between english and frenchthey assumed that english strings are produced according to some stochastic process and transformed stochastically into french strings to translate french to english it is necessary to find an english source string that is likely according to the modelswith a nod to its cryptographic antecedents this kind of translation is called decodingthis paper looks at decoding complexitythe prototype sourcechannel application in natural language is partofspeech tagging we review it here for purposes of comparison with machine translationsource strings comprise sequences of partofspeech tags like noun verb etca simple source model assigns a probability to a tag sequence ti tm based on the probabilities of the tag pairs inside ittarget strings are english sentences eg w1 wmthe channel model assumes each tag is probabilistically replaced by a word without considering contextmore concretely we have we can assign partsofspeech to a previously unseen word sequence w1 win by finding the sequence ti 4 that maximizes pby bayes rule we can equivalently maximize pp which we can calculate directly from the b and s tables abovethree interesting complexity problems in the sourcechannel framework are the first problem is solved in 0 time for partofspeech taggingwe simply count tag pairs and wordtag pairs then normalizethe second problem seems to require enumerating all 0 potential source sequences to find the best but can actually be solved in 0 time with dynamic programmingwe turn to the third problem in the context of another application 
cryptanalysisin a substitution cipher a plaintext message like hello world is transformed into a ciphertext message like eoppx yxapf via a fixed lettersubstitution tableas with tagging we can assume an alphabet of v source tokens a bigram source model a substitution channel model and an mtoken coded textif the coded text is annotated with corresponding english then building source and channel models is trivially 0comparing the situation to partofspeech tagging then the problem becomes one of acquiring a channel model ie a table s with an entry for each codeletterplaintextletter pairstarting with an initially uniform table we can use the estimationmaximization algorithm to iteratively revise s so as to increase the probability of the observed corpus pfigure 1 shows a naive them implementation that runs in 0 timethere is an efficient 0 them implementation based on dynamic programming that accomplishes the same thingonce the s table has been learned there is a similar 0 algorithm for optimal decodingsuch methods can break english lettersubstitution ciphers of moderate sizegiven coded text f of length m a plaintext vocabulary of v tokens and a source model b a naive application of the them algorithm to break a substitution cipherit runs in 0 timein our discussion of substitution ciphers we were on relatively sure groundthe channel model we assumed in decoding is actually the same one used by the cipher writer for encodingthat is we know that plaintext is converted to ciphertext letter by letter according to some tablewe have no such clear conception about how english gets converted to french although many theories existbrown et al recently cast some simple theories into a sourcechannel framework using the bilingual canadian parliament proceedings as training datawe may assume bilingual texts seem to exhibit english words getting substituted with french ones though not oneforone and not without changing their orderthese are important departures from the two applications discussed earlierin the main channel model of brown et al each english word token e in a source sentence is assigned a quotfertilityquot 0 which dictates how many french words it will producethese assignments are made stochastically according to a table nthen actual french words are produced according to s and permuted into new positions according to a distortion table dhere j and i are absolute targetsource word positions within a sentence and m and i are targetsource sentence lengthsinducing n s and d parameter estimates is easy if we are given annotations in the form of word alignmentsan alignment is a set of connections between english and french words in a sentence pairin brown et al alignments are asymmetric each french word is connected to exactly one english wordwordaligned data is usually not available but large sets of unaligned bilingual sentence pairs do sometimes exista single sentence pair will have right now possible alignmentsfor each french word position 1 m there is a choice of i english positions to connect toa naive them implementation will collect n s and d counts by considering each alignment but this is expensive traininglacking a polynomial reformulation brown et al decided to collect counts only over a subset of likely alignmentsto bootstrap they required some initial idea of what alignments are reasonable so they began with several iterations of a simpler channel model that has nicer computational propertiesin the following description of model 1 we represent an alignment formally as a vector al with 
values al ranging over english word positions 1 1 model 1 channel parameters c and sgiven a source sentence e of length 1 because the same e may produce the same f by means of many different alignments we must sum over all of them to obtain p figure 2 illustrates naive them training for model 1if we compute p once per iteration outside the quotfor aquot loops then the complexity is 0 per sentence pair per iterationmore efficient 0 training was devised by brown et al instead of prowe next consider decodingwe seek a string e that maximizes p or equivalently maximizes p pa naive algorithm would evaluate all possible source strings whose lengths are potentially unboundedif we limit our search to strings at most twice the length m of our observed french then we have a naive 0 method given a string f of length m we may now hope to find a way of reorganizing this computation using tricks like the ones aboveunfortunately we are unlikely to succeed as we now showfor proof purposes we define our optimization problem with an associated yesno decision problemgiven a string f of length m and a set of parameter tables return a string e of length 1 kwe will leave the relationship between these two problems somewhat open and intuitive noting only that m1decide intractability does not bode well for mloptimizeto show inclusion in np we need only nondeterministically choose e for any problem instance and verify that it has the requisite p p in 0 timenext we give separate polynomialtime reductions from two npcomplete problemseach reduction highlights a different source of complexitythe hamilton circuit problem asks given a directed graph g with vertices labeled 0 n does g have a path that visits each vertex exactly once and returns to its starting pointwe transform any hamilton circuit instance into an m1decide instance as followsfirst we create a french vocabulary fn associating word fi with vertex i in the graphwe create a slightly larger english vocabulary eo en with eo serving as the quotboundaryquot word for source model scoringultimately we will ask mldecide to decode the string fi fnwe create channel model tables as follows these tables ensure that any decoding e of fi ft will contain the n words el en we now create a source modelfor every pair such that 0 this year comma my 4 birthday falls on a thursday boundaryif word pairs have probabilities attached to them then word ordering resembles the finding the leastcost circuit also known as the traveling salesman problemsalesman problemit introduces edge costs c111 and seeks a minimumcost circuitby viewing edge costs as log probabilities we can cast the traveling salesman problem as one of optimizing p that is of finding the best source word order in model 1 decoding42 reduction 2 the minimum set cover problem asks given a collection c of subsets of finite set s and integer n does c contain a cover for s of size 0 must existwe know that e must contain n or fewer wordsotherwise p 0 by the e tablefurthermore the s table tells us that every word fi is covered by at least one english word in e through the onetoone correspondence between elements of e and c we produce a set cover of size n for s likewise if m1decide returns no then all decodings have p p 0because there are no zeroes in the source table b every e has p 0therefore either the length of e exceeds n or some fi is left uncovered by the words in e because source words cover target words in exactly the same fashion as elements of c cover s we conclude that there is no set cover of size n for s figure 
4 illustrates the intuitive correspondence between source word selection and minimum set coveringthe two proofs point up separate factors in mt decoding complexityone is wordorder selectionbut even if any word order will do there is still the problem of picking a concise decoding in the face of overlapping bilingual dictionary entriesthe former is more closely tied to the source model and the latter to the channel model though the complexity arises from the interaction of the twowe should note that model 1 is an intentionally simple translation model one whose primary purpose in machine translation has been to allow bootstrapping into more complex translation models it is easy to show that the intractability results also apply to stronger quotfertilitydistortionquot models we assign zero probability to fertilities other than 1 and we set up uniform distortion tablessimple translation models like model 1 find more direct use in other applications so their computational properties are of wider interestthe proofs we presented are based on a worstcase analysisreal s e and b tables may have properties that permit faster optimal decoding than the artificial tables constructed aboveit is also possible to devise approximation algorithms like those devised for other npcomplete problemsto the extent that word ordering is like solving the traveling salesman problem it is encouraging substantial progress continues to be made on traveling salesman algorithmsfor example it is often possible to get within two percent of the optimal tour in practice and some researchers have demonstrated an optimal tour of over 13000 yous citiesso far statistical translation research has either opted for heuristic beamsearch algorithms or different channel modelsfor example some researchers avoid bag generation by preprocessing bilingual texts to remove wordorder differences while others adopt channels that eliminate syntactically unlikely alignmentsfinally expensive decoding also suggests expensive training from unannotated texts which presents a challenging bottleneck for extending statistical machine translation to language pairs and domains where large bilingual corpora do not exist
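As a companion to the training discussion, the sketch below shows EM estimation of a Model 1 style word-substitution table s(f|e) in Python, using the standard per-position reformulation rather than the naive enumeration of alignments; the uniform string-length factor cancels in the alignment posterior, which is why it does not appear. The inclusion of a NULL source word follows the usual Model 1 convention, and the toy sentence pairs, iteration count, and function name are invented for illustration.

from collections import defaultdict

def train_model1(bitext, iterations=10):
    # bitext is a list of (english_words, french_words) pairs.
    f_vocab = {f for _, fs in bitext for f in fs}
    s = defaultdict(lambda: 1.0 / len(f_vocab))        # uniform initial s(f | e)
    for _ in range(iterations):
        count = defaultdict(float)                     # expected counts c(f, e)
        total = defaultdict(float)                     # expected counts c(e)
        for es, fs in bitext:
            es = ['NULL'] + es                         # e_0: the empty/NULL source word
            for f in fs:
                # The uniform length factor cancels in the posterior,
                # so only s(f | e) matters for the expected counts.
                norm = sum(s[(f, e)] for e in es)
                for e in es:
                    p = s[(f, e)] / norm               # posterior that f aligns to e
                    count[(f, e)] += p
                    total[e] += p
        for (f, e), c in count.items():                # M step: renormalize per English word
            s[(f, e)] = c / total[e]
    return s

# Toy parallel fragments, invented for illustration.
bitext = [
    (['the', 'house'], ['la', 'maison']),
    (['the', 'book'],  ['le', 'livre']),
    (['a', 'house'],   ['une', 'maison']),
]
s = train_model1(bitext)
print(round(s[('maison', 'house')], 3))                # maison should dominate s(. | house)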
J99-4005
Decoding complexity in word-replacement translation models. Statistical machine translation is a relatively new approach to the long-standing problem of translating human languages by computer. Current statistical techniques uncover translation rules from bilingual training texts and use those rules to translate new texts. The general architecture is the source-channel model: an English string is statistically generated, then statistically transformed into French; in order to translate a French string, we look for the most likely English source. We show that for the simplest form of statistical models this problem is NP-complete, i.e., probably exponential in the length of the observed sentence. We trace this complexity to factors not present in other decoding problems. We prove that the exact decoding problem is NP-hard when the language model is a bigram model. We show that the decoding problem for SMT, as well as some bilingual tiling problems, is NP-complete, so no efficient algorithm exists in the general case.
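To make the word-ordering half of the intractability argument tangible, the sketch below scores orderings of a fixed bag of English words under a toy bigram source model and picks the best by brute force, enumerating all m! orderings, which is exactly the traveling-salesman-like blow-up the reduction points to. The bigram probabilities, boundary tokens, floor score for unseen bigrams, and the example sentence are invented; this is an illustration of why source word order selection is hard, not the decoder of any particular system.

from itertools import permutations
from math import log

# Toy bigram source model b(y | x); values are invented. <s> and </s> stand in
# for the boundary word.
logb = {
    ('<s>', 'my'): log(0.4), ('my', 'birthday'): log(0.5),
    ('birthday', 'falls'): log(0.6), ('falls', 'on'): log(0.7),
    ('on', 'thursday'): log(0.3), ('thursday', '</s>'): log(0.5),
}
FLOOR = log(1e-6)                                      # score for unseen bigrams

def order_score(words):
    seq = ['<s>'] + list(words) + ['</s>']
    return sum(logb.get(pair, FLOOR) for pair in zip(seq, seq[1:]))

bag = ['birthday', 'falls', 'my', 'on', 'thursday']    # a bag of source words to arrange
best = max(permutations(bag), key=order_score)         # enumerates m! orderings, like TSP tours
print(' '.join(best))                                  # my birthday falls on thursday

In the reduction, the edge costs of the salesman graph play the role these bigram log probabilities play here.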
the penn discourse treebank 20 this paper deals with the relationship between weblog content and time with the proposed temporal mutual information we analyze the collocations in time dimension and the interesting collocations related to special events the temporal mutual information is employed to observe the strength of termtoterm associations over time an event detection algorithm identifies the collocations that may cause an event in a specific timestamp an event summarization algorithm retrieves a set of collocations which describe an event we compare our approach with the approach without considering the time interval the experimental results demonstrate that the temporal collocations capture the real world semantics and real world events over time 1 2 compared with traditional media such as online news and enterprise websites weblogs have several unique characteristics eg containing abundant life experiences and public opinions toward different topics highly sensitive to the events occurring in the real world and associated with the personal information of bloggerssome works have been proposed to leverage these characteristics eg the study of the relationship between the content and bloggersprofiles and content and real events in this paper we will use temporal collocation to model the termtoterm association over timein the past some useful collocation models have been proposed such as mean and variance hypothesis test mutual information etc some works analyze the weblogs from the aspect of time like the dynamics of weblogs in time and location the weblog posting behavior the topic extraction etc the impacts of events on social media are also discussed eg the change of weblogs after london attack the relationship between the warblog and weblogs etc this paper is organized as followssection 2 defines temporal collocation to model the strength of termtoterm associations over timesection 3 introduces an event detection algorithm to detect the events in weblogs and an event summarization algorithm to extract the description of an event in a specific time with temporal collocationssection 4 shows and discusses the experimental resultssection 5 concludes the remarkstemporal collocations we derive the temporal collocations from shannons mutual information which is defined as follows definition 1 the mutual information of two terms x and y is defined as is the cooccurrence probability of x and y and p and p denote the occurrence probability of x and y respectivelyfollowing the definition of mutual information we derive the temporal mutual information modeling the termtoterm association over time and the definition is given as followsdefinition 2 given a timestamp t and a pair of terms x and y the temporal mutual information of x and y in t is defined as is the probability of cooccurrence of terms x and y in timestamp t p and p denote the probability of occurrences of x and y in timestamp t respectivelyto measure the change of mutual information in time dimension we define the change of temporal mutual information as followsdefinition 3 given time interval t1 t2 the change of temporal mutual information is defined as 12 12 21 is the change of temporal mutual information of terms x and y in time interval t1 t2 i and i are the temporal mutual information in time t1 and t2 respectively3event detectionevent detection aims to identify the collocations resulting in events and then retrieve the description of eventsfigure 1 sketches an example of event detectionthe weblog is parsed into a set of 
collocationsall collocations are processed and monitored to identify the plausible eventshere a regular event mothers dayand an irregular event typhoon chanchuare detectedthe event typhoon chanchuis described by the words figure 1 an example of event detection typhoon chanchu 2k eye pathand chinaphillippinethe architecture of an event detection system includes a preprocessing phase for parsing the weblogs and retrieving the collocations an event detection phase detecting the unusual peak of the change of temporal mutual information and identifying the set of collocations which may result in an event in a specific time duration and an event summarization phase extracting the collocations related to the seed collocations found in a specific time durationthe most important part in the preprocessing phase is collocation extractionwe retrieve the collocations from the sentences in blog poststhe candidates are two terms within a window sizedue to the size of candidates we have to identify the set of tracking terms for further analysisin this paper those candidates containing stopwords or with low change of temporal mutual information are removedin the event detection phase we detect events by using the peak of temporal mutual information in time dimensionhowever the regular pattern of temporal mutual information may cause problems to our detectiontherefore we remove the regular pattern by seasonal index and then detect the plausible events by measuring the unusual peak of temporal mutual informationif a topic is suddenly discussed the relationship between the related terms will become highertwo alternatives including change of temporal mutual information and relative change of temporal mutual information are employed to detect unusual eventsgiven timestamps t1 and t2 with temporal mutual information mi1 and mi2 the change of temporal mutual information is calculated by the relative change of temporal mutual information is calculated by mi1for each plausible event there is a seed collocation eg typhoon chanchuin the event description retrieval phase we try to select the collocations with the highest mutual information with the word w in a seed collocationthey will form a collocation network for the eventinitially the seed collocation is placed into the networkwhen a new collocation is added we compute the mutual information of the multiword collocations by the following formula where n is the number of collocations in the network up to now n imininformatiomutualmultiwo if the multiword mutual information is lower than a threshold the algorithm stops and returns the words in the collocation network as a description of the eventfigure 2 sketches an examplethe collocations chanchus path typhoon eye and chanchu affectsare added into the network in sequence based on their miwe have two alternatives to add the collocations to the event descriptionthe first method adds the collocations which have the highest mutual information as discussed abovein contrast the second method adds the collocations which have the highest product of mutual information and change of temporal mutual informationfigure 2 an example of collocation network 441experiments and discussions temporal mutual information versus mutual information in the experiments we adopt the icwsm weblog data set this data set collected from may 1 2006 through may 20 2006 is about 20 gbwithout loss of generality we use the english weblog of 2734518 articles for analysisto evaluate the effectiveness of time information we made the experiments based on 
mutual information and temporal mutual information the former called the incremental approach measures the mutual information at each time point based on all available temporal information at that timethe latter called the intervalbased approach considers the temporal mutual information in different time stampsfigures 3 and 4 show the comparisons between intervalbased approach and incremental approach respectively in the event of da vinci codewe find that tom hankshas higher change of temporal mutual information compared to da vinci codecompared to the incremental approach in figure 4 the intervalbased approach can reflect the exact release date of da vinci coderd i 1 42evaluation of event detection we consider the events of may 2006 listed in wikipedia1 as gold standardon the one hand the events posted in wikipedia are not always complete so that we adopt recall rate as our evaluation metricon the other hand the events specified in wikipedia are not always discussed in weblogsthus we search the contents of blog post to verify if the events were touched on in our blog corpusbefore evaluation we remove the events listed in wikipedia but not referenced in the weblogsfigure 3 intervalbased approach in da vinci code figure 4 incremental approach in da vinci code gure 5 sketches the idea of evaluationthe left side of t s figure shows the collocations detected by our event dete tion system and the right side shows the events liste in wikipediaafter matching these two lists we can find that the first three listed events were correctly identified by our systemonly the event nepal civil warwas listed but not foundthus the recall rate is 75 in this casefigure 5 evaluation of event detection phase as discussed in section 3 we adopt change of temporal mutual information and relative change of temporal mutual information to detect the peakin figure 6 we compare the two methods to detect the events in weblogsthe relative change of temporal mutual information achieves better performance than the change of temporal mutual information1 httpenwikipediaorgwikimay_2006 table 1 and table 2 list the top 20 collocations based on these two approaches respectivelythe results of the first approach show that some collocations are related to the feelings such as fell leftand time such as saturday nightin contrast the results of the second approach show more interesting collocations related to the news events at that time such as terrorists zacarias moussaouiand paramod mahajanthese two persons were killed in may 3besides geena davisgot the golden award in may 3that explains why the collocations detected by relative change of temporal mutual information are better than those detected by change of temporal mutual information20 15 10 5 0 5 10 1 3 5 7 9 11 13 15 17 19 time m ut ua l i nf or m at io n davinci tom hanks figure 6 performance of event detection phase 15 10 5 0 5 10 1 3 5 7 9 11 13 15 17 19 time m ut ua l i nf or m at io n davinci tom hanks collocations cmi collocations cmi may 03 927608 current music 184267 illegal immigrants 583317 hate studying 172232 feel left 541157 stephen colbert 170959 saturday night 415529 thursday night 167878 past weekend 240532 cannot believe 153333 white house 220889 feel asleep 142818 red sox 220843 ice cream 137323 album tool 212030 oh god 136952 sunday morning 200678 illegalimmigration 136812 1656 f cmi 3250 3163 2909 2845 2834 2813sunday night 199237 pretty cool 13 table 1 top 20 collocations with highest change o temporal mutual information collocations cmi collocations 
casinos online 61836 diet sodas zacarias moussaoui 15468 ving rhames tsunami warning 10793 stock picks conspirator zacarias 7162 happy hump artist formerly 5704 wong kan federal jury 4178 sixapartcom movabletype wed 3 3920 aaron echolls 2748 pramod mahajan 3541 phnom penh 2578 bbc version 3521 livejournal sixapartcom 2383 fi hi c dgeena davis 3364 george yeo 2034 table 2 top 20 collocations with highest relative change of mutual information 43evaluation of event summarizationas discussed in section 3 we have two methods to include collocations to the event descriptionmethod 1 employs the highest mutual information and method 2 utilizes the highest product of mutual information and change of temporal mutual informationfigure 7 shows the performance of method 1 and method 2we can see that the performance of method 2 is better than that of method 1 in most casesfigure 7 overall performance of event summarization the results of event summarization by method 2 are shown in figure 8typhoon chanchu appeared in the pacific ocean on may 10 2006 passed through philippine and china and resulted in disasters in these areas on may 13 and 18 2006the appearance of the typhoon chanchu cannot be found from the events listed in wikipedia on may 10however we can identify the appearance of typhoon chanchu from the description of the typhoon appearance such as typhoon namedand typhoon eyein addition the typhoon chanchus path can also be inferred from the retrieved collocations such as philippine chinaand near chinathe response of bloggers such as unexpected typhoonand 8 typhoonsis also extractedfigure 8 event summarization for typhoon chanchu 5concluding remarksthis paper introduces temporal mutual information to capture termterm association over time in weblogsthe extracted collocation with unusual peak which is in terms of relative change of temporal mutual information is selected to represent an eventwe collect those collocations with the highest product of mutual information and change of temporal mutual information to summarize the specific eventthe experiments on icwsm weblog data set and evaluation with wikipedia event lists at the same period as weblogs demonstrate the feasibility of the proposed temporal collocation model and event detection algorithmscurrently we do not consider user groups and locationsthis methodology will be extended to model the collocations over time and location and the relationship between the userpreferred usage of collocations and the profile of usersacknowledgments research of this paper was partially supported by national science council taiwan and excellent research projects of national taiwan university
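To make the event-detection computation described above concrete, here is a minimal Python sketch, not the authors' implementation: it assumes the temporal mutual information of a collocation is a pointwise MI computed from counts within each time interval, and it flags a timestamp as a plausible event when the change (mi2 - mi1) or the relative change ((mi2 - mi1)/mi1) of temporal MI between consecutive timestamps exceeds a threshold, following the definitions in the event detection phase. The seasonal-index step that removes regular patterns is omitted, and all function names are illustrative.

```python
import math

def temporal_pmi(pair_count, x_count, y_count, n_pairs):
    """Pointwise MI of a collocation inside one time interval (one plausible
    reading of the paper's temporal mutual information)."""
    if min(pair_count, x_count, y_count, n_pairs) == 0:
        return 0.0
    p_xy = pair_count / n_pairs
    p_x = x_count / n_pairs
    p_y = y_count / n_pairs
    return math.log2(p_xy / (p_x * p_y))

def detect_events(mi_series, threshold, use_relative=False):
    """Flag timestamps whose change (or relative change) of temporal MI
    between consecutive intervals exceeds the threshold.
    mi_series maps timestamp -> temporal MI of one collocation."""
    events = []
    times = sorted(mi_series)
    for t1, t2 in zip(times, times[1:]):
        mi1, mi2 = mi_series[t1], mi_series[t2]
        change = mi2 - mi1
        if use_relative:
            change = change / mi1 if mi1 else float("inf")
        if change > threshold:
            events.append((t2, change))
    return events

# toy series for one collocation (e.g. "typhoon chanchu"), one value per day
series = {1: 0.4, 2: 0.5, 3: 2.9, 4: 3.1}
print(detect_events(series, threshold=1.0))                      # day 3 flagged
print(detect_events(series, threshold=1.0, use_relative=True))   # day 3 flagged
```

In practice the per-interval counts would come from the blog corpus itself, counting how often the two terms of a collocation co-occur within the chosen window size during each day of the collection period.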
L08-1093
the penn discourse treebank 2.0. we present the second version of the penn discourse treebank (pdtb 2.0), describing its lexically grounded annotations of discourse relations and their two abstract object arguments over the 1-million-word wall street journal corpus. we describe all aspects of the annotation, including the argument structure of discourse relations, the sense annotation of the relations, and the attribution of discourse relations and each of their arguments. we list the differences between pdtb 1.0 and pdtb 2.0. we present representative statistics for several aspects of the annotation in the corpus. we present the penn discourse treebank, such a corpus, which provides a discourse-level annotation on top of the penn treebank, following a predicate-argument approach.
transformation based learning in the fast lane transformationbased learning has been successfully employed to solve many natural language processing problems it achieves stateoftheart performance on many natural language processing tasks and does not overtrain easily however it does have a serious drawback the training time is often intorelably long especially on the large corpora which are often used in nlp in this paper we present a novel and realistic method for speeding up the training time of a transformationbased learner without sacrificing performance the paper compares and contrasts the training time needed and performance achieved by our modified learner with two other systems a standard transformationbased learner and the ica system the results of these experiments show that our system is able to achieve a significant improvement in training time while still achieving the same performance as a standard transformationbased learner this is a valuable contribution to systems and algorithms which utilize transformationbased learning at any part of the execution much research in natural language processing has gone into the development of rulebased machine learning algorithmsthese algorithms are attractive because they often capture the linguistic features of a corpus in a small and concise set of rulestransformationbased learning is one of the most successful rulebased machine learning algorithmsit is a flexible method which is easily extended to various tasks and domains and it has been applied to a wide variety of nlp tasks including part of speech tagging noun phrase chunking parsing phrase chunking spelling correction prepositional phrase attachment dialog act tagging segmentation and message understanding furthermore transformationbased learning achieves stateoftheart performance on several tasks and is fairly resistant to overtraining despite its attractive features as a machine learning algorithm tbl does have a serious drawback in its lengthy training time especially on the largersized corpora often used in nlp tasksfor example a wellimplemented transformationbased partofspeech tagger will typically take over 38 hours to finish training on a 1 million word corpusthis disadvantage is further exacerbated when the transformationbased learner is used as the base learner in learning algorithms such as boosting or active learning both of which require multiple iterations of estimation and application of the base learnerin this paper we present a novel method which enables a transformationbased learner to reduce its training time dramatically while still retaining all of its learning powerin addition we will show that our method scales better with training data sizethe central idea of transformationbased learning is to learn an ordered list of rules which progressively improve upon the current state of the training setan initial assignment is made based on simple statistics and then rules are greedily learned to correct the mistakes until no net improvement can be madethe following definitions and notations will be used throughout the paper where since we are not interested in rules that have a negative objective function value only the rules that have a positive good need be examinedthis leads to the following approach the system thus learns a list of rules in a greedy fashion according to the objective functionwhen no rule that improves the current state of the training set beyond a preset threshold can be found the training phase endsduring the application phase the evaluation 
set is initialized with the initial class assignmentthe rules are then applied sequentially to the evaluation set in the order they were learnedthe final classification is the one attained when all rules have been appliedas was described in the introductory section the long training time of tbl poses a serious problemvarious methods have been investigated towards ameliorating this problem and the following subsections detail two of the approachesone of the most timeconsuming steps in transformationbased learning is the updating stepthe iterative nature of the algorithm requires that each newly selected rule be applied to the corpus and the current state of the corpus updated before the next rule is learnedramshaw marcus attempted to reduce the training time of the algorithm by making the update process more efficienttheir method requires each rule to store a list of pointers to samples that it applies to and for each sample to keep a list of pointers to rules that apply to itgiven these two sets of lists the system can then easily these two processes are performed multiple times during the update process and the modification results in a significant reduction in running timethe disadvantage of this method consists in the system having an unrealistically high memory requirementfor example a transformationbased text chunker training upon a modestlysized corpus of 200000 words has approximately 2 million rules active at each iterationthe additional memory space required to store the lists of pointers associated with these rules is about 450 mb which is a rather large requirement to add to a systeml the ica system aims to reduce the training time by introducing independence assumptions on the training samples that dramatically reduce the training time with the possible downside of sacrificing performanceto achieve the speedup the ica system disallows any interaction between the learned rules by enforcing the following two assumptions we need to note that the 200kword corpus used in this experiment is considered small by nlp standardsmany of the available corpora contain over 1 million wordsas the size of the corpus increases so does the number of rules and the additional memory space required state change per samplein other words at most one rule is allowed to apply to each samplethis mode of application is similar to that of a decision list where an sample is modified by the first rule that applies to it and not modified again thereafterin general this assumption will hold for problems which have high initial accuracy and where state changes are infrequentthe ica system was designed and tested on the task of partofspeech tagging achieving an impressive reduction in training time while suffering only a small decrease in accuracythe experiments presented in section 4 include ica in the training time and performance comparisonssamuel proposed a monte carlo approach to transformationbased learning in which only a fraction of the possible rules are randomly selected for estimation at each iterationthe µtbl system described in lager attempts to cut down on training time with a more efficient prolog implementation and an implementation of quotlazyquot learningthe application of a transformationbased learning can be considerably spedup if the rules are compiled in a finitestate transducer as described in roche and schabes the approach presented here builds on the same foundation as the one in instead of regenerating the rules each time they are stored into memory together with the two values good and 
bad the following notations will be used throughout this section t and t tsg the samples on which the rule applies and changes them to the correct classification therefore good jgj t and cs t sg the samples on which the rule applies and changes the classification from correct to incorrect similarly bad jbjgiven a newly learned rule b that is to be applied to s the goal is to identify the rules r for which at least one of the sets g b is modified by the application of rule bobviously if both sets are not modified when applying rule b then the value of the objective function for rule r remains unchanged2the algorithm was implemented by the the authors following the description in hepple the presentation is complicated by the fact that in many nlp tasks the samples are not independentfor instance in pos tagging a sample is dependent on the classification of the preceding and succeeding 2 samples let v denote the quotvicinityquot of a sample the set of samples on whose classification the sample s might depend on if samples are independent then v fsglet s be a sample on which the best rule b applies 6 c swe need to identify the rules r that are influenced by the change s b let r be such a rule f needs to be updated if and only if there exists at least one sample s such that each of the above conditions corresponds to a specific update of the good or bad countswe will discuss how rules which should get their good or bad counts decremented and can be generated the other two being derived in a very similar fashionthe key observation behind the proposed algorithm is when investigating the effects of applying the rule b to sample s only samples s in the set v need to be checkedany sample s that is not in the set sib changes s can be ignored since s blet s 2 v be a sample in the vicinity of s there are 2 cases to be examined one in which b applies to s and one in which b does not case i c c we note that the condition and the formula s 2 b and b 2 b is equivalent to and b please refer to florian and ngai these formulae offer us a method of generating the rules r which are influenced by the modification s b if p false then decrease good where r is the rule created with predicate p st target t s if p false then for all the rules r whose predicate is p3 and tr c s decrease bad the algorithm for generating the rules r that need their good counts or bad counts increased can be obtained from the formulae by switching the states s and b and making sure to add all the new possible rules that might be generated case ii c s c b in this case the formula is transformed into the case of however is much simplerit is easy to notice that c s c b and s e b implies that b e b indeed a necessary condition for a sample s to be in a set c b tr in formula and removing the test altogether for case of the formulae used to generate rules r that might have their counts increased and are obtained in the same fashion as in case iat every point in the algorithm we assumed that all the rules that have at least some positive outcome 0 are stored and their score computedtherefore at the beginning of the algorithm all the rules that correct at least one wrong classification need to be generatedthe bad counts for these rules are then computed by generation as well in every position that has the correct classification the rules that change the classification are generated as in case 4 and their bad counts are incrementedthe entire fasttbl algorithm is presented in figure 1note that when the bad counts are computed only rules that 
already have positive good counts are selected for evaluationthis prevents the generation of useless rules and saves computational timethe number of examined rules is kept close to the minimumbecause of the way the rules are generated most of them need to modify either one of their countssome additional space is necessary for representing the rules in a predicate hash in order to for all samples s that satisfy c s t s generate all rules r that correct the classification of s increase good for all samples s that satisfy c s t s generate all predicates p st p true for each rule r st pr p and tr c s increase bad 1 find the rule b argmaxrer f have a straightforward access to all rules that have a given predicate this amount is considerably smaller than the one used to represent the rulesfor example in the case of text chunking task described in section 4 only approximately 30mb additional memory is required while the approach of ramshaw and marcus would require approximately 450mbas mentioned before the original algorithm has a number of deficiencies that because it to run slowlyamong them is the drastic slowdown in rule learning as the scores of the rules decreasewhen the best rule has a high score which places it outside the tail of the score distribution the rules in the tail will be skipped when the bad counts are calculated since their good counts are small enough to cause them to be discardedhowever when the best rule is in the tail many other rules with similar scores can no longer be discarded and their bad counts need to be computed leading to a progressively longer running time per iterationour algorithm does not suffer from the same problem because the counts are updated at each iteration and only for the samples that were affected by the application of the latest rule learnedsince the number of affected samples decreases as learning progresses our algorithm actually speeds up considerably towards the end of the training phaseconsidering that the number of lowscore rules is a considerably higher than the number of highscore rules this leads to a dramatic reduction in the overall running timethis has repercussions on the scalability of the algorithm relative to training data sizesince enlarging the training data size results in a longer score distribution tail our algorithm is expected to achieve an even more substantial relative running time improvement over the original algorithmsection 4 presents experimental results that validate the superior scalability of the fasttbl algorithmsince the goal of this paper is to compare and contrast system training time and performance extra measures were taken to ensure fairness in the comparisonsto minimize implementation differences all the code was written in c and classes were shared among the systems whenever possiblefor each task the same training set was provided to each system and the set of possible rule templates was kept the samefurthermore extra care was taken to run all comparable experiments on the same machine and under the same memory and processor load conditionsto provide a broad comparison between the systems three nlp tasks with different properties were chosen as the experimental domainsthe first task partofspeech tagging is one where the commitment assumption seems intuitively valid and the samples are not independentthe second task prepositional phrase attachment has examples which are independent from each otherthe last task is text chunking where both independence and commitment assumptions do not seem to be valida more 
detailed description of each task data and the system parameters are presented in the following subsectionsfour algorithms are compared during the following experiments the goal of this task is to assign to each word in the given sentence a tag corresponding to its part of speecha multitude of approaches have been proposed to solve this problem including transformationbased learning maximum entropy models hidden markov models and memorybased approachesthe data used in the experiment was selected from the penn treebank wall street journal and is the same used by brill and wu the training set contained approximately 1m words and the test set approximately 200k wordstable 1 presents the results of the experiment4all the algorithms were trained until a rule with a score of 2 was reachedthe fasttbl algorithm performs very similarly to the regular tbl while running in an order of magnitude fasterthe two assumptions made by the ica algorithm result in considerably less training time but the performance is also degraded also present in table 1 are the results of training brill tagger on the same datathe results of this tagger are presented to provide a performance comparison with a widely used taggeralso worth mentioning is that the tagger achieved an accuracy of 9676 when trained on the entire data5 a maximum entropy tagger achieves 9683 accuracy with the same training datatest dataprepositional phrase attachment is the task of deciding the point of attachment for a given prepositional phrase as an example consider the following two sentences in sentence 1 the pp quotwith soap and waterquot describes the act of washing the shirtin sentence 2 however the pp quotwith pocketsquot is a description for the shirt that was washedmost previous work has concentrated on situations which are of the form vp np1 p np2the problem is cast as a classification task and the sentence is reduced to a 4tuple containing the preposition and the noninflected base forms of the head words of the verb phrase vp and the two noun phrases np1 and np2for example the tuple corresponding to the two above sentences would be many approaches to solving this this problem have been proposed most of them using standard machine learning techniques including transformationbased learning decision trees maximum entropy and backoff estimationthe transformationbased learning system was originally developed by brill and resnik the data used in the experiment consists of approximately 13000 quadruples extracted from penn treebank parsesthe set is split into a test set of 500 samples and a training set of 12500 samplesthe templates used to generate rules are similar to the ones used by brill and resnik and some include wordnet featuresall the systems were trained until no more rules could be learnedtable 2 shows the results of the experimentsagain the ica algorithm learns the rules very fast but has a slightly lower performance than the other two tbl systemssince the samples are inherently independent there is no performance loss because of the independence assumption therefore the performance penalty has to come from the commitment assumptionthe fast tbl algorithm runs again in a order of magnitude faster than the original tbl while preserving the performance the time ratio is only 13 in this case due to the small training size text chunking is a subproblem of syntactic parsing or sentence diagrammingsyntactic parsing attempts to construct a parse tree from a sentence by identifying all phrasal constituents and their attachment pointstext 
chunking simplifies the task by dividing the sentence into nonoverlapping phrases where each word belongs to the lowest phrasal constituent that dominates itthe following example shows a sentence with text chunks and partofspeech tags the problem can be transformed into a classification taskfollowing ramshaw marcus work in base noun phrase chunking each word is assigned a chunk tag corresponding to the phrase to which it belongs the following table shows the above sentence with the assigned chunk tags and the partofspeech tags were generated by brill tagger all the systems are trained to completion table 3 shows the results of the text chunking experimentsthe performance of the fasttbl algorithm is the same as of regular tbl and runs in an order of magnitude fasterthe ica algorithm again runs considerably faster but at a cost of a significant performance hitthere are at least 2 reasons that contribute to this behavior 1the initial state has a lower performance than the one in tagging therefore the independence assumption might not hold25 of the samples are changed by at least one rule as opposed to pos tagging where only 25 of the samples are changed by a rule2the commitment assumption might also not holdfor this task 20 of the samples that were modified by a rule are also changed again by another onea question usually asked about a machine learning algorithm is how well it adapts to larger amounts of training datasince the performance of the fast tbl algorithm is identical to that of regular tbl the issue of interest is the dependency between the running time of the algorithm and the amount of training datathe experiment was performed with the partofspeech data setthe four algorithms were trained on training sets of different sizes training times were recorded and averaged over 4 trialsthe results are presented in figure 2it is obvious that the fast tbl algorithm is much more scalable than the regular tbl displaying a linear dependency on the amount of training data while the regular tbl has an almost quadratic dependencythe explanation for this behavior has been given in section 33figure 2 shows the time spent at each iteration versus the iteration number for the original tbl and fast tbl systemsit can be observed that the time taken per iteration increases dramatically with the iteration number for the regular tbl while for the fasttbl the situation is reversedthe consequence is that once a certain threshold has been reached the incremental time needed to train the fasttbl system to completion is negligiblewe have presented in this paper a new and improved method of computing the objective function for transformationbased learningthis method allows a transformationbased algorithm to train an observed 13 to 139 times faster than the original one while preserving the final performance of the algorithmthe method was tested in three different domains each one having different characteristics partofspeech tagging prepositional phrase attachment and text chunkingthe results obtained indicate that the algorithmic improvement generated by our method is not linked to a particular task but extends to any classification task where transformationbased learning can be appliedfurthermore our algorithm scales better with training data size therefore the relative speedup obtained will increase when more samples are available for training making the procedure a good candidate for large corpora tasksthe increased speed of the fast tbl algorithm also enables its usage in higher level machine learning 
algorithms such as adaptive boosting model combination and active learningrecent work has shown how a tbl framework can be adapted to generate confidences on the output and our algorithm is compatible with that frameworkthe stability resistance to overtraining the existence of probability estimates and now reasonable speed make tbl an excellent candidate for solving classification tasks in generalthe authors would like to thank david yarowsky for his advice and guidance eric brill and john c henderson for discussions on the initial ideas of the material presented in the paper and the anonymous reviewers for useful suggestions observations and connections with other published materialthe work presented here was supported by nsf grants iri9502312 iri9618874 and iis9985033
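As a companion to the learning loop described above, here is a minimal Python sketch of plain transformation-based learning, i.e., the slow baseline this paper speeds up, not the fastTBL update itself: every candidate rule is rescored from scratch at each iteration using f(r) = good(r) - bad(r), the best rule is applied, and learning stops when no rule reaches the score threshold. The Rule and sample representations are assumptions made for illustration; the paper's contribution is precisely to avoid this full rescoring by updating good and bad counts only for samples in the vicinity of the latest change.

```python
from collections import namedtuple

# A rule pairs a predicate over samples with the label it assigns when it fires.
# Samples are dicts holding a mutable "current" label and a gold "truth" label.
Rule = namedtuple("Rule", ["predicate", "target"])

def score_rules(samples, rules):
    """f(r) = good(r) - bad(r): corrections made minus errors introduced."""
    scores = {}
    for r in rules:
        good = bad = 0
        for s in samples:
            if r.predicate(s) and r.target != s["current"]:
                if r.target == s["truth"]:
                    good += 1            # wrong -> right
                elif s["current"] == s["truth"]:
                    bad += 1             # right -> wrong
        scores[r] = good - bad
    return scores

def train_tbl(samples, rules, min_score=1):
    """Greedy TBL loop: rescore every rule, apply the best one, repeat."""
    learned = []
    while rules:
        scores = score_rules(samples, rules)
        best = max(scores, key=scores.get)
        if scores[best] < min_score:
            break
        for s in samples:
            if best.predicate(s):
                s["current"] = best.target
        learned.append(best)
    return learned

# toy usage: retag "can" as a modal when the following word is "run"
samples = [{"word": "can", "next": "run",    "current": "NN", "truth": "MD"},
           {"word": "can", "next": "opener", "current": "NN", "truth": "NN"}]
rules = [Rule(lambda s: s["next"] == "run", "MD")]
print([r.target for r in train_tbl(samples, rules)])   # ['MD']
```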
N01-1006
transformation based learning in the fast lane. transformation-based learning has been successfully employed to solve many natural language processing problems. it achieves state-of-the-art performance on many natural language processing tasks and does not overtrain easily. however, it has a serious drawback: the training time is often intolerably long, especially on the large corpora that are often used in nlp. in this paper we present a novel and realistic method for speeding up the training time of a transformation-based learner without sacrificing performance. the paper compares and contrasts the training time needed and the performance achieved by our modified learner with two other systems: a standard transformation-based learner and the ica system. the results of these experiments show that our system achieves a significant improvement in training time while still matching the performance of a standard transformation-based learner. this is a valuable contribution to systems and algorithms that use transformation-based learning at any part of their execution. we propose the fntbl toolkit, which implements several optimizations in rule learning to drastically speed up the time needed for training.
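The application phase described in the paper above, applying the learned rules sequentially, in the order they were learned, to an evaluation set that starts from the initial class assignment, can be sketched as follows. Note that, unlike the ICA variant discussed in the paper, nothing here stops a later rule from re-modifying a sample (no commitment assumption). The dictionary-based samples and (predicate, label) rule encoding are illustrative only.

```python
def apply_transformations(samples, learned_rules):
    """Application phase: start from the initial class assignment and apply
    every learned rule, in the order it was learned, to the whole set."""
    for applies, target in learned_rules:        # each rule: (predicate, new label)
        for s in samples:
            if applies(s):
                s["current"] = target
    return samples

# toy usage: the previous word decides how "bank" is finally tagged
samples = [{"word": "bank", "prev": "to",  "current": "NN"},
           {"word": "bank", "prev": "the", "current": "NN"}]
rules = [(lambda s: s["prev"] == "to", "VB")]
print([s["current"] for s in apply_transformations(samples, rules)])   # ['VB', 'NN']
```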
text and knowledge mining for coreference resolution and and and and and then cast_in_chain note that the performance of the mutual bootstrapping algorithm can deteriorate rapidly if erroneous rules are entered to make the algorithm more robust we use the same solution by introducing a second level of bootrapping outer level called most reliable based on semantic consistency and discard all the others before restarting the mutual bootstrapping loop again in our experiments we have retained only those rules for which the new performance given by the fmeasure was larger than the median of the past four loops the formula for the van rijsbergen fmeasure combines precision the recall 6 evaluation to measure the performance of cocktail we have trained the system on 30 muc6 and muc7 texts and tested it on the remaining 30 documents computed the the fperformance measures have been obtained automatically using the muc6 coreference scoring program table 4 lists the results precision recall fmeasure rules 871 617 723 rules combined 913 586 718 bootstrapping 920 739 819 table 4 bootstrapping effect on cocktail table 4 shows that the seed set of rules had good precision but poor recall by combining the rules with the entropybased measure we obtained further enhancement in precision but the recall dropped the application of the bootstrapping methodology determined an enhancement of recall and thus of the fmeasure in the future we intend to compare the overall effect of rules that recognize referential expressions on the overall performance of the system 7 conclusion we have introduced a new datadriven method for corefresolution implemented in the system unlike other knowledgepoor methods for corefresolution cockits most performant rules through massive data generated by its component furthermore by using an entropybased method we determine the best partition of corefering expressions chains rules are learned by applying a bootstrapping methodology that uncovers additional semantic consistency data references breck baldwin 1997 cogniac high precision coreference with limited knowledge and linguistic resources reference resolution is an important task for discourse or dialogue processing systems since identity relations between anaphoric textual entities and their antecedents is a prerequisite to the understanding of text or conversationtraditionally coreference resolution has been performed by combining linguistic and cognitive knowledge of languagelinguistic information is provided mostly by syntactic and semantic modeling of language whereas cognitive information is incorporated in computational models of discoursecomputational methods based on linguistic and congitive information were presented in and the acquisition of extensive linguistic and discourse knowledge necessary for resolving coreference is time consuming difficult and errorproneneverthless recent results show that knowledgepoor empirical methods perform with amazing accuracy on certain forms of coreference for example cogniac a system based on just seven ordered heuristics generates highprecision resolution for some cases of pronominal referencein our work we approached the coreference resolution problem by trying to determine how much more knowledge is required to supplement the abovementioned knowledgepoor methods and how to derive that knowledgeto this end we analyze the data to find what types of anaphorantecedent pairs are most popular in realworld texts devise knowledgeminimalist rules for handling the majority of those popular 
cases and discover what supplementary knowledge is needed for remaining more difficult casesto analyze coreference data we use a corpus of annotated textsto devise minimalist coreference resolution rules we consider strong indicators of cohesion such as repetitions name aliases or appositions and gender number and class agreementswordnet the vast semantic knowledge base provides suplementary knowledge in the form of semantic consistency between coreferring nounsadditional semantic consistency knowledge is generated by a bootstrapping mechanism when our coreference resolution system cocktail processes new textsthis bootstrapping mechanism inspired by the technique presented in targets one of the most problematic forms of knowledge needed for coreference resolution the semantic consistency of corefering nominalsthe rest of the paper is organized as followssection 2 discusses our text mining methodology for analysing the data and devising knowledgeminimalist rules for resolving the most popular coreference casessection 3 presents the knowledgemining components of cocktail that use wordnet for deriving semantic consistency as well as gender informationsection 4 presents an entropybased method for optimally combining coreference rules and section 5 presents the bootstrapping mechanismsection 6 reports and discusses the experimental results while section 7 summarizes the conclusions of a nominal or a disjunct of two or three of them as illustrated in table 2the gender attributes may have the values gender attributes are assigned by the two following heuristics heuristic 1 if a collocation fom a wordnet synset contains the word male the expression g for the whole sysnet is m if the collocation contains the words female or woman g f heuristic 2 consider the first four words from the synset glossif any of the gloss words have been assigned gender information propagate the same information to the defined synset as welleach hyponym of the concept person individual human categorized as person has expression g initialized to f v m since all lexemes represent persons that can be either males or femaleswhenever one of the two heuristics previously defined can be applied at any node s from this subhierarchy three operations take place t operation 1 we update g with the new expression brough forward by the heuristic t operation 2 we propagate all the expression to the hyponyms of s t operation 3 we revisit the whole person subhierarchy in search for concepts d that are defined with glosses that use any of the words from synset s or any word from any of its hyponymswhenever we find such a word we update its g expression to gwe also note that many words are polysemous thus a word w may have multiple senses under the person subhierarchy and moreover each sense might have a different g expressionin this case all words from the synsets containing w receive the disjunct of the gender attributes corresponding to each sense of w mining semantic information from wordnet we used the wordnet knowledge base to mine patterns of wordnet paths that connect pairs of coreferring nouns from the annotated chainsthe paths are combinations of any of the following wordnet 6a polysemous noun has multiple semantic senses and therefore has multiple entries in the wordnet dictionaryto determine the confidence of the path we consider three factors factor fi has only two valuesit is set to 1 when another coreference chain contains elements in the same nps as the anaphor and the anetcedentfor example if npi is quotthe professor 
sonquot and np2 is quothis fatherquot the semantic consistency between father and professor is more likely given that his and son coreferotherwise fi is set to 0factor f2 favors relations that are considered quotstrongerquot and shorter pathsfor this purpose we assign the following weights to each relation considered w 10 w 09 w 09 w 03 w 07 w 06 and w 05when computing the f2 factor we assume that whenever at least two relations of the same kind repeat we should consider the sequence of relations equivalent to a single relation having the weight devided by the length of the sequenceif we denote by riri the number of different relation types encountered in a path and rirsame denotes the number of links of type rel in a sequence then we define f2 with the formula factor h is a semantic measure operating on a conceptual spacewhen searching for a lexicosemantic path a search space ss is created which contains all wordnet content words that can be reached from the candidate antecedent or the anaphor in at most five combinations of the seven relations used by the third filterwe denote by n the total number of nouns and verbs in the search spacec represents the number of nouns and verbs that can be reached by both nominalsin addition rirtotal is the number of concepts along all paths established whereas note that the performance of the mutual bootstrapping algorithm can deteriorate rapidly if erroneous rules are enteredto make the algorithm more robust we use the same solution by introducing a second level of bootrappingthe outer level called metabootstrapping identifies the most reliable k rules based on semantic consistency and discard all the others before restarting the mutual bootstrapping loop againin our experiments we have retained only those rules for which the new performance given by the fmeasure was larger than the median of the past four loopsthe formula for the van rijsbergen fmeasure combines the precision p with the recall r in f to measure the performance of cocktail we have trained the system on 30 muc6 and muc7 texts and tested it on the remaining 30 documentswe computed the precision the recall and the fmeasurethe performance measures have been obtained automatically using the muc6 coreference scoring program table 4 lists the resultstable 4 shows that the seed set of rules had good precision but poor recallby combining the rules with the entropybased measure we obtained further enhancement in precision but the recall droppedthe application of the bootstrapping methodology determined an enhancement of recall and thus of the fmeasurein the future we intend to compare the overall effect of rules that recognize referential expressions on the overall performance of the systemwe have introduced a new datadriven method for coreference resolution implemented in the cocktail systemunlike other knowledgepoor methods for coreference resolution cocktail filters its most performant rules through massive training data generated by its autotagcoftef componentfurthermore by using an entropybased method we determine the best partition of corefering expressions in coreference chainsnew rules are learned by applying a bootstrapping methodology that uncovers additional semantic consistency data
N01-1008
text and knowledge mining for coreference resolution. traditionally, coreference is resolved by satisfying a combination of salience, syntactic, semantic, and discourse constraints. the acquisition of such knowledge is time-consuming, difficult, and error-prone. therefore, we present a knowledge-minimalist methodology for mining coreference rules from annotated text corpora. semantic consistency evidence, a form of knowledge required by coreference, is easily retrieved from wordnet. additional consistency knowledge is discovered by a meta-bootstrapping algorithm applied to unlabeled texts. we use paths through wordnet built not only from synonym and is-a relations but also from part relations, morphological derivations, gloss texts, and polysemy, weighted with a measure based on the relation types and the number of path elements. these wordnet path patterns are used to compute the semantic consistency between nps.
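A small sketch of the rule-retention test used in this system's meta-bootstrapping loop: a newly induced rule is kept only if the F-measure it yields exceeds the median of the past four loops, as stated in the paper's evaluation of bootstrapping above. The exact weighting in the paper's F-measure formula is not recoverable from the garbled text, so a balanced (beta = 1) van Rijsbergen F is assumed here; the function names and toy numbers are illustrative.

```python
from statistics import median

def f_measure(precision, recall, beta=1.0):
    """van Rijsbergen F-measure; beta = 1 weighs precision and recall equally."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def keep_rule(new_precision, new_recall, past_f_scores):
    """Meta-bootstrapping filter: retain a newly induced rule only if the
    F-measure it yields beats the median of the past four loops."""
    return f_measure(new_precision, new_recall) > median(past_f_scores[-4:])

# toy usage with made-up scores
history = [0.70, 0.71, 0.72, 0.73]          # F-measure of the last four loops
print(keep_rule(0.91, 0.62, history))       # True: F1 is about 0.74 > median 0.715
```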
a decision tree of bigrams is an accurate predictor of word sense this paper presents a corpusbased approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby this approach is evaluated using the sensetagged corpora from the 1998 senseval word sense disambiguation exercise it is more accurate than the average results reported for 30 of 36 words and is more accurate than the best results for 19 of 36 words word sense disambiguation is the process of selecting the most appropriate meaning for a word based on the context in which it occursfor our purposes it is assumed that the set of possible meanings ie the sense inventory has already been determinedfor example suppose bill has the following set of possible meanings a piece of currency pending legislation or a bird jawwhen used in the context of the senate bill is under consideration a human reader immediately understands that bill is being used in the legislative sensehowever a computer program attempting to perform the same task faces a difficult problem since it does not have the benefit of innate commonsense or linguistic knowledgerather than attempting to provide computer programs with realworld knowledge comparable to that of humans natural language processing has turned to corpusbased methodsthese approaches use techniques from statistics and machine learning to induce models of language usage from large samples of textthese models are trained to perform particular tasks usually via supervised learningthis paper describes an approach where a decision tree is learned from some number of sentences where each instance of an ambiguous word has been manually annotated with a sensetag that denotes the most appropriate sense for that contextprior to learning the sensetagged corpus must be converted into a more regular form suitable for automatic processingeach sensetagged occurrence of an ambiguous word is converted into a feature vector where each feature represents some property of the surrounding text that is considered to be relevant to the disambiguation processgiven the flexibility and complexity of human language there is potentially an infinite set of features that could be utilizedhowever in corpusbased approaches features usually consist of information that can be readily identified in the text without relying on extensive external knowledge sourcesthese typically include the partofspeech of surrounding words the presence of certain key words within some window of context and various syntactic properties of the sentence and the ambiguous wordthe approach in this paper relies upon a feature set made up of bigrams two word sequences that occur in a textthe context in which an ambiguous word occurs is represented by some number of binary features that indicate whether or not a particular bigram has occurred within approximately 50 words to the left or right of the word being disambiguatedwe take this approach since surface lexical features like bigrams collocations and cooccurrences often contribute a great deal to disambiguation accuracyit is not clear how much disambiguation accuracy is improved through the use of features that are identified by more complex preprocessing such as partofspeech tagging parsing or anaphora resolutionone of our objectives is to establish a clear upper bounds on the accuracy of disambiguation using feature sets that do not impose substantial pre processing requirementsthis paper continues with a discussion of our methods for 
identifying the bigrams that should be included in the feature set for learningthen the decision tree learning algorithm is described as are some benchmark learning algorithms that are included for purposes of comparisonthe experimental data is discussed and then the empirical results are presentedwe close with an analysis of our findings and a discussion of related work2 building a feature set of bigrams we have developed an approach to word sense disambiguation that represents text entirely in terms of the occurrence of bigrams which we define to be two cat cat totals big n11 10 n12 20 n1 30 big n21 40 n22 930 n2 970 totals n150 n2950 n1000 consecutive words that occur in a textthe distributional characteristics of bigrams are fairly consistent across corpora a majority of them only occur one timegiven the sparse and skewed nature of this data the statistical methods used to select interesting bigrams must be carefully chosenwe explore two alternatives the power divergence family of goodness of fit statistics and the dice coefficient an information theoretic measure related to pointwise mutual informationfigure 1 summarizes the notation for word and bigram counts used in this paper by way of a 2 x 2 contingency tablethe value of n11 shows how many times the bigram big cat occurs in the corpusthe value of n12 shows how often bigrams occur where big is the first word and cat is not the secondthe counts in n1 and n1 indicate how often words big and cat occur as the first and second words of any bigram in the corpusthe total number of bigrams in the corpus is represented by n introduce the power divergence family of goodness of fit statisticsa number of well known statistics belong to this family including the likelihood ratio statistic g2 and pearson x2 statisticthese measure the divergence of the observed and expected bigram counts where mij is estimated based on the assumption that the component words in the bigram occur together strictly by chance data distributionshowever suggest that there are cases where pearson statistic is more reliable than the likelihood ratio and that one test should not always be preferred over the otherin light of this presents fisher exact test as an alternative since it does not rely on the distributional assumptions that underly both pearson test and the likelihood ratiounfortunately it is usually not clear which test is most appropriate for a particular sample of datawe take the following approach based on the observation that all tests should assign approximately the same measure of statistical significance when the bigram counts in the contingency table do not violate any of the distributional assumptions that underly the goodness of fit statisticswe perform tests using x2 g2 and fisher exact test for each bigramif the resulting measures of statistical significance differ then the distribution of the bigram counts is causing at least one of the tests to become unreliablewhen this occurs we rely upon the value from fisher exact test since it makes fewer assumptions about the underlying distribution of datafor the experiments in this paper we identified the top 100 ranked bigrams that occur more than 5 times in the training corpus associated with a wordthere were no cases where rankings produced by g2 x2 and fisher exact test disagreed which is not altogether surprising given that low frequency bigrams were excludedsince all of these statistics produced the same rankings hereafter we make no distinction among them and simply refer to them generically as the 
power divergence statisticthe dice coefficient is a descriptive statistic that provides a measure of association among two words in a corpusit is similar to pointwise mutual information a widely used measure that was first introduced for identifying lexical relationships in pointwise mutual information can be defined as follows argues in favor of g2 over x2 especially when dealing with very sparse and skewed where w1 and w2 represent the two words that make up the bigrampointwise mutual information quantifies how often two words occur together in a bigram relative to how often they occur overall in the corpus however there is a curious limitation to pointwise mutual informationa bigram w1w2 that occurs n11 times in the corpus and whose component words w1 and w2 only occur as a part of that bigram will result in increasingly strong measures of association as the value of n11 decreasesthus the maximum pointwise mutual information in a given corpus will be assigned to bigrams that occur one time and whose component words never occur outside that bigramthese are usually not the bigrams that prove most useful for disambiguation yet they will dominate a ranked list as determined by pointwise mutual informationthe dice coefficient overcomes this limitation and can be defined as follows when n11 n1 n1 the value of dice will be 1 for all values n11when the value of n11 is less than either of the marginal totals the rankings produced by the dice coefficient are similar to those of mutual informationthe relationship between pointwise mutual information and the dice coefficient is also discussed in we have developed the bigram statistics package to produce ranked lists of bigrams using a range of teststhis software is written in perl and is freely available from wwwdumnedutpedersedecision trees are among the most widely used machine learning algorithmsthey perform a general to specific search of a feature space adding the most informative features to a tree structure as the search proceedsthe objective is to select a minimal set of features that efficiently partitions the feature space into classes of observations and assemble them into a treein our case the observations are manually sensetagged examples of an ambiguous word in context and the partitions correspond to the different possible senseseach feature selected during the search process is represented by a node in the learned decision treeeach node represents a choice point between a number of different possible values for a featurelearning continues until all the training examples are accounted for by the decision treein general such a tree will be overly specific to the training data and not generalize well to new examplestherefore learning is followed by a pruning step where some nodes are eliminated or reorganized to produce a tree that can generalize to new circumstancestest instances are disambiguated by finding a path through the learned decision tree from the root to a leaf node that corresponds with the observed featuresan instance of an ambiguous word is disambiguated by passing it through a series of tests where each test asks if a particular bigram occurs in the available window of contextwe also include three benchmark learning algorithms in this study the majority classifier the decision stump and the naive bayesian classifierthe majority classifier assigns the most common sense in the training data to every instance in the test dataa decision stump is a one node decision tree that is created by stopping the decision tree learner 
after the single most informative feature is added to the treethe naive bayesian classifier is based on certain blanket assumptions about the interactions among features in a corpusthere is no search of the feature space performed to build a representative model as is the case with decision treesinstead all features are included in the classifier and assumed to be relevant to the task at handthere is a further assumption that each feature is conditionally independent of all other features given the sense of the ambiguous wordit is most often used with a bag of words feature set where every word in the training sample is represented by a binary feature that indicates whether or not it occurs in the window of context surrounding the ambiguous wordwe use the weka implementations of the c45 decision tree learner the decision stump and the naive bayesian classifierweka is written in java and is freely available from wwwcswaikatoacnzmlour empirical study utilizes the training and test data from the 1998 senseval evaluation of word sense disambiguation systemsten teams participated in the supervised learning portion of this eventadditional details about the exercise including the data and results referred to in this paper can be found at the senseval web site and in we included all 36 tasks from senseval for which training and test data were providedeach task requires that the occurrences of a particular word in the test data be disambiguated based on a model learned from the sensetagged instances in the training datasome words were used in multiple tasks as different parts of speechfor example there were two tasks associated with bet one for its use as a noun and the other as a verbthus there are 36 tasks involving the disambiguation of 29 different wordsthe words and part of speech associated with each task are shown in table 1 in column 1note that the parts of speech are encoded as n for noun a for adjective v for verb and p for words where the part of speech was not providedthe number of test and training instances for each task are shown in columns 2 and 4each instance consists of the sentence in which the ambiguous word occurs as well 2 n11 as one or two surrounding sentencesin general the total context available for each ambiguous word is less than 100 surrounding wordsthe number of distinct senses in the test data for each task is shown in column 3the following process is repeated for each taskcapitalization and punctuation are removed from the training and test datatwo feature sets are selected from the training data based on the top 100 ranked bigrams according to the power divergence statistic and the dice coefficientthe bigram must have occurred 5 or more times to be included as a featurethis step filters out a large number of possible bigrams and allows the decision tree learner to focus on a small number of candidate bigrams that are likely to be helpful in the disambiguation processthe training and test data are converted to feature vectors where each feature represents the occurrence of one of the bigrams that belong in the feature setthis representation of the training data is the actual input to the learning algorithmsdecision tree and decision stump learning is performed twice once using the feature set determined by the power divergence statistic and again using the feature set identified by the dice coefficientthe majority classifier simply determines the most frequent sense in the training data and assigns that to all instances in the test datathe naive bayesian classifier 
is based on a feature set where every word that occurs 5 or more times in the training data is included as a featureall of these learned models are used to disambiguate the test datathe test data is kept separate until this stagewe employ a fine grained scoring method where a word is counted as correctly disambiguated only when the assigned sense tag exactly matches the true sense tagno partial credit is assigned for near missesthe accuracy attained by each of the learning algorithms is shown in table 1column 5 reports the accuracy of the majority classifier columns 6 and 7 show the best and average accuracy reported by the 10 participating senseval teamsthe evaluation at senseval was based on precision and recall so we converted those scores to accuracy by taking their producthowever the best precision and recall may have come from different teams so the best accuracy shown in column 6 may actually be higher than that of any single participating senseval systemthe average accuracy in column 7 is the product of the average precision and recall reported for the participating senseval teamscolumn 8 shows the accuracy of the decision tree using the j48 learning algorithm and the features identified by a power divergence statisticcolumn 10 shows the accuracy of the decision tree when the dice coefficient selects the featurescolumns 9 and 11 show the accuracy of the decision stump based on the power divergence statistic and the dice coefficient respectivelyfinally column 13 shows the accuracy of the naive bayesian classifier based on a bag of words feature setthe most accurate method is the decision tree based on a feature set determined by the power divergence statisticthe last line of table 1 shows the wintieloss score of the decision treepower divergence method relative to every other methoda win shows it was more accurate than the method in the column a loss means it was less accurate and a tie means it was equally accuratethe decision treepower divergence method was more accurate than the best reported senseval results for 19 of the 36 tasks and more accurate for 30 of the 36 tasks when compared to the average reported accuracythe decision stumps also fared well proving to be more accurate than the best senseval results for 14 of the 36 tasksin general the feature sets selected by the power divergence statistic result in more accurate decision trees than those selected by the dice coefficientthe power divergence tests prove to be more reliable since they account for all possible events surrounding two words w1 and w2 when they occur as bigram w1w2 when w1 or w2 occurs in a bigram without the other and when a bigram consists of neitherthe dice coefficient is based strictly on the event where w1 and w2 occur together in a bigramthere are 6 tasks where the decision tree power divergence approach is less accurate than the senseval average promisen scrapn shirtn amazev bitterp and sanctionpthe most dramatic difference occurred with amazev where the senseval average was 924 and the decision tree accuracy was 586however this was an unusual task where every instance in the test data belonged to a single sense that was a minority sense in the training datathe characteristics of the decision trees and decision stumps learned for each word are shown in table 2column 1 shows the word and part of speechcolumns 2 3 and 4 are based on the feature set selected by the power divergence statistic while columns 5 6 and 7 are based on the dice coefficientcolumns 2 and 5 show the node selected to serve as the 
decision stumpcolumns 3 and 6 show the number of leaf nodes in the learned decision tree relative to the number of total nodescolumns 4 and 7 show the number of bigram features selected to represent the training datathis table shows that there is little difference in the decision stump nodes selected from feature sets determined by the power divergence statistics versus the dice coefficientthis is to be expected since the top ranked bigrams for each measure are consistent and the decision stump node is generally chosen from among thosehowever there are differences between the feature sets selected by the power divergence statistics and the dice coefficientthese are reflected in the different sized trees that are learned based on these feature setsthe number of leaf nodes and the total number of nodes for each learned tree is shown in columns 3 and 6the number of internal nodes is simply the difference between the total nodes and the leaf nodeseach leaf node represents the end of a path through the decision tree that makes a sense distinctionsince a bigram feature can only appear once in the decision tree the number of internal nodes represents the number of bigram features selected by the decision tree learnerone of our original hypotheses was that accurate decision trees of bigrams will include a relatively small number of featuresthis was motivated by the success of decision stumps in performing disambiguation based on a single bigram featurein these experiments there were no decision trees that used all of the bigram features identified by the filtering step and for many words the decision tree learner went on to eliminate most of the candidate featuresthis can be seen by comparing the number of internal nodes with the number of candidate features as shown in columns 4 or 71 it is also noteworthy that the bigrams ultimately selected by the decision tree learner for inclusion in the tree do not always include those bigrams ranked most highly by the power divergence statistic or the dice coefficientthis is to be expected since the selection of the bigrams from raw text is only meafor most words the 100 top ranked bigrams form the set of candidate features presented to the decision tree learnerif there are ties in the top 100 rankings then there may be more than 100 features and if the there were fewer than 100 bigrams that occurred more than 5 times then all such bigrams are included in the feature set suring the association between two words while the decision tree seeks bigrams that partition instances of the ambiguous word into into distinct sensesin particular the decision tree learner makes decisions as to what bigram to include as nodes in the tree using the gain ratio a measure based on the overall mutual information between the bigram and a particular word sensefinally note that the smallest decision trees are functionally equivalent to our benchmark methodsa decision tree with 1 leaf node and no internal nodes acts as a majority classifiera decision tree with 2 leaf nodes and 1 internal node has the structure of a decision stumpone of our longterm objectives is to identify a core set of features that will be useful for disambiguating a wide class of words using both supervised and unsupervised methodologieswe have presented an ensemble approach to word sense disambiguation where multiple naive bayesian classifiers each based on co occurrence features from varying sized windows of context is shown to perform well on the widely studied nouns interest and linewhile the accuracy of 
this approach was as good as any previously published results the learned models were complex and difficult to interpret in effect acting as very accurate black boxesour experience has been that variations in learning algorithms are far less significant contributors to disambiguation accuracy than are variations in the feature setin other words an informative feature set will result in accurate disambiguation when used with a wide range of learning algorithms but there is no learning algorithm that can perform well given an uninformative or misleading set of featurestherefore our focus is on developing and discovering feature sets that make distinctions among word sensesour learning algorithms must not only produce accurate models but they should also she would new light on the relationships among features and allow us to continue refining and understanding our feature setswe believe that decision trees meet these criteriaa wide range of implementations are available and they are known to be robust and accurate across a range of domainsmost important their structure is easy to interpret and may provide insights into the relationships that exist among features and more general rules of disambiguationbigrams have been used as features for word sense disambiguation particularly in the form of collocations where the ambiguous word is one component of the bigram while some of the bigrams we identify are collocations that include the word being disambiguated there is no requirement that this be the casedecision trees have been used in supervised learning approaches to word sense disambiguation and have fared well in a number of comparative studies in the former they were used with the bag of word feature sets and in the latter they were used with a mixed feature set that included the partofspeech of neighboring words three collocations and the morphology of the ambiguous wordwe believe that the approach in this paper is the first time that decision trees based strictly on bigram features have been employedthe decision list is a closely related approach that has also been applied to word sense disambiguation rather than building and traversing a tree to perform disambiguation a list is employedin the general case a decision list may suffer from less fragmentation during learning than decision trees as a practical matter this means that the decision list is less likely to be overtrainedhowever we believe that fragmentation also reflects on the feature set used for learningours consists of at most approximately 100 binary featuresthis results in a relatively small feature space that is not as likely to suffer from fragmentation as are larger spacesthere are a number of immediate extensions to this workthe first is to ease the requirement that bigrams be made up of two consecutive wordsrather we will search for bigrams where the component words may be separated by other words in the textthe second is to eliminate the filtering step by which candidate bigrams are selected by a power divergence statisticinstead the decision tree learner would consider all possible bigramsdespite increasing the danger of fragmentation this is an interesting issue since the bigrams judged most informative by the decision tree learner are not always ranked highly in the filtering stepin particular we will determine if the filtering process ever eliminates bigrams that could be significant sources of disambiguation informationin the longer term we hope to adapt this approach to unsupervised learning where disambiguation 
is performed without the benefit of sense tagged textwe are optimistic that this is viable since bigram features are easy to identify in raw textthis paper shows that the combination of a simple feature set made up of bigrams and a standard decision tree learning algorithm results in accurate word sense disambiguationthe results of this approach are compared with those from the 1998 senseval word sense disambiguation exercise and show that the bigram based decision tree approach is more accurate than the best senseval results for 19 of 36 wordsthe bigram statistics package has been implemented by satanjeev banerjee who is supported by a grantinaid of research artistry and scholarship from the office of the vice president for research and the dean of the graduate school of the university of minnesotawe would like to thank the senseval organizers for making the data and results from the 1998 event freely availablethe comments of three anonymous reviewers were very helpful in preparing the final version of this papera preliminary version of this paper appears in
N01-1011
a decision tree of bigrams is an accurate predictor of word sensethis paper presents a corpusbased approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearbythis approach is evaluated using the sensetagged corpora from the 1998 senseval word sense disambiguation exerciseit is more accurate than the average results reported for 30 of 36 words and is more accurate than the best results for 19 of 36 wordswe compare decision trees decision stumps and a naive bayesian classifier to show that bigrams are very useful in identifying the intended sense of a word
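the bigram feature pipeline described above lends itself to a short sketch; the code below is illustrative rather than the authors' implementation, it ranks bigrams that occur at least 5 times by the dice coefficient, keeps the top 100 as binary features, and trains a decision tree over them; scikit-learn's DecisionTreeClassifier stands in for weka's j48 (so the split criterion is gini rather than gain ratio), and max_depth=1 gives a decision stump; all function and variable names are assumptions for illustration

from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def top_bigrams(instances, n=100, min_count=5):
    # instances: list of token lists; rank consecutive-word bigrams by dice
    uni, bi = Counter(), Counter()
    for words in instances:
        uni.update(words)
        bi.update(zip(words, words[1:]))
    def dice(pair):
        w1, w2 = pair
        return 2.0 * bi[pair] / (uni[w1] + uni[w2])
    candidates = [p for p, c in bi.items() if c >= min_count]
    return sorted(candidates, key=dice, reverse=True)[:n]

def vectorize(words, features):
    # binary feature vector: does each selected bigram occur in this instance
    present = set(zip(words, words[1:]))
    return [1 if f in present else 0 for f in features]

def train_wsd_tree(train_instances, train_senses, stump=False):
    feats = top_bigrams(train_instances)
    X = [vectorize(w, feats) for w in train_instances]
    # stand-in for weka's j48 / decision stump, not the paper's exact learner
    tree = DecisionTreeClassifier(max_depth=1 if stump else None)
    tree.fit(X, train_senses)
    return tree, feats

a trained tree can then label each test instance with tree.predict([vectorize(words, feats)]), mirroring the fine grained scoring setup above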
edit detection and parsing for transcribed speech we present a simple architecture for parsing transcribed speech in which an editedword detector first removes such words from the sentence string and then a standard statistical parser trained on transcribed speech parses the remaining words the edit detector achieves a misclassification rate on edited words of 22 the null model which marks everything as not edited has an error rate of 59 to evaluate our parsing results we introduce a new evaluation metric the purpose of which is to make evaluation of a parse tree relatively indifferent to the exact tree position of edited nodes by this metric the parser achieves 853 precision and 865 recall while significant effort has been expended on the parsing of written text parsing speech has received relatively little attentionthe comparative neglect of speech is understandable since parsing transcribed speech presents several problems absent in regular text ums and ahs frequent use of parentheticals ungrammatical constructions and speech repairs in this paper we present and evaluate a simple twopass architecture for handling the problems of parsing transcribed speechthe first pass tries to identify which of the words in the string are edited these words are removed from the string given to the second pass an already existing statistical parser trained on a transcribed speech corpus this research was supported in part by nsf grant lis sbr 9720368 and by nsf itr grant 20100203 this architecture is based upon a fundamental assumption that the semantic and pragmatic content of an utterance is based solely on the unedited words in the word sequencethis assumption is not completely truefor example core and schubert 8 point to counterexamples such as have the engine take the oranges to elmira um i mean take them to corning where the antecedent of them is found in the edited wordshowever we believe that the assumption is so close to true that the number of errors introduced by this assumption is small compared to the total number of errors made by the systemin order to evaluate the parsers output we compare it with the goldstandard parse treesfor this purpose a very simple third pass is added to the architecture the hypothesized edited words are inserted into the parser output to the degree that our fundamental assumption holds a real application would ignore this last stepthis architecture has several things to recommend itfirst it allows us to treat the editing problem as a preprocess keeping the parser unchangedsecond the major clues in detecting edited words in transcribed speech seem to be relatively shallow phenomena such as repeated word and partofspeech sequencesthe kind of information that a parser would add eg the node dominating the edited node seems much less criticalnote that of the major problems associated with transcribed speech we choose to deal with only one of them speech repairs in a special fashionour reasoning here is based upon what one might and might not expect from a secondpass statistical parserfor example ungrammaticality in some sense is relative so if the training corpus contains the same kind of ungrammatical examples as the testing corpus one would not expect ungrammaticality itself to be a show stopperfurthermore the best statistical parsers 35 do not use grammatical rules but rather define probability distributions over all possible rulessimilarly parentheticals and filled pauses exist in the newspaper text these parsers currently handle albeit at a much lower ratethus there is no particular reason to
expect these constructions to have a major impact1 this leaves speech repairs as the one major phenomenon not present in written text that might pose a major problem for our parserit is for that reason that we have chosen to handle it separatelythe organization of this paper follows the architecture just describedsection 2 describes the first passwe present therein a boosting model for learning to detect edited nodes and an evaluation of the model as a standalone edit detector section 3 describes the parsersince the parser is that already reported in 3 this section simply describes the parsing metrics used the details of the experimental setup and the results the switchboard corpus annotates disfluencies such as restarts and repairs using the terminology of shriberg 15the disfluencies include repetitions and substitutions italicized in and respectivelyrestarts and repairs are indicated by disfluency tags and in the disfluency postagged switchboard corpus and by edited nodes in the treetagged corpusthis section describes a procedure for automatically identifying words corrected by a restart or repair ie words that 1indeed 17 suggests that filled pauses tend to indicate clause boundaries and thus may be a help in parsing are dominated by an edited node in the treetagged corpusthis method treats the problem of identifying edited nodes as a wordtoken classification problem where each word token is classified as either edited or notthe classifier applies to words only punctuation inherits the classification of the preceding worda linear classifier trained by a greedy boosting algorithm 16 is used to predict whether a word token is editedour boosting classifier is directly based on the greedy boosting algorithm described by collins 7this paper contains important implementation details that are not repeated herewe chose collins algorithm because it offers good performance and scales to hundreds of thousands of possible feature combinationsthis section describes the kinds of linear classifiers that the boosting algorithm infersabstractly we regard each word token as an event characterized by a finite tuple of random variables y is the the conditioned variable and ranges over 1 1 with y 1 indicating that the word is not editedx1 xm are the conditioning variables each xj ranges over a finite set xjfor example x1 is the orthographic form of the word and x1 is the set of all words observed in the training section of the corpusour classifiers use m 18 conditioning variablesthe following subsection describes the conditioning variables in more detail they include variables indicating the pos tag of the preceding word the tag of the following word whether or not the word token appears in a rough copy as explained below etcthe goal of the classifier is to predict the value of y given values for x1 xmthe classifier makes its predictions based on the occurence of combinations of conditioning variablevalue pairs called featuresa feature f is a set of variablevalue pairs with xj e xjour classifier is defined in terms of a finite number n of features f1 fn where n 106 in our classifiers2 each feature fi defines an associated random boolean variable where takes the value 1 if x x and 0 otherwisethat is fi 1 iff xj xj for all e fiour classifier estimates a feature weight αi for each feature fi that is used to define the prediction variable z the prediction made by the classifier is sign zz ie 1 or 1 depending on the sign of zintuitively our goal is to adjust the vector of feature weights α to minimize the 
expected misclassiication rate e y this function is difficult to minimize so our boosting classifier minimizes the expected boost loss eexpas singer and schapire 16 point out the misclassification rate is bounded above by the boost loss so a low value for the boost loss implies a low misclassification rate b our classifier estimates the boost loss as etexp where et is the expectation on the empirical training corpus distributionthe feature weights are adjusted iteratively one weight is changed per iterationthe feature whose weight is to be changed is selected greedily to minimize the boost loss using the algorithm described in 7training continues for 25000 iterationsafter each iteration the misclassification rate on the development corpus bed y is estimated where bed is the expectation on empirical development corpus distributionwhile each iteration lowers the boost loss on the training corpus a graph of the misclassification rate on the development corpus versus iteration number is a noisy youshaped curve rising at later iterations due to overlearningthe value of α returned word token in our training datawe developed a method for quickly identifying such extensionally equivalent feature pairs based on hashing xored random bitmaps and deleted all but one of each set of extensionally equivalent features by the estimator is the one that minimizes the misclassficiation rate on the development corpus typically the minimum is obtained after about 12000 iterations and the feature weight vector α contains around 8000 nonzero feature weights 3 this subsection describes the conditioning variables used in the edited classifiermany of the variables are defined in terms of what we call a rough copyintuitively a rough copy identifies repeated sequences of words that might be restarts or repairspunctuation is ignored for the purposes of defining a rough copy although conditioning variables indicate whether the rough copy includes punctuationa rough copy in a tagged string of words is a substring of the form α1qyα2 where the set of freeinal words includes all partial words and a small set of conjunctions adverbs and miscellanea such as and or actually so etcthe set of interregnum strings consists of a small set of expressions such as uh you know i guess i mean etcwe search for rough copies in each sentence starting from left to right searching for longer copies firstafter we find a rough copy we restart searching for additional rough copies following the free final string of the previous copywe say that a word token is in a rough copy iff it appears in either the source or the free final4 is an example of a rough copy ish the work table 1 lists the conditioning variables used in our classifierin that table subscript integers refer to the relative position of word tokens relative to the current word egt1 is the pos tag of the following wordthe subscript f refers to the tag of the first word of the free final matchif a variable is not defined for a particular word it is given the special value null eg if a word is not in a rough copy then variables such as nm nu ni nl nr and tf all take the value nullflags are booleanvalued variables while numericvalued variables are bounded to a value between 0 and 4 the three variables ct cw and ti are intended to help the classifier capture very short restarts or repairs that may not involve a rough copythe flags ct and ci indicate whether the orthographic form andor tag of the next word are the same as those of the current wordti has a nonnull value only if the current 
word is followed by an interregnum string in that case ti is the pos tag of the word following that interregnumas described above the classifiers features are sets of variablevalue pairsgiven a tuple of variables we generate a feature for each tuple of values that the variable tuple assumes in the training datain order to keep the feature set managable the tuples of variables we consider are restricted in various waysthe most important of these are constraints of the form if xj is included among features variables then so is xkfor example we require that if a feature contains pi1 then it also contains pi for i 0 and we impose a similiar constraint on pos tagsfor the purposes of this research the switchboard corpus as distributed by the linguistic data consortium was divided into four sections and the word immediately following the interregnum also appears in a rough copy then we say that the interregnum word token appears in a rough copythis permits us to approximate the switchboard annotation convention of annotating interregna as edited if they appear in iterated editsthe training subcorpus consists of all files in the directories 2 and 3 of the parsedmerged switchboard corpusdirectory 4 is split into three approximately equalsize sectionsthe first of these is the testing corpusall edit detection and parsing results reported herein are from this subcorpusthe files sw4154mrg to sw4483mrg are reserved for future usethe files sw4519mrg to sw4936mrg are the development corpusin the complete corpus three parse trees were sufficiently ill formed in that our treereader failed to read themthese trees received trivial modifications to allow them to be read eg adding the missing extra set of parentheses around the complete treewe trained our classifier on the parsed data files in the training and development sections and evaluated the classifer on the test sectionsection 3 evaluates the parsers output in conjunction with this classifier this section focuses on the classifiers performance at the individual word token levelin our complete application the classifier uses a bitag tagger to assign each word a pos taglike all such taggers our tagger has a nonnegligible error rate and these tagging could conceivably affect the performance of the classifierto determine if this is the case we report classifier performance when trained both on gold tags and on machine tags we compare these results to a baseline null classifier which never identifies a word as editedour basic measure of performance is the word misclassification rate however we also report precision and recall scores for edited words aloneall words are assigned one of the two possible labels edited or nothowever in our evaluation we report the accuracy of only words other than punctuation and filled pausesour logic here is much the same as that in the statistical parsing community which ignores the location of punctuation for purposes of evaluation 35 6 on the grounds that its placement is entirely conventionalthe same can be said for filled pauses in the switchboard corpusour results are given in table 2they show that our classifier makes only approximately 13 of the misclassification errors made by the null classifier and that using the pos tags produced by the bitag tagger does not have much effect on the classifiers performance we now turn to the second pass of our twopass architecture using an offtheshelf statistical parser to parse the transcribed speech after having removed the words identified as edited by the first passwe first 
define the evaluation metric we use and then describe the results of our experimentsin this section we describe the metric we use to grade the parser outputas a first desideratum we want a metric that is a logical extension of that used to grade previous statistical parsing workwe have taken as our starting point what we call the relaxed labeled precisionrecall metric from previous research this metric is characterized as followsfor a particular test corpus let n be the total number of nonterminal constituents in the gold standard parseslet m be the number of such constituents returned by the parser and let c be the number of these that are correct then precision cm and recall cna constituent c is correct if there exists a constituent d in the gold standard such that in 2 and 3 above we introduce an equivalence relation r between string positionswe define r to be the smallest equivalence relation satisfying a r b for all pairs of string positions a and b separated solely by punctuation symbolsthe parsing literature uses r rather than because it is felt that two constituents should be considered equal if they disagree only in the placement of say a comma where one constituent includes the punctuation and the other excludes itour new metric relaxed edited labeled precisionrecall is identical to relaxed labeled precisionrecall except for two modificationsfirst in the gold standard all nonterminal subconstituents of an edited node are removed and the terminal constituents are made immediate children of a single edited nodefurthermore two or more edited nodes with no separating nonedited material between them are merged into a single edited nodewe call this version a simplified gold standard parse all precision recall measurements are taken with respected to the simplified gold standardsecond we replace r with a new equivalence relation e which we define as the smallest equivalence relation containing r and satisfying begin e end for each edited node c in the gold standard parse6 we give a concrete example in figure 1the first row indicates string position the second row gives the words of the sentencewords that are edited out have an e above themthe third row indicates the equivalence relation by labeling each string position with the smallest such position with which it is equivalentthere are two basic ideas behind this definitionfirst we do not care where the edited nodes appear in the tree structure produced by the parsersecond we are not interested in the fine structure of edited sections of the string just the fact that they are editedthat we do care which words are edited comes into our figure of merit in two waysfirst edited nodes remain even though their substructure does not and thus they are counted in the precision and recall numberssecondly failure to decide on the correct positions of edited nodes can cause collateral damage to neighboring constituents by causing them to start or stop in the wrong placethis is particularly relevant because according to our definition while the positions at the beginning and ending of an edit node are equivalent the interior positions are not than the simplified gold standardwe rejected this because the e relation would then itself be dependent on the parsers output a state of affairs that might allow complicated schemes to improve the parsers performance as measured by the metricsee figure 1the parser described in 3 was trained on the switchboard training corpus as specified in section 21the input to the training algorithm was the gold standard 
parses minus all edited nodes and their childrenwe tested on the switchboard testing subcorpus all parsing results reported herein are from all sentences of length less than or equal to 100 words and punctuationwhen parsing the test corpus we carried out the following operations we ran the parser in three experimental situations each using a different edit detector in step 2in the first of the experiments the edit detector was simply the simplified gold standard itselfthis was to see how well the parser would do it if had perfect information about the edit locationsin the second experiment the edit detector was the one described in section 2 trained and tested on the partofspeech tags as specified in the gold standard treesnote that the parser was not given the gold standard partofspeech tagswe were interested in contrasting the results of this experiment with that of the third experiment to gauge what improvement one could expect from using a more sophisticated tagger as input to the edit detectorin the third experiment we used the edit detector based upon the machine generated tagsthe results of the experiments are given in table 3the last line in the figure indicates the performance of this parser when trained and tested on wall street journal text 3it is the machine tags results that we consider the true capability of the detectorparser combination 853 precision and 865 recallthe general trends of table 3 are much as one might expectparsing the switchboard data is much easier given the correct positions of the edited nodes than without this informationthe difference between the goldtags and the machinetags parses is small as would be expected from the relatively small difference in the performance of the edit detector reported in section 2this suggests that putting significant effort into a tagger for use by the edit detector is unlikely to produce much improvementalso as one might expect parsing conversational speech is harder than wall street journal text even given the goldstandard edited nodesprobably the only aspect of the above numbers likely to raise any comment in the parsing community is the degree to which precision numbers are lower than recallwith the exception of the single pair reported in 3 and repeated above no precision values in the recent statisticalparsing literature 234514 have ever been lower than recall valueseven this one exception is by only 01 and not statistically significantwe attribute the dominance of recall over precision primarily to the influence of editdetector mistakesfirst note that when given the gold standard edits the difference is quite small when using the edit detector edits the difference increases to 12our best guess is that because the edit detector has high precision and lower recall many more words are left in the sentence to be parsedthus one finds more nonterminal constituents in the machine parses than in the gold parses and the precision is lower than the recallwhile there is a significant body of work on finding edit positions 1910131718 it is difficult to make meaningful comparisons between the various research efforts as they differ in the corpora used for training and testing the information available to the edit detector and the evaluation metrics usedfor example 13 uses a subsection of the atis corpus takes as input the actual speech signal and uses as its evaluation metric the percentage of time the program identifies the start of the interregnum on the other hand 910 use an internally developed corpus of sentences work from a 
transcript enhanced with information from the speech signal but do use a metric that seems to be similar to oursundoubtedly the work closest to ours is that of stolcke et al 18 which also uses the transcribed switchboard corpusthey categorize the transitions between words into more categories than we doat first glance there might be a mapping between their six categories and our two with three of theirs corresponding to edited words and three to not editedif one accepts this mapping they achieve an error rate of 26 down from their null rate of 45 as contrasted with our error rate of 22 down from our null rate of 59the difference in null rates however raises some doubts that the numbers are truly measuring the same thingthere is also a small body of work on parsing disfluent sentences 811hindles early work 11 does not give a formal evaluation of the parsers accuracythe recent work of schubert and core 8 does give such an evaluation but on a different corpus also their parser is not statistical and returns parses on only 62 of the strings and 32 of the strings that constitute sentencesour statistical parser naturally parses all of our corpusthus it does not seem possible to make a meaningful comparison between the two systemswe have presented a simple architecture for parsing transcribed speech in which an edited word detector is first used to remove such words from the sentence string and then a statistical parser trained on edited speech is used to parse the textthe edit detector reduces the misclassification rate on edited words from the nullmodel rate of 59 to 22to evaluate our parsing results we have introduced a new evaluation metric relaxed edited labeled precisionrecallthe purpose of this metric is to make evaluation of a parse tree relatively indifferent to the exact tree position of edited nodes in much the same way that the previous metric relaxed labeled precisionrecall make it indifferent to the attachment of punctuationby this metric the parser achieved 853 precision and 865 recallthere is of course great room for improvement both in standalone edit detectors and their combination with parsersalso of interest are models that compute the joint probabilities of the edit detection and parsing decisions that is do both in a single integrated statistical process
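the linear classifier decision rule described above can be made concrete with a minimal sketch; this is not the authors' code and omits the greedy boosting training loop, it only shows how a feature (a set of variable/value pairs) fires, how the prediction is the sign of z, and the boost loss that training drives down; all names are illustrative

import math

def feature_fires(feature, word_event):
    # feature: dict of {variable_name: value}; word_event: dict for one token
    return all(word_event.get(var) == val for var, val in feature.items())

def predict(word_event, features, alphas):
    # z = sum_i alpha_i * F_i(x); +1 means "not edited", -1 means "edited"
    z = sum(a for f, a in zip(features, alphas) if feature_fires(f, word_event))
    return 1 if z > 0 else -1

def boost_loss(examples, features, alphas):
    # average of exp(-y*z), which upper-bounds the misclassification rate
    total = 0.0
    for word_event, y in examples:
        z = sum(a for f, a in zip(features, alphas) if feature_fires(f, word_event))
        total += math.exp(-y * z)
    return total / max(len(examples), 1)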
N01-1016
edit detection and parsing for transcribed speechwe present a simple architecture for parsing transcribed speech in which an editedword detector first removes such words from the sentence string and then a standard statistical parser trained on transcribed speech parses the remaining wordsthe edit detector achieves a misclassification rate on edited words of 22to evaluate our parsing results we introduce a new evaluation metric the purpose of which is to make evaluation of a parse tree relatively indifferent to the exact tree position of edited nodesby this metric the parser achieves 853 precision and 865 recallour work in statistically parsing conversational speech has examined the performance of a parser that removes edit regions in an earlier step
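the word-level evaluation described in the paper above can be sketched as follows; this is a hedged illustration, not the authors' scoring script: it computes the misclassification rate over all words except punctuation and filled pauses, plus precision and recall on the edited class; the exact skip list here is an assumption for illustration

SKIP = {",", ".", "?", "uh", "um"}

def score_edits(tokens, gold, predicted):
    # tokens: words; gold/predicted: parallel lists of booleans (True = edited)
    kept = [(g, p) for t, g, p in zip(tokens, gold, predicted)
            if t.lower() not in SKIP]
    errors = sum(1 for g, p in kept if g != p)
    tp = sum(1 for g, p in kept if g and p)
    fp = sum(1 for g, p in kept if p and not g)
    fn = sum(1 for g, p in kept if g and not p)
    misclassification = errors / max(len(kept), 1)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return misclassification, precision, recall

a null classifier (predicted all False) scored with this function reproduces the kind of baseline error rate the paper reports for marking everything as not edited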
multipath translation lexicon induction via bridge languages

multipath translation induction
as an illustration table 7 shows consensus formation on englishnorwegian and englishportuguese translation mappings via multiple bridge languages note that the englishfrench dictionary used here has no entry for quotbaitquot preventing its use as a bridge language for this word

table 7 endtoend multipath translation induction
english | bridge language | bridge word | target word | score | rank
bay | danish german dutch | bugt bucht baai | bukt bukt baug bukt 1 1 1 1 25 2 15 25
bay | distancebased method | | bukt | 1 | 1
bay | rankbased method | | bukt | 27 | 1
bait | italian | esca | isca | 5 | 1
bait | italian | esca | nada | 3 | 54
bait | spanish | carnada | corneta | 2 | 1
bait | spanish | carnada | nada | 3 | 12
bait | spanish | carnada | isca | 35 | 153
bait | romanian | nada | nada | 05 | 1
bait | romanian | nada | isca | 35 | 153
bait | french | na | na | na | na
bait | distancebased method | | isca | 05 | 1
bait | distancebased method | | nada | 05 | 2
bait | rankbased method | | nada | 67 | 1
bait | rankbased method | | isca | 307 | 20

6 path differences
this section investigates the effect of different pathway configurations on the performance of the final multipath system by examining the following situations
english to portuguese using the other romance languages as bridges
english to norwegian using the germanic languages as bridges
english to ukrainian using the slavic languages as bridges
portuguese to english using the germanic languages and french as bridges
the results of these experiments are shown in table 8 there are three observations which can be made from the multipath results
1 adding more pathways usually results in an accuracy improvement when there is a drop in accuracy on the cognate vocabulary by adding an additional bridge language there tends to be an improvement in accuracy on the full vocabulary due to significantly more cognate pathways
2 it is difficult to substantially improve upon the performance of the single closest bridge language especially when they are as close as enespt improvements on performance relative to the single best ranged from 2 to 20
3 several mediocre pathways can be combined to improve performance though it is always better to find one highperforming pathway it is often possible to get good performance from the combination of several less wellperforming pathways

table 8 translation accuracy via different bridge language paths
path | accuracy on full vocab | accuracy on cognate vocab | cvg
enespt | 587 | 867 | 655
enitpt | 440 | 854 | 319
enfrpt | 306 | 743 | 248
enfr itpt | 412 | 794 | 422
enfr it espt | 602 | 842 | 703
endano | 719 | 924 | 754
enduno | 361 | 767 | 398
endeno | 361 | 747 | 389
endu deno | 423 | 722 | 543
enda du deno | 770 | 875 | 874
enruuk | 488 | 890 | 447
enpouk | 381 | 878 | 319
ensruk | 319 | 867 | 308
ensr pouk | 450 | 820 | 503
enru sr pouk | 584 | 746 | 710
ptduen | 291 | 690 | 384
ptfren | 281 | 840 | 242
ptdeen | 253 | 684 | 321
ptde fren | 365 | 725 | 485
ptde fr duen | 470 | 697 | 666

table 9 accuracy of english to tl via one bridge language
english x romance accuracy on cognate vocab (tl by bridge language pt it es fr ro 0)
pt | 856 867 743 721 794
it | 837 851 755 821 780
es | 858 840 781 821 793
fr | 739 755 767 752 787
ro | 728 844 828 761 783
av | 782 820 822 757 777 784
english x romance accuracy on full vocab (tl by bridge language pt it es fr ro 0)
pt | 426 587 298 284 231
it | 420 456 338 348 213
es | 575 443 318 297 225
fr | 307 352 327 333 249
ro | 285 357 305 350 239
av | 392 390 412 320 310 226
english x slavic accuracy on cognate vocab (tl by bridge language cz ru pl sr uk 0)
cz | 703 814 810 814 750
ru | 727 841 803 873 739
pl | 812 857 845 882 782
sr | 857 829 858 855 767
uk | 836 891 879 860 739
av | 802 815 842 827 852 75
english x slavic accuracy on full vocab (tl by bridge language cz ru pl sr uk 0)
cz | 205 255 273 254 120
ru | 233 299 273 471 134
pl | 276 303 278 368 150
sr | 310 296 294 331 185
uk | 270 487 380 314 157
av | 27 317 302 28 352 146

references
e brill and r moore 2000 an improved error model for noisy channel spelling correction proceedings of acl pages 286-293
p f brown s a della pietra v j della pietra and r mercer 1993 the mathematics of statistical machine translation computational linguistics 19(2) 263-311
buck 1949 a dictionary of selected synonyms in the principal indoeuropean languages chicago university of chicago press
h h chen s j huang y w ding and s c tsai 1998 proper name translation in crosslanguage information retrieval proceedings of acl-coling pages 232-236
chen 1993 aligning sentences in bilingual corpora using lexical information proceedings of acl pages 9-16
m covington 1998 aligning multiple languages for historical comparison proceedings of coling pages 275-280
j hajie j hric and v kubori 2000 cesilko machine translation between closely related languages proceedings of anlp pages 7-12
jelinek 1997 statistical methods for speech recognition mit press
z kirshner 1982 a dependency based analysis of english for the purpose of machine translation explizite beschreibung der sprache und automatische textbearbeitung
knight and j graehl 1998 machine transliteration computational linguistics
e ristad and p yianilos 1998 learning string edit distance ieee transactions on pattern analysis and machine intelligence
g satta and j henderson 1997 string transformation learning proceedings of acl-eacl pages 444-451
m simard g f foster and p isabelle 1992 using cognates to align sentences in bilingual corpora

within an editdistance of 3 from the remaining wordpairs as training datatrain on those pairsfor this set of experiments portuguese was chosen as the target language and spanish french italian and romanian the source languages the spanishportuguese dictionary contained 1000 word pairs while the others contained 900 pairs10fold crossvalidation experiments were performed in each casethe number of training pairs for the adaptive methods which remained after filtering out unlikely cognate pairs ranged from 621 to 232 for the purpose of evaluation we constrained the candidate test set to have exactly one translation per source wordhowever this property was not used to improve candidate alignment table 1 shows results for different candidate distance functions for spanishportuguese and frenchportuguese translation inductionthe metrics depicted in the first three lines namely levenshtein distance the hmm fenonic model and the stochastic transducer were previously described in section 2the other three methods are variants of levenshtein distance where the costs for edit operations have been modifiedin lv the substitution operations between vowels are changed from 1 to 05two adaptively trained variants ls and la are shown in the last two lines of table 1the weights in these two systems were produced by filtering the probabilities obtained from the stochastic transducer into three weight classes 05 075 and 1identity substitutions were assigned a cost of zerofor ls the cost matrix was separately trained for each language pair and for la it was trained collectively over all the romance languagestable 2 shows some of the highest probability consonanttoconsonant edit operations computed by the stochastic transducer most of these topranking derived transformations have been observed to be relatively low distance by either linguistic analysis of historical sound changes or by phonological classification notably nasal sonorants and voiced stops other pairs are derivationally reasonable and while some may be noise and not shown are voweltovowel substitutions which in general were the most highly ranked also not shown are tight correspondences between accented and unaccented vowel variants which were also learned by the stochastic transduceras can be observed from table 1 pure levenshtein distance works surprisingly welldynamic adaptation via the stochastic transducers also gives a notable boost on frenchportuguese but offers little improvement for spanishportuguese similarly a slight improvement is observed for romanianportuguese under s but no improvement for italianportuguesealso empirical evidence suggests that the best method is achieved through learning weights with stochastic transducers and then using these weights in the ls
framework for each word o e 0 for each bridge language b translate o b e b vt e t calculate d rank t by d score t using information from all bridges select highest scored t produce mapping o t two scoring methods were investigated for the above algorithm one based on rank and the other on distancethe rankbased scoring method takes each proposed target and combines the rank of that proposal across all classifiers and chooses the translation with the lowest resulting rank since including all the hypothesized translations regardless of ranking performed poorly we only include the ones with a ranking lower than some threshold n the distancebased scoring method selects the hypothesized target word with the smallest distance from a translation in any of the bridge languageswe also tested one alternative distrank which uses ranks to break ties in the distancebased method with similar performancein table 6 we present the results obtained by applying different combination algorithms for the pathway from english to portuguese using one of the other romance languages as bridges and compare with the single best path these results are presented for unrestricted matching on the full dictionary lexicon 2this is a more difficult task than that used for direct induction so the system performance is lower than the section 3 resultssince all available dictionaries are incomplete it is difficult to decide which set of english words to compare againsttable 6 presents results for different choices of word coverage the subset of existing pairs for englishspanish the union over all languages and the intersection of all languagestrends across subsets are relatively consistentas an illustration table 7 shows consensus formation on englishnorweigian and englishportuguese translation mappings via multiple bridge languagesnote that the englishfrench dictionary used here has no entry for quotbaitquot preventing its use as a bridge language for this wordas can be seen in table 6 the distancebased combination methods are more successful at combining the different proposals than the rankn combinationsone possible explanation for this is that rankbased classifiers pick the candidate with the best allaround distance while distancebased combinations choose the single best candidatechoosing the best allaround performer is detrimental when cognates exist for some languages but not for othersthe performance of an oracle if allowed to choose the correct translation if it appears within the topn in any language would provide an upper bound for the performance of the combination methodsresults for such oracles are also reported in table 6the methods corresponding to quotoracle1quot and quotdistancequot are choosing from the same set of proposed targets and the quotdistancequot method achieves performance close to that of the oracle this section investigates the effect of different pathway configurations on the performance of the final multipath system by examining the following situations the results of these experiments are shown in table 83 3key enenglish ptportuguese frfrench ititalian esspanish roromanian dudutch nonorwegian degerman dadanish czczech ukukrainian popolish srserbian rurussian the data sets used in these experiments were approximately the same size as those used in the previous experiment 11001300 translation word pairsdictionaries for russian and ukrainian were converted into romanized pronunciation dictionariesthere are three observations which can be made from the multipath resultsin table 8 quotcvgquot or 
cognate coverage is the percentage words in the source language for which any of the bridge languages contains a cognate to the target translationitalian and french bridges for example offer additional translation pathways to portuguese which augment the spanish pathwaysusing all languages together improves coverage although this often does not improve performance over using the best single bridge languageas a final note table 9 shows the crosslanguage translation rates for some of the investigated languageswhen translating from english to one of the romance languages using spanish as the bridge language achieves the highest accuracy and using russian as the bridge language achieves the best performance when translating from english to the slavic languageshowever note that using english alone without a bridge language when translating to the romance languages still achieves reasonable performance due to the substantial french and latinate presence in english vocabularyprobabilistic string edit distance learning techniques have been studied by ristad and yianilos for use in pronunciation modeling for speech recognitionsatta and henderson propose a transformation learning method for generic string transductionbrill and moore propose an alternative string distance metric and learning algorithmwhile early statistical machine translation models such as brown et al did not use any cognate based information to seed their wordtoword translation probabilities subsequent models incorporated some simple deterministic heuristics to increase the translation model probabilities for cognatesother methods have been demonstrated for building bilingual dictionaries using simple heuristic rules includes kirschner for englishczech dictionaries and chen for chineseenglish proper namestiedemann improves on these alignment seedings by learning allornothing rules for detecting swedishenglish cognateshajie et al has studied the exploitation of language similarity for use in machine translation in the case of the very closely related languages covington uses an algorithm based on heuristic orthographic changes to find cognate words for purposes of historical comparisonperhaps the most comprehensive study of word alignment via string transduction methods was pioneered by knight and graehl while restricted to single language transliteration it very effectively used intermediary phonological models to bridge direct lexical borrowing across distant languagesthe experiments reported in this paper extend prior research in a number of directionsthe novel probabilistic paradigm for inducing translation lexicons for words from unaligned word lists is introducedthe set of languages on which we demonstrate these methods is broader than previously examinedfinally the use of multiple bridge languages and of the high degree of intrafamily language similarity for dictionary induction is newthere are a number of open questionsthe first is whether there exists a better string transformation algorithm to use in the induction stepone possible area of investigation is to use larger dictionaries and assess how much better stochastic transducers and distance metrics derived from them perform with more training dataanother option is to investigate the use of multivowel or multiconsonant compounds which better reflect the underlying phonetic units using an more sophisticated edit distance measurein this paper we explore ways of using cognate pairs to create translation lexiconsit is an interesting research question as to whether we can 
augment these methods with translation probabilities estimated from statistical frequency information gleaned from loosely aligned or unaligned bilingual corpora for noncognate pairsvarious machine learning techniques including cotraining and mutual bootstrapping could employ these additional measures in creating better estimatesthe techniques presented here are useful for language pairs where an online translation lexicon does not already exist including the large majority of the world lowerdensity languagesfor language pairs with existing translation lexicons these methods can help improve coverage especially for technical vocabulary and other more recent borrowings which are often cognate but frequently missing from existing dictionariesin both cases the great potential of this work is the ability to leverage a single bilingual dictionary into translation lexicons for its entire language family without any additional resources beyond raw wordlists for the other languages in the familythe authors would like to thank the following people for their insightful comments and feedback on drafts of this work radu florian jan hajie ellen riloff charles schafer and richard wicentowskithanks also to the johns hopkins nlp lab in general for the productive and stimulating environment
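the modified levenshtein distances discussed above (identity substitutions free, vowel-to-vowel substitutions discounted as in the lv variant, optionally a learned cost matrix as in ls/la) can be sketched in a few lines; this is an illustrative sketch rather than the authors' implementation, and any cost not named in the text (such as the unit insertion/deletion cost) is an assumption

VOWELS = set("aeiou")

def sub_cost(a, b, learned=None):
    if a == b:
        return 0.0                      # identity substitutions cost zero
    if learned is not None and (a, b) in learned:
        return learned[(a, b)]          # ls/la style learned weight
    if a in VOWELS and b in VOWELS:
        return 0.5                      # lv variant: vowel-vowel discount
    return 1.0

def weighted_levenshtein(s, t, learned=None, indel=1.0):
    # standard dynamic program over two strings with the costs above
    prev = [j * indel for j in range(len(t) + 1)]
    for i, a in enumerate(s, 1):
        cur = [i * indel]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + indel,                         # deletion
                           cur[j - 1] + indel,                      # insertion
                           prev[j - 1] + sub_cost(a, b, learned)))  # substitution
        prev = cur
    return prev[-1]

for example weighted_levenshtein("carnada", "isca") returns the kind of small cognate distance the induction step relies on, while unrelated word pairs score much higher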
N01-1020
multipath translation lexicon induction via bridge languagesthis paper presents a method for inducing translation lexicons based on transduction models of cognate pairs via bridge languagesbilingual lexicons within languages families are induced using probabilistic string edit distance modelstranslation lexicons for arbitrary distant language pairs are then generated by a combination of these intrafamily translation models and one or more crossfamily online dictionariesup to 95 exact match accuracy is achieved on the target vocabulary thus substantial portions of translation lexicons can be generated accurately for languages where no bilingual dictionary or parallel corpora may existwe present a method for inducing translation lexicons based on transduction modules of cognate pairs via bridge languageswe present a method for inducing translation lexicons based on transduction models of cognate pairs via bridge languages
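the distance-based multi-path combination described in the paper above can be illustrated with a short sketch; it is not the authors' code: each bridge dictionary maps a source word to a bridge-language form, every candidate target word is scored by its distance to that form, and the target with the smallest distance across any bridge wins; bridge_dicts, targets and the distance function are placeholders, not resources from the paper

def combine_bridges(source_word, bridge_dicts, targets, distance):
    # bridge_dicts: {bridge_name: {source_word: bridge_form}}
    # targets: candidate target-language words; distance: e.g. a cognate metric
    best_target, best_dist = None, float("inf")
    for bridge_name, bridge_dict in bridge_dicts.items():
        bridge_form = bridge_dict.get(source_word)
        if bridge_form is None:          # e.g. no french entry for "bait"
            continue
        for t in targets:
            d = distance(bridge_form, t)
            if d < best_dist:
                best_target, best_dist = t, d
    return best_target, best_dist

with a cognate metric such as the weighted levenshtein sketch given earlier, combine_bridges picks the single closest proposal over all bridges, which is the behaviour that the text reports performing close to the oracle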
a probabilistic earley parser as a psycholinguistic model in human sentence processing cognitive load can be defined many ways this report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at point in a sentence the surprisal of word its prefix on a phrasestructural language model these loads can be efficiently calculated using a probabilistic earley parser which is interpreted as generating predictions about reading time on a wordbyword basis under grammatical assumptions supported by corpusfrequency data the operation of stolckes probabilistic earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subjectobject relative asymmetry what is the relation between a persons knowledge of grammar and that same persons application of that knowledge in perceiving syntactic structurethe answer to be proposed here observes three principlesprinciple 1 the relation between the parser and grammar is one of strong competencestrong competence holds that the human sentence processing mechanism directly uses rules of grammar in its operation and that a bare minimum of extragrammatical machinery is necessarythis hypothesis originally proposed by chomsky has been pursued by many researchers and stands in contrast with an approach directed towards the discovery of autonomous principles unique to the processing mechanismprinciple 2 frequency affects performancethe explanatory success of neural network and constraintbased lexicalist theories suggests a statistical theory of language performancethe present work adopts a numerical view of competition in grammar that is grounded in probabilityprinciple 3 sentence processing is eagereager in this sense means the experimental situations to be modeled are ones like selfpaced reading in which sentence comprehenders are unrushed and no information is ignored at a point at which it could be usedthe proposal is that a persons difficulty perceiving syntactic structure be modeled by wordtoword surprisal which can be directly computed from a probabilistic phrasestructure grammarthe approach taken here uses a parsing algorithm developed by stolckein the course of explaining the algorithm at a very high level i will indicate how the algorithm interpreted as a psycholinguistic model observes each principleafter that will come some simulation results and then a conclusionstolckes parsing algorithm was initially applied as a component of an automatic speech recognition systemin speech recognition one is often interested in the probability that some word will follow given that a sequence of words has been seengiven some lexicon of all possible words a language model assigns a probability to every string of words from the lexiconthis defines a probabilistic language a language model helps a speech recognizer focus its attention on words that are likely continuations of what it has recognized so farthis is typically done using conditional probabilities of the form the probability that the nth word will actually be wn given that the words leading up to the nth have been w1 w2 wn1given some finite lexicon the probability of each possible outcome for wn can be estimated using that outcomes relative frequency in a sampletraditional language models used for speech are ngram models in which n 1 words of history serve as the basis for predicting the nth wordsuch models do not have any notion of hierarchical syntactic structure except as might be visible 
through an nword windowaware that the ngram obscures many linguisticallysignificant distinctions many speech researchers sought to incorporate hierarchical phrase structure into language modeling although it was not until the late 1990s that such models were able to significantly improve on 3grams stolckes probabilistic earley parser is one way to use hierarchical phrase structure in a language modelthe grammar it parses is a probabilistic contextfree phrase structure grammar egsuch a grammar defines a probabilistic language in terms of a stochastic process that rewrites strings of grammar symbols according to the probabilities on the rulesthen each sentence in the language of the grammar has a probability equal to the product of the probabilities of all the rules used to generate itthis multiplication embodies the assumption that rule choices are independentsentences with more than one derivation accumulate the probability of all derivations that generate themthrough recursion infinite languages can be specified an important mathematical question in this context is whether or not such a grammar is consistent whether it assigns some probability to infinite derivations or whether all derivations are guaranteed to terminateeven if a pcfg is consistent it would appear to have another drawback it only assigns probabilities to complete sentences of its languagethis is as inconvenient for speech recognition as it is for modeling reading timesstolckes algorithm solves this problem by computing at each word of an input string the prefix probabilitythis is the sum of the probabilities of all derivations whose yield is compatible with the string seen so farif the grammar is consistent then subtracting the prefix probability from 10 gives the total probability of all the analyses the parser has disconfirmedif the human parser is eager then the work done during sentence processing is exactly this disconfirmationthe computation of prefix probabilities takes advantage of the design of the earley parser which by itself is not probabilisticin this section i provide a brief overview of stolckes algorithm but the original paper should be consulted for full details earley parsers work topdown and propagate predictions confirmed by the input string back up through a set of states representing hypotheses the parser is entertaining about the structure of the sentencethe global state of the parser at any one time is completely defined by this collection of states a chart which defines a tree seta state is a record that specifies an earley parser has three main functions predict scan and complete each of which can enter new states into the chartstarting from a dummy start state in which the dot is just to the left of the grammars start symbol predict adds new states for rules which could expand the start symbolin these new predicted states the dot is at the far lefthand side of each ruleafter prediction scan checks the input string if the symbol immediately following the dot matches the current word in the input then the dot is moved rightward across the symbolthe parser has scanned this wordfinally complete propagates this change throughout the chartif as a result of scanning any states are now present in which the dot is at the end of a rule then the left hand side of that rule has been recognized and any other states having a dot immediately in front of the newlyrecognized left hand side symbol can now have their dots moved as wellthis happens over and over until no new states are generatedparsing finishes 
when the dot in the dummy start state is moved across the grammars start symbolstolckes innovation as regards prefix probabilities is to add two additional pieces of information to each state α the forward or prefix probability and y the inside probabilityhe notes that path an earley path or simply path is a sequence of earley states linked by prediction scanning or completion constrained a path is said to be constrained by or generate a string x if the terminals immediately to the left of the dot in all scanned states in sequence form the string x the significance of earley paths is that they are in a onetoone correspondence with leftmost derivationsthis will allow us to talk about probabilities of derivations strings and prefixes in terms of the actions performed by earleys parser this correspondence between paths of parser operations and derivations enables the computation of the prefix probability the sum of all derivations compatible with the prefix seen so farby the correspondence between derivations and earley paths one would need only to compute the sum of all paths that are constrained by the observed prefixbut this can be done in the course of parsing by storing the current prefix probability in each statethen when a new state is added by some parser operation the contribution from each antecedent state each previous state linked by some parser operation is summed in the new stateknowing the prefix probability at each state and then summing for all parser operations that result in the same new state efficiently counts all possible derivationspredicting a rule corresponds to multiplying by that rules probabilityscanning does not alter any probabilitiescompletion though requires knowing y the inside probability which records how probable was the inner structure of some recognized phrasal nodewhen a state is completed a bottomup confirmation is united with a topdown prediction so the α value of the completeee is multiplied by the y value of the completeerimportant technical problems involving leftrecursive and unit productions are examined and overcome in however these complications do not add any further machinery to the parsing algorithm per se beyond the grammar rules and the dotmoving conventions in particular there are no heuristic parsing principles or intermediate structures that are later destroyedin this respect the algorithm observes strong competence principle 1in virtue of being a probabilistic parser it observes principle 2finally in the sense that predict and complete each apply exhaustively at each new input word the algorithm is eager satisfying principle 3psycholinguistic theories vary regarding the amount bandwidth they attribute to the human sentence processing mechanismtheories of initial parsing preferences suggest that the human parser is fundamentally serial a function from a tree and new word to a new treethese theories explain processing difficulty by appealing to garden pathing in which the current analysis is faced with words that cannot be reconciled with the structures built so fara middle ground is held by boundedparallelism theories in these theories the human parser is modeled as a function from some subset of consistent trees and the new word to a new tree subsetgarden paths arise in these theories when analyses fall out of the set of trees maintained from word to word and have to be reanalyzed as on strictly serial theoriesfinally there is the possibility of total parallelism in which the entire set of trees compatible with the input is maintained 
somehow from word to wordon such a theory gardenpathing cannot be explained by reanalysisthe probabilistic earley parser computes all parses of its input so as a psycholinguistic theory it is a total parallelism theorythe explanation for gardenpathing will turn on the reduction in the probability of the new tree set compared with the previous tree set reanalysis plays no rolebefore illustrating this kind of explanation with a specific example it will be important to first clarify the nature of the linking hypothesis between the operation of the probabilistic earley parser and the measured effects of the human parserthe measure of cognitive effort mentioned earlier is defined over prefixes for some observed prefix the cognitive effort expended to parse that prefix is proportional to the total probability of all the structural analyses which cannot be compatible with the observed prefixthis is consistent with eagerness since if the parser were to fail to infer the incompatibility of some incompatible analysis it would be delaying a computation and hence not be eagerthis prefixbased linking hypothesis can be turned into one that generates predictions about wordbyword reading times by comparing the total effort expended before some word to the total effort after in particular take the comparison to be a ratiomaking the further assumption that the probabilities on pcfg rules are statements about how difficult it is to disconfirm each rule then the ratio of this assumption is inevitable given principles 1 and 2if there were separate processing costs distinct from the optimization costs postulated in the grammar then strong competence is violateddefining all grammatical structures as equally easy to disconfirm or perceive likewise voids the gradedness of grammaticality of any content the α value for the previous word to the α value for the current word measures the combined difficulty of disconfirming all disconfirmable structures at a given word the definition of cognitive loadscaling this number by taking its log gives the surprisal and defines a wordbased measure of cognitive effort in terms of the prefixbased oneof course if the language model is sensitive to hierarchical structure then the measure of cognitive effort so defined will be structuresensitive as well could account for garden path structural ambiguitygrammar generates the celebrated garden path sentence the horse raced past the barn fell english speakers hearing these words one by one are inclined to take the horse as the subject of raced expecting the sentence to end at the word barn this is the main verb reading in figure 1the debate over the form grammar takes in the mind is clearly a fundamental one for cognitive sciencemuch recent psycholinguistic work has generated a wealth of evidence that frequency of exposure to linguistic elements can affect our processing however there is no clear consensus as to the size of the elements over which exposure has clearest effectgibson and pearlmutter identify it as an outstanding question whether or not phrase structure statistics are necessary to explain performance effects in sentence comprehension are phraselevel contingent frequency constraints necessary to explain comprehension performance or are the remaining types of constraints sufficientif phraselevel contingent frequency constraints are necessary can they subsume the effects of other constraints equally formal work in linguistics has demonstrated the inadequacy of contextfree grammars as an appropriate model for natural language 
in the general case to address this criticism the same prefix probabilities could be computing using treeadjoining grammars with contextfree grammars serving as the implicit backdrop for much work in human sentence processing as well as linguistics2 simplicity seems as good a guide as any in the selection of a grammar formalismprobabilistic contextfree grammar will help illustrate the way a phrasestructured language model the human sentence processing mechanism is metaphorically led up the garden path by the main verb reading when upon hearing fell it is forced to accept the alternative reduced relative reading shown in figure 2the confusion between the main verb and the reduced relative readings which is resolved upon hearing fell is the empirical phenomenon at issueas the parse trees indicate grammar analyzes reduced relative clauses as a vp adjoined to an np3in one sample of parsed text4 such adjunctions are about 7 times less likely than simple nps made up of a determiner followed by a nounthe probabilities of the other crucial rules are likewise estimated by their relative frequencies in the samplethis simple grammar exhibits the essential character of the explanation garden paths happen at points where the parser can disconfirm alternatives that together comprise a great amount of probabilitynote the category ambiguity present with raced which can show up as both a pasttense verb and a past participle figure 3 shows the reading time predictions5 derived via the linking hypothesis that reading time at word n is proportional to the surprisal log at fell the parser gardenpaths up until that point both the mainverb and reducedrelative structures are consistent with the inputthe prefix probability before fell is scanned is more than 10 times greater than after suggesting that the probability mass of the analyses disconfirmed at that point was indeed greatin fact all of the probability assigned to the mainverb structure is now lost and only parses that involve the lowprobability np rule survive a rule introduced 5 words backif this garden path effect is truly a result of both the main verb and the reduced relative structures being simultaneously available up until the final verb 5whether the quantitative values of the predicted reading times can be mapped onto a particular experiment involves taking some position on the oftobserved imperfect relationship between corpus frequency and psychological norms then the effect should disappear when words intervene that cancel the reduced relative interpretation early onto examine this possibility consider now a different example sentence this time from the language of grammar the probabilities in grammar are estimated from the same sample as beforeit generates a sentence composed of words actually found in the sample the banker told about the buyback resigned this sentence exhibits the same reduced relative clause structure as does the horse raced past the barn fell grammar also generates6 the subject relative the banker who was told about the buyback resigned now a comparison of two conditions is possiblerc only the banker who was told about the buyback resigned the words who was cancel the main verb reading and should make that condition easier to processthis asymmetry is borne out in graphs 4 and 5at resigned the probabilistic earley parser predicts less reading time in the subject relative condition than in the reduced relative conditionthis comparison verifies that the same sorts of phenomena treated in reanalysis and bounded parallelism 
parsing theories fall out as cases of the present total parallelism theoryalthough they used frequency estimates provided by corpus data the previous two grammars were partially handbuiltthey used a subset of the rules found in the sample of parsed texta grammar including all rules observed in the entire sample supports the same sort of reasoningin this grammar instead of just 2 np rules there are 532 along with 120 s rulesmany of these generate analyses compatible with prefixes of the reduced relative clause at various points during parsing so the expectation is that the parser will be disconfirming many more hypotheses at each word than in the simpler examplefigure 6 shows the reading time predictions derived from this much richer grammarbecause the terminal vocabulary of this richer grammar is so much larger a comparatively large amount of information is conveyed by the nouns banker and buyback leading to high surprisal values at those wordshowever the garden path effect is still observable at resigned where the prefix probability ratio is nearly 10 times greater than at either of the nounsamid the lexical effects the probabilistic earley parser is affected by the same structural ambiguity that affects english speakersthe same kind of explanation supports an account of the subjectobject relative asymmetry in the processing of unreduced relative clausessince the earley parser is designed to work with contextfree grammars the following example grammar adopts a gpsgstyle analysis of relative clauses the estimates of the ratios for the two sr rules are obtained by counting the proportion of subject relatives among all relatives in the treebanks parsed brown corpus7grammar generates both subject and object relative clausessr npr vp is the rule that generates subject relatives and sr npr snp generates object relativesone might expect there to be a greater processing load for object relatives as soon as enough lexical material is present to determine that the sentence is in fact an object relativesthe same probabilistic earley parser explains this asymmetry in the same way as it explains the garden path effectits predictions under the same linking hypothesis as in the previous cases are depicted in graphs 7 and 8the mean surprisal for the object relative is about 50 whereas the mean surprisal for the subject relative is about 21these examples suggest that a totalparallelism parsing theory based on probabilistic grammar can characterize some important processing phenomenain the domain of structural ambiguity in particular the explanation is of a different kind than in traditional reanalysis models the order of processing is not theoretically significant but the estimate of its magnitude at each point in a sentence isresults with empiricallyderived grammars suggest an affirmative answer to gibson and pearlmutters quessthe difference in probability between subject and object rules could be due to the work necessary to set up storage for the filler effectively recapitulating the hold hypothesis tion phraselevel contingent frequencies can do the work formerly done by other mechanismspursuit of methodological principles 1 2 and 3 has identified a model capable of describing some of the same phenomena that motivate psycholinguistic interest in other theoretical frameworksmoreover this recommends probabilistic grammars as an attractive possibility for psycholinguistics by providing clear testable predictions and the potential for new mathematical insights
N01-1021
A probabilistic Earley parser as a psycholinguistic model. In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word w_i given its prefix w_0...w_{i-1} on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser, which is interpreted as generating predictions about reading time on a word-by-word basis. Under grammatical assumptions supported by corpus-frequency data, the operation of Stolcke's probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject-object relative asymmetry. Since the introduction of a parser-based calculation for surprisal, statistical techniques have become common as models of reading difficulty and linguistic complexity.
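The grammars used in the simulations above are estimated by relative frequency over a parsed sample, for instance the roughly 7-to-1 preference for simple NPs over NPs with an adjoined reduced relative. A minimal sketch of that estimation step, with invented counts standing in for the treebank sample, is given below.

from collections import defaultdict

def estimate_pcfg(rule_counts):
    """Relative-frequency PCFG estimation: P(LHS -> RHS) = count(LHS -> RHS) / count(LHS)."""
    lhs_totals = defaultdict(float)
    for (lhs, rhs), c in rule_counts.items():
        lhs_totals[lhs] += c
    return {(lhs, rhs): c / lhs_totals[lhs] for (lhs, rhs), c in rule_counts.items()}

# Invented counts, chosen only to mirror the roughly 7:1 ratio mentioned in the text:
counts = {("NP", ("DT", "NN")): 700,
          ("NP", ("NP", "VP")): 100,    # reduced relative as VP adjoined to NP
          ("S",  ("NP", "VP")): 800}
for rule, p in sorted(estimate_pcfg(counts).items()):
    print(rule, round(p, 3))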
applying cotraining methods to statistical parsing we propose a novel cotraining method for statistical parsing the algorithm takes as input a small corpus annotated with parse trees a dictionary of possible lexicalized structures for each word in the training set and a large pool of unlabeled text the algorithm iteratively labels the entire data set with parse trees using empirical results based on parsing the wall street journal corpus we show that training a statistical parser on the combined labeled and unlabeled data strongly outperforms training only on the labeled data the current crop of statistical parsers share a similar training methodologythey train from the penn treebank a collection of 40000 sentences that are labeled with corrected parse trees in this paper we explore methods for statistical parsing that can be used to combine small amounts of labeled data with unlimited amounts of unlabeled datain the experiment reported here we use 9695 sentences of bracketed data such methods are attractive for the following reasons in this paper we introduce a new approach that combines unlabeled data with a small amount of labeled data to train a statistical parserwe use a cotraining method that has been used previously to train classifiers in applications like wordsense disambiguation document classification and namedentity recognition and apply this method to the more complex domain of statistical parsing2 unsupervised techniques in language processing while machine learning techniques that exploit annotated data have been very successful in attacking problems in nlp there are still some aspects which are considered to be open issues in the particular domain of statistical parsing there has been limited success in moving towards unsupervised machine learning techniques a more promising approach is that of combining small amounts of seed labeled data with unlimited amounts of unlabeled data to bootstrap statistical parsersin this paper we use one such machine learning technique cotraining which has been used successfully in several classification tasks like web page classification word sense disambiguation and namedentity recognitionearly work in combining labeled and unlabeled data for nlp tasks was done in the area of unsupervised part of speech tagging reported very high results for unsupervised pos tagging using hidden markov models by exploiting handbuilt tag dictionaries and equivalence classestag dictionaries are predefined assignments of all possible pos tags to words in the test datathis impressive result triggered several followup studies in which the effect of hand tuning the tag dictionary was quantified as a combination of labeled and unlabeled datathe experiments in showed that only in very specific cases hmms were effective in combining labeled and unlabeled datahowever showed that aggressively using tag dictionaries extracted from labeled data could be used to bootstrap an unsupervised pos tagger with high accuracy we exploit this approach of using tag dictionaries in our method as well it is important to point out that before attacking the problem of parsing using similar machine learning techniques we face a representational problem which makes it difficult to define the notion of tag dictionary for a statistical parserthe problem we face in parsing is more complex than assigning a small fixed set of labels to examplesif the parser is to be generally applicable it has to produce a fairly complex label given an input sentencefor example given the sentence pierre vinken 
will join the board as a nonexecutive director the parser is expected to produce an output as shown in figure 1since the entire parse cannot be reasonably considered as a monolithic label the usual method in parsing is to decompose the structure assigned in the following way however such a recursive decomposition of structure does not allow a simple notion of a tag dictionarywe solve this problem by decomposing the structure in an approach that is different from that shown above which uses contextfree rulesthe approach uses the notion of tree rewriting as defined in the lexicalized tree adjoining grammar formalism 1 which retains the notion of lexicalization that is crucial in the success of a statistical parser while permitting a simple definition of tag dictionaryfor example the parse in figure 1 can be generated by assigning the structured labels shown in figure 2 to each word in the sentence we use a tool described in to convert the penn treebank into this representationcombining the trees together by rewriting nodes as trees gives us the parse tree in figure 1a history of the bilexical dependencies that define the probability model used to construct the parse is shown in figure 3this history is called the derivation treein addition as a byproduct of this kind of representation we obtain more than the phrase structure of each sentencewe also produce a more embellished parse in which phenomena such as predicateargument structure subcategorization and movement are given a probabilisa stochastic ltag derivation proceeds as follows an initial tree is selected with probability pinit and other trees selected by words in the sentence are combined using the operations of substitution and adjoiningthese operations are explained below with exampleseach of these operations is performed with probability pattachsubstitution is defined as rewriting a node in the frontier of a tree with probability pattach which is said to be proper if where t q t0 indicates that tree t0 is substituting into node q in tree t an example of the operation of substitution is shown in figure 4adjoining is defined as rewriting any internal node of a tree by another treethis is a recursive rule and each adjoining operation is performed with probability pattach which is proper if pattach here is the probability that t0 rewrites an internal node q in tree t or that no adjoining occurs at node q in t the additional factor that accounts for no adjoining at a node is required for the probability to be wellformedan example of the operation of adjoining is shown in figure 5each ltag derivation d which was built starting from tree a with n subsequent attachments has the probability note that assuming each tree is lexicalized by one word the derivation d corresponds to a sentence of n 1 wordsin the next section we show how to exploit this notion of tag dictionary to the problem of statistical parsingmany supervised methods of learning from a treebank have been studiedthe question we want to pursue in this paper is whether unlabeled data can be used to improve the performance of a statistical parser and at the same time reduce the amount of labeled training data necessary for good performancewe will assume the data that is input to our method will have the following characteristics the pair of probabilistic models can be exploited to bootstrap new information from unlabeled datasince both of these steps ultimately have to agree with each other we can utilize an iterative method called cotraining that attempts to increase agreement 
between a pair of statistical models by exploiting mutual constraints between their outputcotraining has been used before in applications like wordsense disambiguation webpage classification and namedentity identification in all of these cases using unlabeled data has resulted in performance that rivals training solely from labeled datahowever these previous approaches were on tasks that involved identifying the right label from a small set of labels and in a relatively small parameter spacecompared to these earlier models a statistical parser has a very large parameter space and the labels that are expected as output are parse trees which have to be built up recursivelywe discuss previous work in combining labeled and unlabeled data in more detail in section 7effectively by picking confidently labeled data from each model to add to the training data one model is labeling data for the other modelin the representation we use parsing using a lexicalized grammar is done in two steps each of these two steps involves ambiguity which can be resolved using a statistical modelby explicitly representing these two steps independently we can pursue independent statistical models for each step these two models have to agree with each other on the trees assigned to each word in the sentencenot only do the right trees have to be assigned as predicted by the first model but they also have to fit together to cover the entire sentence as predicted by the second model2this represents the mutual constraint that each model places on the otherfor the words that appear in the training data we collect a list of partofspeech labels and trees that each word is known to select in the training datathis information is stored in a pos tag dictionary and a tree dictionaryit is important to note that no frequency or any other distributional information is storedthe only information stored in the dictionary is which tags or trees can be selected by each word in the training datawe use a count cutoff for trees in the labeled data and combine observed counts into an unobserved tree countthis is similar to the usual technique of assigning the token unknown to infrequent word tokensin this way trees unseen in the labeled data but in the tag dictionary are assigned a probability in the parserthe problem of lexical coverage is a severe one for unsupervised approachesthe use of tag dictionaries is a way around this problemsuch an approach has already been used for unsupervised partofspeech tagging in where seed data of which pos tags can be selected by each word is given as input to the unsupervised taggerin future work it would be interesting to extend models for unknownword handling or other machine learning techniques in clustering or the learning of subcategorization frames to the creation of such tag dictionariesas described before we treat parsing as a twostep processthe two models that we use are we select the most likely trees for each word by examining the local contextthe statistical model we use to decide this is the trigram model that was used by b srinivas in his supertagging model the model assigns an nbest lattice of tree assignments associated with the input sentence with each path corresponding to an assignment of an elementary tree for each word in the sentence where t0 tn is a sequence of elementary trees assigned to the sentence w0 wnwe get by using bayes theorem and we obtain from by ignore the denominator and by applying the usual markov assumptionsthe output of this model is a probabilistic ranking of 
trees for the input sentence which is sensitive to a small local context windowonce the words in a sentence have selected a set of elementary trees parsing is the process of attaching these trees together to give us a consistent bracketing of the sentencesnotation let t stand for an elementary tree which is lexicalized by a word w and a part of speech tag p let pinit stand for the probability of being root of a derivation tree defined as follows including lexical information this is written as where the variable top indicates that t is the tree that begins the current derivationthere is a useful approximation for pinit pr ti pr where label is the label of the root node of t where n is the number of bracketing labels and a is a constant used to smooth zero countslet pattach stand for the probability of attachment of t into another t we decompose into the following components we do a similar decomposition for for each of the equations above we use a backoff model which is used to handle sparse data problemswe compute a backoff model as follows let e1 stand for the original lexicalized model and e2 be the backoff level which only uses part of speech information for both pinit and pattach let c countthen the backoff model is computed as follows where a c and d is the diversity of e1 for pattach we further smooth probabilities and we use as an example the other two are handled in the same way where k is the diversity of adjunction that is the number of different trees that can attach at that nodet is the set of all trees t that can possibly attach at node in tree t for our experiments the value of a is set to 1 100000we are now in the position to describe the cotraining algorithm which combines the models described in section 41 and in section 42 in order to iteratively label a large pool of unlabeled datawe use the following datasets in the algorithm labeled a set of sentences bracketed with the correct parse trees cache a small pool of sentences which is the focus of each iteration of the cotraining algorithm unlabeled a large set of unlabeled sentencesthe only information we collect from this set of sentences is a treedictionary treedict and partofspeech dictionary posdictconstruction of these dictionaries is covered in section 32in addition to the above datasets we also use the usual development test set and a test set which is used to evaluate the bracketing accuracy of the parserthe cotraining algorithm consists of the following steps which are repeated iteratively until all the sentences in the set unlabeled are exhaustedfor the experiment reported here n 10 and k was set to be n in each iterationwe ran the algorithm for 12 iterations and then added the best parses for all the remaining sentencesthe experiments we report were done on the penn treebank wsj corpus the various settings for the cotraining algorithm are as follows while it might seem expensive to run the parser over the cache multiple times we use the pruning capabilities of the parser to good use hereduring the iterations we set the beam size to a value which is likely to prune out all derivations for a large portion of the cache except the most likely onesthis allows the parser to run faster hence avoiding the usual problem with running an iterative algorithm over thousands of sentencesin the initial runs we also limit the length of the sentences entered into the cache because shorter sentences are more likely to beat out the longer sentences in any casethe beam size is reset when running the parser on the test data to 
allow the parser a better chance at finding the most likely parsewe scored the output of the parser on section 23 of the wall street journal penn treebankthe following are some aspects of the scoring that might be useful for comparision with other results no punctuations are scored including sentence final punctuationempty elements are not scoredwe used evalb which scores based on parseval with the standard parameter file also we used adwait ratnaparkhis partofspeech tagger to tag unknown words in the test datawe obtained 8002 and 7964 labeled bracketing precision and recall respectively the baseline model which was only trained on the 9695 sentences of labeled data performed at 7223 and 6912 precision and recallthese results show that training a statistical parser using our cotraining method to combine labeled and unlabeled data strongly outperforms training only on the labeled datait is important to note that unlike previous studies our method of moving towards unsupervised parsing are directly compared to the output of supervised parserscertain differences in the applicability of the usual methods of smoothing to our parser because the lower accuracy as compared to other state of the art statistical parsershowever we have consistently seen increase in performance when using the cotraining method over the baseline across several trialsit should be emphasised that this is a result based on less than 20 of data that is usually used by other parserswe are experimenting with the use of an even smaller set of labeled data to investigate the learning curvethe twostep procedure used in our cotraining method for statistical parsing was incipient in the supertagger which is a statistical model for tagging sentences with elementary lexicalized structuresthis was particularly so in the lightweight dependency analyzer which used shortest attachment heuristics after an initial supertagging stage to find syntactic dependencies between words in a sentencehowever there was no statistical model for attachments and the notion of mutual constraints between these two steps was not exploited in this workprevious studies in unsupervised methods for parsing have concentrated on the use of insideoutside algorithm however there are several limitations of the insideoutside algorithm for unsupervised parsing see for some experiments that draw out the mismatch between minimizing error rate and iteratively increasing the likelihood of the corpusother approaches have tried to move away from phrase structural representations into dependency style parsing however there are still inherent computational limitations due to the vast search space for discussionnone of these approaches can even be realistically compared to supervised parsers that are trained and tested on the kind of representations and the complexity of sentences that are found in the penn treebank combine unlabeled and labeled data for parsing with a view towards language modeling applicationsthe goal in their work is not to get the right bracketing or dependencies but to reduce the word error rate in a speech recognizerour approach is closely related to previous cotraining methods first introduced an iterative method for increasing a small set of seed data used to disambiguate dual word senses by exploiting the constraint that in a segment of discourse only one sense of a word is usedthis use of unlabeled data improved performance of the disambiguator above that of purely supervised methods further embellish this approach and gave it the name of 
cotrainingtheir definition of cotraining includes the notion that different models can constrain each other by exploiting different views of the datathey also prove some pac results on learnabilitythey also discuss an application of classifying web pages by using their method of mutually constrained models further extend the use of classifiers that have mutual constraints by adding terms to adaboost which force the classifiers to agree provide a variant of cotraining which is suited to the learning of decision trees where the data is split up into different equivalence classes for each of the models and they use hypothesis testing to determine the agreement between the modelsin future work we would like to experiment whether some of these ideas could be incorporated into our modelin future work we would like to explore use of the entire 1m words of the wsj penn treebank as our labeled data and to use a larger set of unbracketed wsj data as input to the cotraining algorithmin addition we plan to explore the following points that bear on understanding the nature of the cotraining learning algorithm the contribution of the dictionary of trees extracted from the unlabeled set is an issue that we would like to explore in future experimentsideally we wish to design a cotraining method where no such information is used from the unlabeled set the relationship between cotraining and them bears investigation is a study which tries to separate two factors the gradient descent aspect of them vs the iterative nature of cotraining and the generative model used in them vs the conditional independence between the features used by the two models that is exploited in cotrainingalso them has been used successfully in text classification in combination of labeled and unlabeled data in our experiments unlike we do not balance the label priors when picking new labeled examples for addition to the training dataone way to incorporate this into our algorithm would be to incorporate some form of sample selection into the selection of examples that are considered as labeled with high confidence in this paper we proposed a new approach for training a statistical parser that combines labeled with unlabeled datait uses a cotraining method where a pair of models attempt to increase their agreement on labeling the datathe algorithm takes as input a small corpus of 9695 sentences of bracketed data a large pool of unlabeled text and a tag dictionary of lexicalized structures for each word in this training set the algorithm presented iteratively labels the unlabeled data set with parse treeswe then train a statistical parser on the combined set of labeled and unlabeled datawe obtained 8002 and 7964 labeled bracketing precision and recall respectivelythe baseline model which was only trained on the 9695 sentences of labeled data performed at 7223 and 6912 precision and recallthese results show that training a statistical parser using our cotraining method to combine labeled and unlabeled data strongly outperforms training only on the labeled datait is important to note that unlike previous studies our method of moving towards unsupervised parsing can be directly compared to the output of supervised parsersunlike previous approaches to unsupervised parsing our method can be trained and tested on the kind of representations and the complexity of sentences that are found in the penn treebankin addition as a byproduct of our representation we obtain more than the phrase structure of each sentencewe also produce a more 
embellished parse in which phenomena such as predicateargument structure subcategorization and movement are given a probabilistic treatment
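Stripped of parser-specific detail, the iterative procedure described above alternates between the two models over a small cache of unlabeled sentences, with each model's most confidently labeled parses becoming training material for the other. The sketch below is only schematic: the train/label interfaces, the confidence scores, and the selection sizes are assumptions made for illustration, not the authors' implementation.

def cotrain(labeled, unlabeled, model_h1, model_h2, cache_size=10, top_n=10):
    """Schematic co-training loop for two mutually constraining models.

    model_h1 and model_h2 are assumed to expose train(examples) and
    label(sentence) -> (parsed_example, confidence); these interfaces are
    placeholders, not the paper's actual code.
    """
    train1, train2 = list(labeled), list(labeled)
    pool = list(unlabeled)
    while pool:
        cache, pool = pool[:cache_size], pool[cache_size:]
        model_h1.train(train1)
        model_h2.train(train2)
        out1 = sorted((model_h1.label(s) for s in cache), key=lambda r: r[1], reverse=True)
        out2 = sorted((model_h2.label(s) for s in cache), key=lambda r: r[1], reverse=True)
        # Each model's most confident parses are added to the *other* model's
        # training data, so the two views constrain one another.
        train2.extend(example for example, _ in out1[:top_n])
        train1.extend(example for example, _ in out2[:top_n])
    return model_h1, model_h2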
N01-1023
Applying co-training methods to statistical parsing. We propose a novel co-training method for statistical parsing. The algorithm takes as input a small corpus annotated with parse trees, a dictionary of possible lexicalized structures for each word in the training set, and a large pool of unlabeled text. The algorithm iteratively labels the entire data set with parse trees. Using empirical results based on parsing the Wall Street Journal corpus, we show that training a statistical parser on the combined labeled and unlabeled data strongly outperforms training only on the labeled data. Our co-training method is a mostly unsupervised algorithm that replaces the human by having two parsers label training examples for each other.
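The precision and recall figures quoted above are labeled-bracketing scores in the PARSEVAL style, computed with evalb and its standard parameter file. As a reminder of what such numbers measure, here is a deliberately simplified per-sentence scorer; it ignores evalb details such as punctuation handling and label equivalences.

def parseval(gold_brackets, test_brackets):
    """Labeled bracketing precision/recall/F over sets of (label, start, end)
    spans for one sentence (simplified; real evalb aggregates over a corpus)."""
    gold, test = set(gold_brackets), set(test_brackets)
    matched = len(gold & test)
    precision = matched / len(test) if test else 0.0
    recall = matched / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

gold = {("S", 0, 7), ("NP", 0, 2), ("VP", 2, 7), ("NP", 3, 5)}
test = {("S", 0, 7), ("NP", 0, 2), ("VP", 2, 7), ("PP", 3, 5)}
print(parseval(gold, test))  # (0.75, 0.75, 0.75)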
knowledgefree induction of inflectional morphologies we propose an algorithm to automatically induce the morphology of inflectional languages using only text corpora and no human input our algorithm combines cues from orthography semantics and syntactic distributions to induce morphological relationships in german dutch and english using celex as a gold standard for evaluation we show our algorithm to be an improvement over any knowledgefree algorithm yet proposed many nlp tasks such as building machinereadable dictionaries are dependent on the results of morphological analysiswhile morphological analyzers have existed since the early 1960s current algorithms require human labor to build rules for morphological structurein an attempt to avoid this laborintensive process recent work has focused on machinelearning approaches to induce morphological structure using large corporain this paper we propose a knowledgefree algorithm to automatically induce the morphology structures of a languageour algorithm takes as input a large corpus and produces as output a set of conflation sets indicating the various inflected and derived forms for each word in the languageas an example the conflation set of the word abuse would contain abuse abused abuses abusive abusively and so forthour algorithm extends earlier approaches to morphology induction by combining various induced information sources the semantic relatedness of the affixed forms using a latent semantic analysis approach to corpusbased semantics affix frequency syntactic context and transitive closureusing the handlabeled celex lexicon as our gold standard the current version of our algorithm achieves an fscore of 881 on the task of identifying conflation sets in english outperforming earlier algorithmsour algorithm is also applied to german and dutch and evaluated on its ability to find prefixes suffixes and circumfixes in these languagesto our knowledge this serves as the first evaluation of complete regular morphological induction of german or dutch have evaluated induction algorithms on morphological subproblems in germanprevious morphology induction approaches have fallen into three categoriesthese categories differ depending on whether human input is provided and on whether the goal is to obtain affixes or complete morphological analysiswe here briefly describe work in each categorysome researchers begin with some initial humanlabeled source from which they induce other morphological componentsin particular xu and croft use word context derived from a corpus to refine porter stemmer outputgaussier induces derivational morphology using an inflectional lexicon which includes part of speech informationgrabar and zweigenbaum use the snomed corpus of semanticallyarranged medical terms to find semanticallymotivated morphological relationshipsalso yarowsky and wicentowski obtained outstanding results at inducing english past tense after beginning with a list of the open class roots in the language a table of a languages inflectional parts of speech and the canonical suffixes for each part of speecha second knowledgefree category of research has focused on obtaining affix inventoriesbrent et al used minimum description length to find the most datacompressing suffixeskazakov does something akin to this using mdl as a fitness metric for evolutionary computingdéjean uses a strategy similar to that of harris he declares that a stem has ended when the number of characters following it exceed some given threshold and identifies any residual following 
semantic relations we identified those word pairs the stems as suffixes that have strong semantic correlations as being due to the existence of morphological ambiguity finding affixes alone does not constitute a complete morphological analysishence the last category of research is also knowledgefree but attempts to induce for each morphological variants of each otherwith the exception of word segmentation we provided no human information to our systemwe applied our system to an english corpus and evaluated by comparing each words conflation set as produced by our algorithm to those derivable from celex word of a corpus a complete analysissince our most of the existing algorithms described focus on approach falls into this category jacquemin and déjean describe work on prefixes we describe work in this area in more detailnone of these algorithms consider the general jacquemin deems pairs of word ngrams as morphologically related if two words in the first ngram have the same first few letters as two words in the second ngram and if there is a suffix for each stem whose length is less than k he also clusters groups of words having the same kinds of word endings which gives an added performance boosthe applies his algorithm to a french term list and scores based on sampled byhand evaluationgoldsmith tries to automatically sever each word in exactly one place in order to establish a potential set of stems and suffixeshe uses the expectationmaximization algorithm and mdl as well as some triage procedures to help eliminate inappropriate parses for every word in a corpushe collects the possible suffixes for each stem and calls these signatures which give clues about word classeswith the exceptions of capitalization removal and some word segmentation goldsmith algorithm is otherwise knowledgefreehis algorithm linguistica is freely available on the internetgoldsmith applies his algorithm to various languages but evaluates in english and frenchin our earlier work we generated a list of n candidate suffixes and used this list to identify word pairs which share the same stem but conclude with distinct candidate suffixeswe then applied latent semantic analysis as a method of automatically determining semantic relatedness between word pairsusing statistics from the conditions of circumfixing or infixing nor are they applicable to other language types such as agglutinative languages additionally most approaches have centered around statistics of orthographic propertieswe had noted previously however that errors can arise from strictly orthographic systemswe had observed in other systems such errors as inappropriate removal of valid affixes failure to resolve morphological ambiguities and pruning of semiproductive affixes yet we illustrated that induced semantics can help overcome some of these errorshowever we have since observed that induced semantics can give rise to different kinds of problemsfor instance morphological variants may be semantically opaque such that the meaning of one variant cannot be readily determined by the other additionally highfrequency function words may be conflated due to having weak semantic information coupling semantic and orthographic statistics as well as introducing induced syntactic information and relational transitivity can help in overcoming these problemstherefore we begin with an approach similar to our previous algorithmyet we build upon this algorithm in several ways in that we 1 consider circumfixes 2 automatically identify capitalizations by treating them 
similar to prefixes 3 incorporate frequency information 4 use distributional information to help identify syntactic properties and 5 use transitive closure to help find variants that may not have been found to be semantically related but which are related to mutual variantswe then apply these strategies to english german and dutchwe evaluate our algorithm figure 2yet using this approach there may be against the humanlabeled celex lexicon in all circumfixes whose endings will be overlooked in three languages and compare our results to those the search for suffixes unless we first remove all that the goldsmith and schonejurafsky algorithms candidate prefixestherefore we build a lexicon would have obtained on our same datawe show consisting of all words in our corpus and identify all how each of our additions result in progressively word beginnings with frequencies in excess of some better overall solutions threshold we call these pseudoprefixeswe as in our earlier approach we begin by generating from an untagged corpus a list of word pairs that might be morphological variantsour algorithm has changed somewhat though since we previously sought word pairs that vary only by a prefix or a suffix yet we now wish to generalize to those with circumfixing differenceswe use circumfix to mean true circumfixes like the german get as well as combinations of prefixes and suffixesit should be mentioned also that we assume the existence of languages having valid circumfixes that are not composed merely of a prefix and a suffix that appear independently elsewhereto find potential morphological variants our first goal is to find word endings which could serve as suffixeswe had shown in our earlier work how one might do this using a character tree or trie to demonstrate how this is done suppose our initial lexicon sc contained the words align real aligns realign realigned react reacts and reacted due to the high frequency occurrence of re suppose it is identified as a pseudoprefixif we strip off re from all words and add all residuals to a trie the branch of the trie of words beginning with a is depicted in figure 2in our earlier work we showed that a majority of the regular suffixes in the corpus can be found by identifying trie branches that appear repetitivelyby branch we mean those places in the trie where some splitting occursin the case of figure 2 for example the branches null s and ed each appear twicewe assemble a list of all trie branches that occur some minimum number of times and refer to such as potential suffixesgiven this list we can now find potential prefixes using a similar strategyusing our original lexicon we can now strip off all potential suffixes from each word and form a new augmented lexiconthen if we reverse the ordering on the words and insert them into a trie the branches that are formed will be potential prefixes before describing the last steps of this procedure it is beneficial to define a few terms our final goal in this first stage of induction is to find all of the possible rules and their corresponding rulesetswe therefore reevaluate each word in the original lexicon to identify all potential circumfixes that could have been valid for the wordfor example suppose that the lists of potential suffixes and prefixes contained ed and re respectivelynote also that null exists by default in both lists as wellif we consider the word realigned from our lexicon sc we would find that its potential circumfixes would be nulled renull and reed and the corresponding pseudostems would be 
realign aligned and align respectively from sc we also note that circumfixes reed and nulling share the pseudostems us align and view so a rule could be created reed_t5 where t5 is an acceptance thresholdwe showed in our earlier work that t585 affords high overall precision while still identifying most valid morphological relationshipsthe first major change to our previous algorithm is an attempt to overcome some of the weaknesses of purely semanticbased morphology induction by incorporating information about affix frequenciesas validated by kazakov high frequency word endings and beginnings in inflectional languages are very likely to be legitimate affixesin english for example the highest frequency rule is secelex suggests that 997 of our ppmvs for this rule would be truehowever since the purely semanticbased approach tends to select only relationships with contextually similar meanings only 92 of the ppmvs are retainedthis suggests that one might improve the analysis by supplementing semantic probabilities with orthographicbased probabilities our approach to obtaining prorth is motivated by an appeal to minimum edit distance med has been applied to the morphology induction problem by other researchers med determines the minimumweighted set of insertions substitutions and deletions required to transform one word into anotherfor example only a single deletion is required to transform rates into rate whereas two substitutions and an insertion are required to transform it into rating effectively if cost is transforming cost cost cost whereas costcostmore generally suppose word x has circumfix c1b1e1 and pseudostem s and word y has circumfix c2 b2e2 also with pseudostem sthen costcostcostsince we are free to choose whatever cost function we desire we can equally choose one whose range lies in the interval of 01hence we can assign consider table 2 which is a sample of ppmvs prorthhowever note that there is a path that can be followed along solid edges from every correct word to every other correct variantthis suggests that taking into consideration link transitivity may drastically reduce the number of deletionsthere are two caveats that need to be considered for transitivity to be properly pursuedthe first caveat if no rule exists that would transform x into z we will assume that despite the fact that there may be a probabilistic path between the two we will disregard such a paththe second caveat is that the algorithms we test againstfurthermore since we will say that paths can only consist of solid celex has limited coverage many of these loweredges namely each pr on every path must frequency words could not be scored anywaythis exceed the specified threshold cutoff also helps each of the algorithms to obtain given these constraints suppose now there is a stronger statistical information on the words they do transitive relation from x to z by way of some process which means that any observed failures intermediate path œiy1y2 ytthat is assume cannot be attributed to weak statistics there is a path xy1 y1y2ytzsuppose morphological relationships can be represented as also that the probabilities of these relationships are directed graphsfigure 6 for instance illustrates respectively p0 p1 p2ptif is a decay factor in the directed graph according to celex of words the unit interval accounting for the number of link associated with conduct we will call the words separations then we will say that the pr of such a directed graph the conflation set for any of along path œi has probability pr quott p 
it6 p we the words in the graphdue to the difficulty in combine the probabilities of all independent paths developing a scoring algorithm to compare directed between x and z according to figure 5 graphs we will follow our earlier approach and only function branchprobbetween prob0 foreach independent path œj return prob if the returned probability exceeds t5 we declare x and z to be morphological variants of each otherwe compare this improved algorithm to our former algorithm as well as to goldsmith linguistica we use as input to our system 67 million words of english newswire 23 million of german and 67 million of dutchour gold standards are the handtagged morphologicallyanalyzed celex lexicon in each of these languages we apply the algorithms only to those words of our corpora with frequencies of 10 or moreobviously this cutoff slightly limits the generality of our results but it also greatly decreases processing time for all of compare induced conflation sets to those of celexto evaluate we compute the number of correct inserted and deleted words each algorithm predicts for each hypothesized conflation setif xw represents word w conflation set according to an algorithm and if yw represents its celexbased conflation set then in making these computations we disregard any celex words absent from our data set and vice versamost capital words are not in celex so this process also discards themhence we also make an augmented celex to incorporate capitalized formstable 5 uses the above scoring mechanism to compare the fscores of our system at a cutoff threshold of 85 to those of our earlier algorithm at the same threshold goldsmith and a baseline system which performs no analysis the s and c columns respectively indicate performance of systems when scoring for suffixing and circumfixing the a column shows circumfixing performance using the augmented celexspace limitations required that we illustrate a scores for one language only but performance in the other two language is similarly degradedboxes are shaded out for algorithms not designed to produce circumfixesnote that each of our additions resulted in an overall improvement which held true across each of the three languagesfurthermore using tenfold cross validation on the english data we find that fscore differences of the s column are each statistically significant at least at the 95 levelwe have illustrated three extensions to our earlier morphology induction work in addition to induced semantics we incorporated induced orthographic syntactic and transitive information resulting in almost a 20 relative reduction in overall induction errorwe have also extended the work by illustrating performance in german and dutch where to our knowledge complete morphology induction performance measures have not previously been obtainedlastly we showed a mechanism whereby circumfixes as well as combinations of prefixing and suffixing can be induced in lieu of the suffixonly strategies prevailing in most previous researchfor the future we expect improvements could be derived by coupling this work which focuses primarily on inducing regular morphology with that of yarowsky and wicentowski who assume some information about regular morphology in order to induce irregular morphologywe also believe that some findings of this work can benefit other areas of linguistic induction such as part of speechthe authors wish to thank the anonymous reviewers for their thorough review and insightful comments
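The candidate-affix stage described above comes down to counting how often a word ending recurs at branch points of a trie built over pseudo-prefix-stripped words. The sketch below is a much simplified stand-in: it omits the pseudo-prefix stripping and the explicit trie, and its length bound and frequency threshold are invented, but it conveys the counting idea.

from collections import Counter

def candidate_suffixes(lexicon, max_len=4, min_count=3):
    """Collect frequent word endings as potential suffixes.

    An ending (including the empty NULL suffix) is kept when it terminates
    at least min_count words that retain a residual stem of reasonable length.
    """
    ending_counts = Counter()
    for word in lexicon:
        for k in range(0, max_len + 1):        # k = 0 yields the NULL suffix
            if len(word) - k >= 3:             # require a minimal residual stem
                ending_counts[word[len(word) - k:]] += 1
    return {s for s, c in ending_counts.items() if c >= min_count}

lexicon = ["align", "aligns", "aligned", "realign", "realigns",
           "react", "reacts", "reacted", "view", "views", "viewed"]
print(sorted(candidate_suffixes(lexicon)))
# prints ['', 'd', 'ed', 's']: NULL, s and ed survive, plus a spurious d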
N01-1024
knowledgefree induction of inflectional morphologieswe propose an algorithm to automatically induce the morphology of inflectional languages using only text corpora and no human inputour algorithm combines cues from orthography semantics and syntactic distributions to induce morphological relationships in german dutch and englishusing celex as a gold standard for evaluation we show our algorithm to be an improvement over any knowledgefree algorithm yet proposedwe use latent semantic analysis to find prefixes suffixes and circumfixes in german dutch and english
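The orthographic cue discussed earlier in this entry rests on minimum edit distance: transforming rates into rate needs a single deletion, while rates into rating needs two substitutions and an insertion. The sketch below is a generic weighted edit distance plus one arbitrary squashing of cost into a [0, 1] probability; the unit costs and the pr_orth mapping are my own assumptions, not the paper's cost function.

```python
# Minimal edit-distance sketch for the orthographic probability discussed above.
def edit_distance(a, b, ins=1.0, dele=1.0, sub=1.0):
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * dele
    for j in range(1, n + 1):
        d[0][j] = j * ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + dele,      # deletion
                          d[i][j - 1] + ins,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

def pr_orth(a, b):
    """Map a cost into [0, 1]; one arbitrary choice of squashing function."""
    return 1.0 / (1.0 + edit_distance(a, b))

print(edit_distance("rates", "rate"))    # 1.0  (single deletion)
print(edit_distance("rates", "rating"))  # 3.0  (two substitutions + one insertion)
print(pr_orth("rates", "rate"), pr_orth("rates", "rating"))
```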
chunking with support vector machines we apply support vector machines to identify english base phrases svms are known to achieve high generalization performance even with input data of high dimensional feature spaces furthermore by the kernel principle svms can carry out training with smaller computational overhead independent of their dimensionality we apply weighted voting of 8 svmsbased systems trained with distinct chunk representations experimental results show that our approach achieves higher accuracy than previous approaches chunking is recognized as series of processes first identifying proper chunks from a sequence of tokens and second classifying these chunks into some grammatical classesvarious nlp tasks can be seen as a chunking taskexamples include english base noun phrase identification english base phrase identification japanese chunk identification and named entity extractiontokenization and partofspeech tagging can also be regarded as a chunking task if we assume each character as a tokenmachine learning techniques are often applied to chunking since the task is formulated as estimating an identifying function from the information available in the surrounding contextvarious machine learning approaches have been proposed for chunking conventional machine learning techniques such as hidden markov model and maximum entropy model normally require a careful feature selection in order to achieve high accuracythey do not provide a method for automatic selection of given feature setsusually heuristics are used for selecting effective features and their combinationsnew statistical learning techniques such as support vector machines and boosting have been proposedthese techniques take a strategy that maximizes the margin between critical samples and the separating hyperplanein particular svms achieve high generalization even with training data of a very high dimensionfurthermore by introducing the kernel function svms handle nonlinear feature spaces and carry out the training considering combinations of more than one featurein the field of natural language processing svms are applied to text categorization and syntactic dependency structure analysis and are reported to have achieved higher accuracy than previous approachesin this paper we apply support vector machines to the chunking taskin addition in order to achieve higher accuracy we apply weighted voting of 8 svmbased systems which are trained using distinct chunk representationsfor the weighted voting systems we introduce a new type of weighting strategy which are derived from the theoretical basis of the svmslet us define the training samples each of which belongs either to positive or negative class as is a feature vector of theth sample represented by an dimensional vector is the class or negative class label of theth sampleis the number of the given training samplesin the basic svms framework we try to separate the positive and negative samples by a hyperplane expressed as svms find an optimal hyperplane which separates the training data into two classeswhat does optimal meanin order to define it we need to consider the margin between two classesfigure 1 illustrates this ideasolid lines show two possible hyperplanes each of which correctly separates the training data into two classestwo dashed lines parallel to the separating hyperplane indicate the boundaries in which one can move the separating hyperplane without any misclassificationwe call the distance between those parallel dashed lines as marginsvms find the 
separating hyperplane which maximizes its marginprecisely two dashed lines and margin can be expressed as to maximize this margin we should minimize in other words this problem becomes equivalent to solving the following optimization problem the training samples which lie on either of two dashed lines are called support vectorsit is known that only the support vectors in given training data matterthis implies that we can obtain the same decision function even if we remove all training samples except for the extracted support vectorsin practice even in the case where we cannot separate training data linearly because of some noise in the training data etc we can build the separating linear hyperplane by allowing some misclassificationsthough we omit the details here we can build an optimal hyperplane by introducing a soft margin parameter which trades off between the training error and the magnitude of the marginfurthermore svms have a potential to carry out the nonlinear classificationthough we leave the details to the optimization problem can be rewritten into a dual form where all feature vectors appear in their dot productsby simply substituting every dot product of and in dual form with a certain kernel function svms can handle nonlinear hypothesesamong many kinds of kernel functions available we will focus on the th polynomial kernel use ofth polynomial kernel functions allows us to build an optimal separating hyperplane which takes into account all combinations of features up tostatistical learning theory states that training error and test error hold the following theoremtheorem 1 if is the vc dimension ofthe class functions implemented by some machine learning algorithms then for all functions of that class with a probability of at least the risk is bounded by where is a nonnegative integer called the vapnik chervonenkis dimension and is a measure of the complexity of the given decision functionthe rhs term of is called vc boundin order to minimize the risk we have to minimize the empirical risk as well as vc dimensionit is known that the following theorem holds for vc dimension and margin theorem 2 suppose as the dimension of given training samples as the margin and as the smallest diameter which encloses all training sample then vc dimension of the svms are bounded by in order to minimize the vc dimension we have to maximize the margin which is exactly the strategy that svms takevapnik gives an alternative bound for the risktheorem 3 suppose is an error rate estimated by leaveoneout procedure is bounded as leaveoneout procedure is a simple method to examine the risk of the decision function first by removing a single sample from the training data we construct the decision function on the basis of the remaining training data and then test the removed samplein this fashion we test allsamples of the training data usingdifferent decision functions is a natural consequence bearing in mind that support vectors are the only factors contributing to the final decision functionnamely when the every removed support vector becomes error in leaveoneout procedure becomes the rhs term of in practice it is known that this bound is less predictive than the vc boundthere are mainly two types of representations for proper chunksone is insideoutside representation and the other is startend representationthis representation was first introduced in and has been applied for base np chunkingthis method uses the following set of three tags for representing proper chunksi current token is inside of a 
chunko current token is outside of any chunkb current token is the beginning of a chunk which immediately follows another chunktjong kim sang calls this method as iob1 representation and introduces three alternative versions iob2ioe1 and ioe2 iob2 a b tag is given for every token which exists at the beginning of a chunkother tokens are the same as iob1this method has been used for the japanese named entity extraction task and requires the following five tags for representing proper chunks 11originally uchimoto uses ceyouos representationhowever we rename them as bioes for our purpose since b current token is the start of a chunk consisting of more than one tokene current token is the end of a chunk consisting of more than one tokeni current token is a middle of a chunk consisting of more than two tokenss current token is a chunk consisting of only one tokeno current token is outside of any chunkexamples of these five representations are shown in table 1if we have to identify the grammatical class of each chunk we represent them by a pair of an iobes label and a class labelfor example in iob2 representation bvp label is given to a token which represents the beginning of a verb base phrase basically svms are binary classifiers thus we must extend svms to multiclass classifiers in order to classify three or more classesthere are two popular methods to extend a binary classification task to that of classesone is one class vs all othersthe idea is to build classifiers so as to separate one class from all othersthe other is pairwise classificationthe idea is to build classifiers considering all pairs of classes and final decision is given by their weighted votingthere are a number of other methods to extend svms to multiclass classifiersfor example dietterich and bakiri and allwein introduce a unifying framework for solving the multiclass problem we want to keep consistency with insidestart representation by reducing them into binary modelshowever we employ the simple pairwise classifiers because of the following reasons in general svms require training cost thus if the size of training data for individual binary classifiers is small we can significantly reduce the training costalthough pairwise classifiers tend to build a larger number of binary classifiers the training cost required for pairwise method is much more tractable compared to the one vs all others some experiments report that a combination of pairwise classifiers performs better than the one vs all othersfor the feature sets for actual training and classification of svms we use all the information available in the surrounding context such as the words their partofspeech tags as well as the chunk labelsmore precisely we give the following features to identify the chunk label for theth word hereis the word appearing atth position is the pos tag of and is the chunk label forth wordin addition we can reverse the parsing direction by using two chunk tags which appear to the rhs of the current token in this paper we call the method which parses from left to right as forward parsing and the method which parses from right to left as backward parsingsince the preceding chunk labels are not given in the test data they are decided dynamically during the tagging of chunk labelsthe technique can be regarded as a sort of dynamic programming matching in which the best answer is searched by maximizing the total certainty score for the combination of tagsin using dp matching we limit a number of ambiguities by applying beam search with width in 
conll 2000 shared task the number of votes for the class obtained through the pairwise voting is used as the certain score for beam search with width 5 in this paper however we apply deterministic method instead of applying beam search with keeping some ambiguitiesthe reason we apply deterministic method is that our further experiments and investigation for the selection of beam width shows that larger beam width dose not always give a significant improvement in the accuracygiven our experiments we conclude that satisfying accuracies can be obtained even with the deterministic parsinganother reason for selecting the simpler setting is that the major purpose of this paper is to compare weighted voting schemes and to show an effective weighting method with the help of empirical risk estimation frameworkstjong kim sang et al report that they achieve higher accuracy by applying weighted voting of systems which are trained using distinct chunk representations and different machine learning algorithms such as mbl me and igtreeit is wellknown that weighted voting scheme has a potential to maximize the margin between critical samples and the separating hyperplane and produces a decision function with high generalization performancethe boosting technique is a type of weighted voting scheme and has been applied to many nlp problems such as parsing partofspeech tagging and text categorizationin our experiments in order to obtain higher accuracy we also apply weighted voting of 8 svmbased systems which are trained using distinct chunk representationsbefore applying weighted voting method first we need to decide the weights to be given to individual systemswe can obtain the best weights if we could obtain the accuracy for the true test datahowever it is impossible to estimate themin boosting technique the voting weights are given by the accuracy of the training data during the iteration of changing the frequency of training datahowever we cannot use the accuracy of the training data for voting weights since svms do not depend on the frequency of training data and can separate the training data without any misclassification by selecting the appropriate kernel function and the soft margin parameterin this paper we introduce the following four weighting methods in our experimentswe give the same voting weight to all systemsthis method is taken as the baseline for other weighting methodsdividing training data into portions we employ the training by using portions and then evaluate the remaining portionin this fashion we will have individual accuracyfinal voting weights are given by the average of these accuraciesthe value of which represents the smallest diameter enclosing all of the training data is approximated by the maximum distance from the origin2we consider two parsing directions for each representation ie systems for a single training data setthen we employ svms training using these independent chunk representationsleaveoneout bound for each of 8 systemsas for cross validation we employ the steps 1 and 2 for each divided training data and obtain the weights4we test these 8 systems with a separated test data setbefore employing weighted voting we have to convert them into a uniform representation since the tag sets used in individual 8 systems are differentfor this purpose we reconvert each of the estimated results into 4 representations 5we employ weighted voting of 8 systems with respect to the converted 4 uniform representations and the 4 voting schemes respectivelyfinally we have 4 results for 
our experimentsalthough we can use models with iobesf or iobesb representations for the committees for the weighted voting we do not use them in our voting experimentsthe reason is that the number of classes are different and the estimated vc and loo bound cannot straightforwardly be compared with other models that have three classes under the same conditionwe conduct experiments with iobesf and iobesb representations only to investigate how far the difference of various chunk representations would affect the actual chunking accuracieswe use the following three annotated corpora for our experimentsbase np standard data set this data set was first introduced by and taken as the standard data set for basenp identification task2this data set consists of four sections of the wall street journal part of the penn treebank for the training data and one section for the test datathe data has partofspeech tags annotated by the brill taggerbase np large data set this data set consists of 20 sections of the wsj part of the penn treebank for the training data and one section for the test datapos tags in this data sets are also annotated by the brill taggerwe omit the experiments iob1 and ioe1 representations for this training data since the data size is too large for our current svms learning programin case of iob1 and ioe1 the size of training data for one classifier which estimates the class i and o becomes much larger compared with iob2 and ioe2 modelsin addition we also omit to estimate the voting weights using cross validation method due to a large amount of training costchunking data set this data set was used for conll2000 shared taskin this data set the total of 10 base phrase classes are annotatedthis data set consists of 4 sections of the wsj part of the penn treebank for the training data and one section for the test data 3all the experiments are carried out with our software package tinysvm4 which is designed and optimized to handle large sparse feature vectors and large number of training samplesthis package can estimate the vc bound and leaveoneout bound automaticallyfor the kernel function we use the 2nd polynomial function and set the soft margin parameter to be 1in the basenp identification task the performance of the systems is usually measured with three rates precision recall and in this paper we refer to as accuracytable 2 shows results of our svms based chunking with individual chunk representationsthis table also lists the voting weights estimated by different approaches we also show the results of startend representation in table 2table 3 shows the results of the weighted voting of four different voting methods a uniform b cross validation c vc bound d leaveoneout boundtable 4 shows the precision recall and of the best result for each data setwe obtain the best accuracy when we apply ioe2b representation for basenps and chunking data setin fact we cannot find a significant difference in the performance between insideoutside and startend representationssassano and utsuro evaluate how the difference of the chunk representation would affect the performance of the systems based on different machine learning algorithmsthey report that decision list system performs better with startend representation than with insideoutside since decision list considers the specific combination of featuresas for maximum entropy they report that it performs better with insideoutside representation than with startend since maximum entropy model regards all features as independent and tries to catch the 
more general feature setswe believe that svms perform well regardless of the chunk representation since svms have a high generalization performance and a potential to select the optimal features for the given taskby applying weighted voting we achieve higher accuracy than any of single representation system regardless of the voting weightsfurthermore we achieve higher accuracy by applying cross validation and vcbound and leaveoneout methods than the baseline methodby using vc bound for each weight we achieve nearly the same accuracy as that of cross validationthis result suggests that the vc bound has a potential to predict the error rate for the true test data accuratelyfocusing on the relationship between the accuracy of the test data and the estimated weights we find that vc bound can predict the accuracy for the test data preciselyeven if we have no room for applying the voting schemes because of some realworld constraints the use of vc bound may allow to obtain the best accuracyon the other hand we find that the prediction ability of leaveoneout is worse than that of vc boundcross validation is the standard method to estimate the voting weights for different systemshowever cross validation requires a larger amount of computational overhead as the training data is divided and is repeatedly used to obtain the voting weightswe believe that vc bound is more effective than cross validation since it can obtain the comparable results to cross validation without increasing computational overheadtjong kim sang et al report that they achieve accuracy of 9386 for basenps data set and 9490 for basenpl data setthey apply weighted voting of the systems which are trained using distinct chunk representations and different machine learning algorithms such as mbl me and igtreeour experiments achieve the accuracy of 9376 9411 for basenps and 9529 9534 for basenpl even with a single chunk representationin addition by applying the weighted voting framework we achieve accuracy of 9422 for basenps and 9577 for basenpl data setas far as accuracies are concerned our model outperforms tjong kim sangs modelin the conll2000 shared task we achieved the accuracy of 9348 using iob2f representation 5by combining weighted voting schemes we achieve accuracy of 9391in addition our method also outperforms other methods based on the weighted votingapplying to other chunking tasks our chunking method can be equally applicable to other chunking task such as english pos tagging japanese chunk identification and named entity extractionfor future we will apply our method to those chunking tasks and examine the performance of the methodincorporating variable context length model in our experiments we simply use the socalled fixed context length modelwe believe that we can achieve higher accuracy by selecting appropriate context length which is actually needed for identifying individual chunk tagssassano and utsuro introduce a variable context length model for japanese named entity identification task and perform better resultswe will incorporate the variable context length model into our systemconsidering more predictable bound in our experiments we introduce new types of voting methods which stem from the theorems of svms vc bound and leaveoneout boundon the other hand chapelle and vapnik introduce an alternative and more predictable bound for the risk and report their proposed bound is quite useful for selecting the kernel function and soft margin parameterwe believe that we can obtain higher accuracy using this more 
predictable bound for the voting weights in our experimentsin this paper we introduce a uniform framework for chunking task based on support vector machines experimental results on wsj corpus show that our method outperforms other conventional machine learning frameworks such mbl and maximum entropy modelsthe results are due to the good characteristics of generalization and nonoverfitting of svms even with a high dimensional vector spacein addition we achieve higher accuracy by applying weighted voting of 8svm based systems which are trained using distinct chunk representations
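As a concrete illustration of the setup described above, the sketch below trains pairwise (one-vs-one) SVM classifiers with a 2nd-degree polynomial kernel and soft-margin parameter C = 1 on context features (surrounding words, POS tags, and the two preceding chunk tags). The training examples and feature values are fabricated for illustration, and scikit-learn stands in for the authors' TinySVM package.

```python
# Toy illustration (not the authors' TinySVM setup): pairwise SVMs with a
# degree-2 polynomial kernel and soft margin C = 1.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

# Context features for the i-th token: surrounding words, POS tags, and
# the chunk tags already assigned to the two preceding tokens.
train_feats = [
    {"w0": "He", "p0": "PRP", "w+1": "reckons", "p+1": "VBZ", "c-1": "O", "c-2": "O"},
    {"w0": "reckons", "p0": "VBZ", "w-1": "He", "p-1": "PRP", "c-1": "B-NP", "c-2": "O"},
    {"w0": "the", "p0": "DT", "w+1": "deficit", "p+1": "NN", "c-1": "B-VP", "c-2": "B-NP"},
    {"w0": "deficit", "p0": "NN", "w-1": "the", "p-1": "DT", "c-1": "B-NP", "c-2": "B-VP"},
]
train_tags = ["B-NP", "B-VP", "B-NP", "I-NP"]  # IOB2 chunk labels

vec = DictVectorizer()
X = vec.fit_transform(train_feats)

# One-vs-one (pairwise) multiclass SVM; the degree-2 polynomial kernel lets
# the classifier consider combinations of up to two features.
clf = SVC(kernel="poly", degree=2, coef0=1, C=1.0, decision_function_shape="ovo")
clf.fit(X, train_tags)

test = vec.transform([{"w0": "the", "p0": "DT", "w+1": "pound", "p+1": "NN",
                       "c-1": "B-VP", "c-2": "B-NP"}])
print(clf.predict(test))
```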
N01-1025
chunking with support vector machineswe apply support vector machines to identify english base phrases svms are known to achieve high generalization performance even with input data of high dimensional feature spacesfurthermore by the kernel principle svms can carry out training with smaller computational overhead independent of their dimensionalitywe apply weighted voting of 8 svmsbased systems trained with distinct chunk representationsexperimental results show that our approach achieves higher accuracy than previous approachesin this paper we develop an svmsbased chunking tool yamcha
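The weighted-voting step described above requires reconverting each system's output into a common chunk representation. A minimal way to do this is via an intermediate list of chunk spans, as in the sketch below; the helper names and the span-based intermediate form are my own, while the IOB2 and IOBES ("start/end") conventions follow the definitions in the text.

```python
# Sketch: convert an IOB2-tagged sequence to chunk spans and re-emit it in
# IOBES form, the kind of reconversion needed before weighted voting.
def iob2_to_spans(tags):
    """Return (start, end, type) spans from an IOB2 tag sequence."""
    spans, start, ctype = [], None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes the last chunk
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and ctype != tag[2:]):
            if start is not None:
                spans.append((start, i - 1, ctype))
                start, ctype = None, None
        if tag.startswith("B-"):
            start, ctype = i, tag[2:]
    return spans

def spans_to_iobes(spans, length):
    tags = ["O"] * length
    for start, end, ctype in spans:
        if start == end:
            tags[start] = "S-" + ctype               # single-token chunk
        else:
            tags[start] = "B-" + ctype
            tags[end] = "E-" + ctype
            for i in range(start + 1, end):
                tags[i] = "I-" + ctype
    return tags

iob2 = ["B-NP", "I-NP", "B-VP", "B-NP", "O", "B-PP", "B-NP", "I-NP", "I-NP"]
spans = iob2_to_spans(iob2)
print(spans)
print(spans_to_iobes(spans, len(iob2)))
```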
inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora e brill 1995 transformationbased errordriven learning and natural language processing a case study in part of tagging linguistics french sidesthere are two central limitations to this paradigm howeverthe first is the often very poor accuracy of word alignments due both to the current limitations of wordalignment algorithms and also to the often weak or incomplete inherent match between the two sides of a bilingual corpusthe paper will address and handle this problem through robust noisetolerant learning algorithms capable of being trained effectively on incomplete and highly inaccurate alignmentsthe second limitation is the potential mismatch in the annotation needs of two languages not all distinctions that may be desirable for one language are compatible or even present in a parallel language such as englishthe paper will discuss solutions to these languagelevel mismatches and will illustrate that at the level of nounphrase structure and core partofspeech tags essential annotations can be projected with remarkable effectiveness and coverage in many casesfinally the paper will empirically evaluate two major questions for each of the tasksthe approach and general algorithms investigated in this paper were initiated in conjunction with the egypt project of the 1999 johns hopkins summer machine translation workshop previously tools for automatic wordalignment of bilingual corpora were not widely available outside ibm the research group pioneering statistical machine translation with the candide system the researchers who developed independent wordalignment tools tended to focus on translation model applications for their wordalignments rather than the induction of standalone monolingual analyzers via crosslanguage projectionfor example kupiec began with existing xerox monolingual bracketers to improve translation alignments rather than the conversethe primary exception has been in the area of parallel bilingual parsingwu proposed a framework for inversion transduction grammars where parallel corpora in languages such as english and chinese are parsed concurrently with crosslanguage order differences captured via mobilelike cfg production reorderingstructural relationships in one language help constrain structural relationships in the second languageevaluation on nounphrase bracketing showed 78 precision for chinese and 80 precision for englishthus while remarkably effective for learning without humanannotated training data the algorithm does assume the existence of a parallel secondlanguage mirror for all sentences to be parsedalso wu observed significant performance degradation when either the word alignment or translation faithfulness in these pairs are weakthis further motivates the noiserobust training and standalone application of our current workin a related framework jones and havrilla investigated the use of twistedpair grammars for syntactic transfergiven an existing hindiurdu sentence parse english output was generated by rotating subtrees using the constraints and preferences of the transduction grammarthe ability to generate candidate targetlanguage orderings in this manner offers great potential to productively constrain search in a statistical mt systemyet the assumption of existing syntactic analyses for each source language further motivates the need to induce such analysesthe data used in our experiments are the englishfrench canadian hansards and englishchinese hong kong 
hansards parallel records of parliamentary proceedings and publicationsboth corpora were wordaligned by the now publicly available egypt system and based on ibm model 3 statistical mt formalism the data sets used for our projection studies both contained approximately 2 million words in each languagetheir alignment was based on strictly wordbased model variants for english and characterbased model variants for chinese with no use of morphological analysis or stemming postagging bracketing outside dictionaries or any other external data source or annotation toolthus the experiments were carefully designed 1the two exceptions are endofsentence detection and tokenizationfor the french hansards before alignment only even when the higherror automatic alignments have been manually corrected yielding 69 and 78 direct projection accuracy respectively traditional supervised learning algorithms tend to perform poorly at this level of noise and a standard bigram tagger trained on the automatically aligned data achieves only 82 when evaluated on a heldout test setmore highly lexicalized learning algorithms exhibit even greater potential for overmodeling the specific projection errors of this datathus our research has focused on noiserobust techniques for distilling a conservative but effective tagger from this challenging raw projection datato do so we downweight or exclude training data segments identified as poorly aligned or likely noise use a conservative bigram learning algorithm and train the lexical prior and tagsequence models separately using aggressive generalization techniquesin a standard bigram tagging model one selects a tag sequence t for a word sequence w by argmax p pp where using standard independence assumptionssection 422 will discuss the estimation of p the following section describes the estimation of p which using bayes rule and direct measurement of p from the french data can be used to calculate p as inspection of the raw projected tag data shows the need for an improved estimation of ptemporarily excluding the case of compound alignments table 1 shows the observed frequency distributions of english tags projected onto four french words from 1to1 alignments for the core nvjri pos tagsnote that the total probability mass assigned to potentially correct tags is relatively low with fairly broad misassignment to incorrect tags for the given wordat the core tag level in particular we observe empirically that words in french have a strong tendency to have only 1 possible core pos tag and very rarely have more than 2even in english with relatively high p ambiguity only 037 of the tokens in the brown corpus are not covered by a word type two most frequent core tags and in french the percentage drops to 003thus we employ an aggressive reestimation in favor of this bias where for t the ith most frequent tag for w giving the large majority of the new probability mass to the single highest frequency core tagapplying this model recursively the finer grained subtag probabilities are assigned by selecting the two highest frequency subtags for each of the two remaining core tags and reallocating the core tag probability mass between these two as in the equations above as illustrated in table 2finally the issue arises of what to do with the 1ton phrasal alignment cases shown in figure 2 the potential seems to be great for function words to inherit substantial spurious probability mass via such datahowever the relatively frequent occurrence of correct 1to1 alignments the diffuse nature of the 
noise and the aggressive smoothing towards a single pos tag prevent these cases from adversely affecting final function word assignmentsgiven the lower frequency of most content words the potential risks of using these 1ton alignments are greater but so are the benefits given that the 1to1 alignments tend to be both sparse and somewhat biasedseveral options are under investigation for combining these two p estimators but the simplest and currently most effective is to perform basic interpolation between the tag distributions estimated from 1to1 alignments only and from the entire set of 1ton alignments as follows while this does indeed introduce substantial spurious tag probabilities initially the aggressive smoothing towards the majority tag described above tends to eliminate most of this noisethe major reason for estimating the lexical priors and tag sequence model separately is that a tag sequence bigram model has far fewer parameters than the lexical prior model and thus can be estimated on a very conservatively chosen set of filtered high confidence alignment datain contrast the lexical prior models already suffer from sparse data problems and are negatively affected by an orderofmagnitude data reduction even if the data is of higher qualitythe proposed model for identifying highquality tag sequence data for training considers two different information sources for sentence filteringweightingthe first is the final model3 alignment score for the sentence indicating a multisource measure of overall alignment confidencethe second measure more directly targets confidence in the tag sequences themselvesafter the lexical prior models have been trained sentences are also tested to identify those where the directly projected tag sequence is closely compatible with the estimated lexical prior probabilities for each worda pseudodivergence weighting is computed for a sentence of length k by i ejk_i log p penalizing words whose projected tag does not match the majority lexical prior2 sorting and filteringweighting by the cumulative normalized score yields a subset of training data where multiple sources essentially concur on the correct tag sequencewhile the potential exists that this higher confidence data subset may be biased in the sequence phenomena it contains the substantial noise reduction in preliminary investigations appears to be a worthwhile tradeofffuture work will focus on differential confidence weighting of sentence fragments and iterative reestimation2the exception is for function words located in a 1ton alignment sequencegiven the very high probability of these raw projections being incorrect and their prevalence it is expedient to attempt to correct these tag instances prior to the first tagsequencemodel training by replacing their raw projection tag with the majority lexical prior for the word from 421doing so salvages very large quantities of otherwise accurate tag sequence data with very little introduced noiseevaluation of the tagger projection and induction algorithms is conducted on two granularities of tagsetthe first tagset is at the level of core partofspeech tags such as verb noun pronoun adjective adverb preposition determiner etc for which english and french share remarkable compatibilitythe second is at the level of granularity captured in the english penn treebank tagset where for example singular and plural nouns are distinguishedas previously noted the goal of this work is not to induce potential french tagset features such as grammatical gender mood or subtle 
tense distinctions that do not appear in english but to focus on the algorithm effectiveness at accurately transferring tagging capabilities at the granularity that is present in english for independent evaluation data a 120kword handtagged french dataset generously provided by universite de montreal was usedhowever because both this text stream and tagset had no overlap with parallel data used to train the algorithm a simple mapping table between the tagsets was defined so that output could be compared on a compatible common denominatoran abbreviated version is shown in table 34 the large majority of these compatible divergences in bracketing convention are due to the projection algorithm tendency to bracket possessive compounds as single np and its tendency to bracket simple conjunctive compounds were usedfor french the increase from 59 fmeasure on direct projection to 91 fmeasure for the standalone induced bracketer shows that the training algorithm is able to generalize successfully from the very noisy raw projection data distilling a reasonably accurate model of basenp structure from this high degree of noisethis paper has shown that automatically wordaligned bilingual corpora can be used to induce both successful partofspeech taggers and nounphrase bracketersit has further illustrated that simple direct projection of pos and np annotations across languages is very noisy even when the word alignments have been manually correctednoiserobust data filtering and modeling procedures are shown to train effectively on this lowquality datathe resulting standalone partofspeech taggers and basenp bracketers significantly outperform the raw direct projections on which they were trainedthis indicates that they have successfully distilled and modeled the signal present in the very noisy projection data and are able to perform as respectable standalone monolingual tools with absolutely no humansupervised training data in the target languagethese results also show considerable potential for further improvement by cotraining with monolingually induced morphological analyzersthe standalone monolingual pos taggers and bracketers induced from wordaligned data also show potential for improving their initial alignmentsnp bracketings for both the source and target language can improve the ibm mt distortion model by boosting the probabilities of word alignments consistent with cohesive np structure and penalizing alignments that break np cohesiona standalone pos tagger applicable to new data can be used to improve statistical mt translation models both by supporting finer translation model granularity and by serving as a source of backoff alignment probabilities for previously unseen wordsthus tagging models induced from bilingual alignments can be used to improve these very alignments and hence improve their own training source
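The aggressive re-estimation of the lexical priors described above keeps essentially only the one or two most frequent core tags per word and pushes most of the mass onto the single most frequent tag. The exact formula is not recoverable from this text, so the power-based reallocation and the exponent in the sketch below are illustrative assumptions.

```python
# Illustrative (not the paper's exact formula): re-estimate P(tag | word)
# from noisy projected tag counts, keeping only the two most frequent core
# tags and giving most of the mass to the single most frequent one.
from collections import Counter

def reestimate_lexical_prior(tag_counts, keep=2, power=3.0):
    """tag_counts: Counter of projected core tags for one French word."""
    top = tag_counts.most_common(keep)
    weights = {tag: count ** power for tag, count in top}   # exponent is an assumption
    total = sum(weights.values())
    return {tag: w / total for tag, w in weights.items()}

# Projected-tag distribution for a word, in the spirit of table 1 above.
counts = Counter({"N": 310, "V": 42, "J": 27, "R": 6})
print(reestimate_lexical_prior(counts))
# Most of the mass lands on the majority core tag N, a little on V.
```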
N01-1026
inducing multilingual pos taggers and np bracketers via robust projection across aligned corporathis paper investigates the potential for projecting linguistic annotations including partofspeech tags and base noun phrase bracketings from one language to another via automatically wordaligned parallel corporafirst experiments assess the accuracy of unmodified direct transfer of tags and brackets from the source language english to the target languages french and chinese both for noisy machinealigned sentences and for clean handaligned sentencesperformance is then substantially boosted over both of these baselines by using training techniques optimized for very noisy data yielding 9496 core french partofspeech tag accuracy and 90 french bracketing fmeasure for standalone monolingual tools trained without the need for any humanannotated data in the given languagewe induce a partofspeech tagger for french and base noun phrase detectors for french and chinese via transfer from english resourceswe are the first to propose the use of parallel texts to bootstrap the creation of taggers
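The interpolation step mentioned above combines the tag distribution estimated from 1-to-1 alignments with the one estimated from the full set of 1-to-n alignments; the interpolation weight is not given in this text, so the sketch below treats lambda as a free parameter and the example distributions are invented.

```python
# Sketch of the lexical-prior interpolation described above:
# P(t|w) = lam * P_1to1(t|w) + (1 - lam) * P_all(t|w).
# The value of lam is an illustrative assumption.
def interpolate_priors(p_1to1, p_all, lam=0.5):
    tags = set(p_1to1) | set(p_all)
    return {t: lam * p_1to1.get(t, 0.0) + (1 - lam) * p_all.get(t, 0.0)
            for t in tags}

p_1to1 = {"N": 0.80, "V": 0.15, "J": 0.05}                 # from 1-to-1 alignments only
p_all  = {"N": 0.55, "V": 0.20, "J": 0.15, "R": 0.10}      # from all 1-to-n alignments
print(interpolate_priors(p_1to1, p_all))
```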
learning to paraphrase an unsupervised approach using multiplesequence alignment we address the texttotext generation problem of sentencelevel paraphrasing a phenomenon distinct from and more difficult than wordor phraselevel paraphrasing our apapplies alignment sentences gathered from unannotated comparable corpora it learns a set of paraphrasing patrepresented by lattice and automatically determines how to apply these patterns to rewrite new sentences the results of our evaluation experiments show that the system derives accurate paraphrases outperforming baseline systems this is a late parrotit is a stiffbereft of life it rests in peaceif you had not nailed him to the perch he would be pushing up the daisiesits metabolical processes are of interest only to historiansit is hopped the twigit is shuffled off this mortal coilit is rung down the curtain and joined the choir invisiblethis is an exparrot monty python pet shop a mechanism for automatically generating multiple paraphrases of a given sentence would be of significant practical import for texttotext generation systemsapplications include summarization and rewriting both could employ such a mechanism to produce candidate sentence paraphrases that other system components would filter for length sophistication level and so forthnot surprisingly therefore paraphrasing has been a focus of generation research for quite some another interesting application somewhat tangential to generation would be to expand existing corpora by providing time one might initially suppose that sentencelevel paraphrasing is simply the result of wordforword or phrasebyphrase substitution applied in a domain and contextindependent fashionhowever in studies of paraphrases across several domains this was generally not the casefor instance consider the following two sentences after the latest fed rate cut stocks rose across the boardwinners strongly outpaced losers after greenspan cut interest rates againobserve that fed and greenspan are interchangeable only in the domain of us financial mattersalso note that one cannot draw onetoone correspondences between single words or phrasesfor instance nothing in the second sentence is really equivalent to across the board we can only say that the entire clauses stocks rose across the board and winners strongly outpaced losers are paraphrasesthis evidence suggests two consequences we cannot rely solely on generic domainindependentlexical resources for the task of paraphrasing and sentencelevel paraphrasing is an important problem extending beyond that of paraphrasing smaller lexical unitsour work presents a novel knowledgelean algorithm that uses multiplesequence alignment to learn to generate sentencelevel paraphrases essentially from unannotated corpus data alonein contrast to previous work using msa for generation see bangalore et al and barzilay and lee for other uses of such data2002 we need neither parallel data nor explicit information about sentence semanticsrather we use two comparable corpora in our case collections of articles produced by two different newswire agencies about the same eventsthe use of related corpora is key we can capture paraphrases that on the surface bear little resemblance but that by the nature of the data must be descriptions of the same informationnote that we also acquire paraphrases from each of the individual corpora but the lack of clues as to sentence equivalence in single corpora means that we must be more conservative only selecting as paraphrases items that are structurally very 
similarour approach has three main stepsfirst working on each of the comparable corpora separately we compute lattices compact graphbased representations to find commonalities within groups of structurally similar sentencesnext we identify pairs of lattices from the two different corpora that are paraphrases of each other the identification process checks whether the lattices take similar argumentsfinally given an input sentence to be paraphrased we match it to a lattice and use a paraphrase from the matched lattices mate to generate an output sentencethe key features of this approach are focus on paraphrase generationin contrast to earlier work we not only extract paraphrasing rules but also automatically determine which of the potentially relevant rules to apply to an input sentence and produce a revised form using themflexible paraphrase typesprevious approaches to paraphrase acquisition focused on certain rigid types of paraphrases for instance limiting the number of argumentsin contrast our method is not limited to a set of a priorispecified paraphrase typesuse of comparable corpora and minimal use of knowledge resourcesin addition to the advantages mentioned above comparable corpora can be easily obtained for many domains whereas previous approaches to paraphrase acquisition required parallel corporawe point out that one such approach recently proposed by pang et al also represents paraphrases by lattices similarly to our method although their lattices are derived using parse informationmoreover our algorithm does not employ knowledge resources such as parsers or lexical databases which may not be available or appropriate for all domains a key issue since paraphrasing is typically domaindependentnonetheless our algorithm achieves good performanceprevious work on automated paraphrasing has considered different levels of paraphrase granularitylearning synonyms via distributional similarity has been wellstudied jacquemin and barzilay and mckeown identify phraselevel paraphrases while lin and pantel and shinyama et al acquire structural paraphrases encoded as templatesthese latter are the most closely related to the sentencelevel paraphrases we desire and so we focus in this section on templateinduction approacheslin and pantel extract inference rules which are related to paraphrases to improve question answeringthey assume that paths in dependency trees that take similar arguments are close in meaninghowever only twoargument templates are consideredshinyama et al also use dependencytree information to extract templates of a limited form like us they use articles written about the same event in different newspapers as dataour approach shares two characteristics with the two methods just described pattern comparison by analysis of the patterns respective arguments and use of nonparallel corpora as a data sourcehowever extraction methods are not easily extended to generation methodsone problem is that their templates often only match small fragments of a sentencewhile this is appropriate for other applications deciding whether to use a given template to generate a paraphrase requires information about the surrounding context provided by the entire sentenceoverview we first sketch the algorithms broad outlinesthe subsequent subsections provide more detailed descriptions of the individual stepsthe major goals of our algorithm are to learn recurring patterns in the data such as x y people z seriously where the capital letters represent variables pairings between such patterns that represent 
paraphrases for example between the pattern x y people z of them seriously and the pattern y were by x among them z were in serious conditionfigure 1 illustrates the main stages of our approachduring training pattern induction is first applied independently to the two datasets making up a pair of comparable corporaindividual patterns are learned by applying name substitution from a cluster of 49 similarities emphasized multiplesequence alignment to clusters of sentences describing approximately similar events these patterns are represented compactly by lattices we then check for lattices from the two different corpora that tend to take the same arguments these lattice pairs are taken to be paraphrase patternsonce training is done we can generate paraphrases as follows given the sentence the surprise bombing injured twenty people five of them seriously we match it to the lattice x y people z of them seriously which can be rewritten as y were by x among them z were in serious condition and so by substituting arguments we can generate twenty were wounded by the surprise bombing among them five were in serious condition or twenty were hurt by the surprise bombing among them five were in serious conditionour first step is to cluster sentences into groups from which to learn useful patterns for the multiplesequence techniques we will use this means that the sentences within clusters should describe similar events and have similar structure as in the sentences of figure 2this is accomplished by applying hierarchical completelink clustering to the sentences using a similarity metric based on word ngram overlap the only subtlety is that we do not want mismatches on sentence details causing sentences describing the same type of occurrence from being separated as this might yield clusters too fragmented for effective learning to take placewe therefore first replace all appearances of dates numbers and proper names2 with generic tokensclusters with fewer than ten sentences are discardedin order to learn patterns we first compute a multiplesequence alignment of the sentences in a given clusterpairwise msa takes two sentences and a scoring function giving the similarity between words it determines the highestscoring way to perform insertions deletions and changes to transform one of the sentences into the otherpairwise msa can be extended efficiently to multiple sequences via the iterative pairwise alignment a polynomialtime method commonly used in computational biology 3 the results can be represented in an intuitive form via a word lattice which compactly represents structural similarities between the clusters sentencesto transform lattices into generationsuitable patterns requires some understanding of the possible varieties of lattice structuresthe most important part of the transformation is to determine which words are actually instances of arguments and so should be replaced by slots the key intuition is that because the sentences in the cluster represent the same type of event such as a bombing but generally refer to different instances of said event areas of large variability in the lattice should correspond to argumentsto quantify this notion of variability we first formalize its opposite commonalitywe define backbone nodes as those shared by more than 50 of the clusters sentencesthe choice of 50 is not arbitrary it can be proved using the pigeonhole principle that our strictmajority criterion imposes a unique linear ordering of the backbone nodes that respects the word ordering within the 
sentences thus guaranteeing at least a degree of wellformedness and avoiding the problem of how to order backbone nodes occurring on parallel branches of the latticeonce we have identified the backbone nodes as points of strong commonality the next step is to identify the regions of variability between them as corresponding to the arguments of the propositions that the sentences representfor example in the top of figure 3 the words southern city settlement of namecoastal resort of name etc all correspond to the location of an event and could be replaced by a single slotfigure 3 shows an example of a lattice and the derived slotted lattice we give the details of the slotinduction process in the appendixnow if we were using a parallel corpus we could employ sentencealignment information to determine which lattices correspond to paraphrasessince we do not have this information we essentially approximate the parallelcorpus situation by correlating information from descriptions of the same event occurring in the two different corporaour method works as followsonce lattices for each corpus in our comparablecorpus pair are computed we identify lattice paraphrase pairs using the idea that paraphrases will tend to take the same values as arguments more specifically we take a pair of lattices from different corpora look back at the sentence clusters from which the two lattices were derived and compare the slot values of those crosscorpus sentence pairs that appear in articles written on the same day on the same topic we pair the lattices if the degree of matching is over a threshold tuned on heldout datafor example suppose we have two lattices slot1 bombed slot2 and slot3 was bombed by slot4 drawn from different corporaif in the first lattices sentence cluster we have the sentence the plane bombed the town and in the second lattices sentence cluster we have a sentence written on the same day reading the town was bombed by the plane then the corresponding lattices may well be paraphrases where slot1 is identified with slot4 and slot2 with slot3to compare the set of argument values of two lattices we simply count their word overlap giving double weight to proper names and numbers and discarding auxiliaries given a sentence to paraphrase we first need to identify which if any of our previouslycomputed sentence clusters the new sentence belongs most strongly towe do this by finding the best alignment of the sentence to the existing lattices4 if a matching lattice is found we choose one of its comparablecorpus paraphrase lattices to rewrite the sentence substituting in the argument values of the original sentencethis yields as many paraphrases as there are lattice pathsall evaluations involved judgments by native speakers of english who were not familiar with the paraphrasing systems under considerationwe implemented our system on a pair of comparable corpora consisting of articles produced between september 2000 and august 2002 by the agence francepresse and reuters news agenciesgiven our interest in domaindependent paraphrasing we limited attention to 9mb of articles collected using a tdtstyle document clustering system concerning individual acts of violence in israel and army raids on the palestinian territoriesfrom this data out parametertraining set we extracted 43 slotted lattices from the afp corpus and 32 slotted lattices from the reuters corpus and found 25 crosscorpus matching pairs since lattices contain multiple paths these yielded 6534 template pairs5 before evaluating the quality of the 
rewritings produced by our templates and lattices we first tested the quality of a random sample of just the template pairsin our instructions to the judges we defined two text units to be paraphrases if one of them can generally be substituted for the other without great loss of information 6 given a pair of templates produced by a system the judges marked them as paraphrases if for many instantiations of the templates variables the resulting text units were paraphrasesto put the evaluation results into context we wanted to compare against another system but we are not aware of any previous work creating templates precisely for the task of generating paraphrasesinstead we made a goodfaith effort to adapt the dirt system to the problem selecting the 6534 highestscoring templates it produced when run on our datasets was unsuitable for evaluation purposes because their paraphrase extraction component is too tightly coupled to the underlying information extraction systemit is important to note some important caveats in making this comparison the most prominent being that dirt was not designed with sentenceparaphrase generation in mind its templates are much shorter than ours which may have affected the evaluators judgments and was originally implemented on much larger data sets7 the point of this evaluation is simply to determine whether another corpusbased paraphrasefocused approach could easily achieve the same performance levelin brief the dirt system works as followsdependency trees are constructed from parsing a large corpusleaftoleaf paths are extracted from these dependency 7to cope with the corpussize issue dirt was trained on an 84mb corpus of middleeast news articles a strict superset of the 9mb we usedother issues include the fact that dirts output needed to be converted into english it produces paths like nofn tide nnnn which we transformed into y tide of x so that its output format would be the same as ours trees with the leaves serving as slotsthen pairs of paths in which the slots tend to be filled by similar values where the similarity measure is based on the mutual information between the value and the slot are deemed to be paraphraseswe randomly extracted 500 pairs from the two algorithms output setsof these 100 paraphrases made up a common set evaluated by all four judges allowing us to compute agreement rates in addition each judge also evaluated another individual set seen only by him or herself consisting of another 100 pairs the individual sets allowed us to broaden our samples coverage of the corpus8 the pairs were presented in random order and the judges were not told which system produced a given pairas figure 4 shows our system outperforms the dirt system with a consistent performance gap for all the judges of about 38 although the absolute scores vary the judges assessment of correctness was fairly constant between the full 100instance set and just the 50instance common set alonein terms of agreement the kappa value on the common set was 054 which corresponds to moderate agreement multiway agreement is depicted in figure 4 there we see that in 86 of 100 cases at least three of the judges gave the same correctness assessment and in 60 cases all four judges concurredfinally we evaluated the quality of the paraphrase sentences generated by our system thus testing all the system components pattern selection paraphrase acquisition and generationwe are not aware of another system generating sentencelevel paraphrasestherefore we used as a baseline a simple paraphrasing 
system that just replaces words with one of their randomlychosen wordnet synonyms the number of substitutions was set proportional to the number of words our method replaced in the same sentencethe point of this comparison is to check whether simple synonym substitution yields results comparable to those of our algorithm10 for this experiment we randomly selected 20 afp articles about violence in the middle east published later than the articles in our training corpusout of 484 sentences in this set our system was able to paraphrase 59 we found that after proper name substitution only seven sentences in the test set appeared in the training set11 which implies that lattices boost the generalization power of our method significantly from seven to 59 sentencesinterestingly the coverage of the system varied significantly with article lengthfor the eight articles of ten or fewer sentences we paraphrased 608 of the sentences per article on average but for longer articles only 93 of the sentences per article on average were paraphrasedour analysis revealed that long articles tend to include large portions that are unique to the article such as personal stories of the event participants which explains why our algorithm had a lower paraphrasing rate for such articlesall 118 instances were presented in random order to two judges who were asked to indicate whether the meaning had been preservedof the paraphrases generated by our system the two evaluators deemed 814 and 78 respectively to be valid whereas for the baseline system the correctness results were 695 and 661 respectivelyagreement according to the kappa statistic was 06note that judging full sentences is inherently easier than judging templates because template comparison requires considering a variety ofpossible slot values while sentences are selfcontained unitsfigure 5 shows two example sentences one where our msabased paraphrase was deemed correct by both judges and one where both judges deemed the msagenerated paraphrase incorrectexamination of the results indicates that the two systems make essentially orthogonal types of errorsthe baseline systems relatively poor performance supports our claim that wholesentence paraphrasing is a hard task even when accurate wordlevel paraphrases are givenwe presented an approach for generating sentence level paraphrases a task not addressed previouslyour method learns structurally similar patterns of expression from data and identifies paraphrasing pairs among them using a comparable corpusa flexible patternmatching procedure allows us to paraphrase an unseen sentence by matching it to one of the induced patternsour approach generates both lexical and structural paraphrasesanother contribution is the induction of msa lattices from nonparallel datalattices have proven advantageous in a number of nlp contexts but were usually produced from parallel data which may not be readily available for many applicationswe showed that word lattices can be induced from a type of corpus that can be easily obtained for many domains broadening the applicability of this useful representationwe are grateful to many people for helping us in this workwe thank stuart allen itai balaban hubie chen tom heyerman evelyn kleinberg carl sable and alex zubatov for acting as judgeseric breck helped us with translating the output of the dirt systemwe had numerous very useful conversations with all those mentioned above and with eli barzilay noemie elhadad jon kleinberg mirella lapata smaranda muresan and bo pangwe are very 
grateful to dekang lin for providing us with dirts outputwe thank the cornell nlp group especially eric breck claire cardie amanda hollandminkley and bo pang for helpful comments on previous draftsthis paper is based upon work supported in part by the national science foundation under itri am grant iis0081334 and a sloan research fellowshipany opinions findings and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the national science foundation or the sloan foundationif no more than of all the edges out of a backbone node lead to the same next node we have high enough variability to warrant inserting a slot nodeotherwise we incorporate reliable synonyms12 into the backbone structure by preserving all nodes that are reached by at least of the sentences passing through the two neighboring backbone nodesfurthermore all backbone nodes labelled with our special generic tokens are also replaced with slot nodes since they too probably represent arguments nodes with indegree lower than the synonymy threshold are removed under the assumption that they probably represent idiosyncrasies of individual sentencessee figure 6 for examplesfigure 3 shows an example of a lattice and the slotted lattice derived via the process just described
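The slot-insertion heuristic just described can be pictured with the short sketch below. The lattice encoding, the MAJORITY and SYNONYMY thresholds, and the GENERIC token set are assumptions for illustration; the text above elides the actual threshold values.

```python
# Sketch of the slot-insertion step described above.  The lattice is assumed
# to be stored as {node: Counter(successor -> number of sentences using that
# edge)}; MAJORITY and SYNONYMY are placeholder thresholds (the text elides
# the actual values) and GENERIC is a hypothetical set of generic tokens.
from collections import Counter

MAJORITY = 0.5   # assumed: max share of edges to one successor before slotting
SYNONYMY = 0.3   # assumed: min share of sentences needed to keep a synonym node
GENERIC = {"<name>", "<num>", "<date>"}

def region_action(lattice, backbone_node, region_counts, n_sentences):
    """Decide what happens between `backbone_node` and the next backbone node.

    region_counts maps each intervening node to the number of sentences (out
    of the n_sentences passing through the two neighboring backbone nodes)
    that visit it."""
    out_edges = lattice.get(backbone_node, Counter())
    total = sum(out_edges.values())
    if backbone_node in GENERIC:
        return "slot"                      # generic tokens mark argument positions
    if total and max(out_edges.values()) / total <= MAJORITY:
        return "slot"                      # high variability -> insert a slot node
    # low variability: keep only reliable synonyms, drop idiosyncratic nodes
    kept = [w for w, c in region_counts.items() if c / n_sentences >= SYNONYMY]
    return ("synonyms", kept)
```

In a real implementation the returned decision would drive an in-place rewrite of the lattice; here it only illustrates the thresholding logic.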
N03-1003
learning to paraphrase an unsupervised approach using multiplesequence alignmentwe address the texttotext generation problem of sentencelevel paraphrasing a phenomenon distinct from and more difficult than word or phraselevel paraphrasingour approach applies multiplesequence alignment to sentences gathered from unannotated comparable corpora it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentencesthe results of our evaluation experiments show that the system derives accurate paraphrases outperforming baseline systemswe propose to apply multiplesequence alignment for traditional sentencelevel prwe construct lattices over paraphrases using an iterative pairwise multiple sequence alignment algorithmwe propose a multisequence alignment algorithm that takes structurally similar sentences and builds a compact lattice representation that encodes local variationswe present an approach for generating sentence level paraphrases learning structurally similar patterns of expression from data and identifying paraphrasing pairs among them using a comparable corpus
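The summary above mentions building word lattices by iterative pairwise multiple-sequence alignment over structurally similar sentences. The sketch below shows the flavor of a single pairwise step using a plain Needleman-Wunsch alignment; the unit scoring scheme and the example sentences are assumptions for illustration, not the paper's actual scoring function or data.

```python
# Illustrative pairwise alignment step for lattice construction.  The scoring
# scheme (match=1, mismatch/gap=-1) is a placeholder assumption.
def align(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # backtrace into a list of (token_from_a, token_from_b) columns
    cols, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            cols.append((a[i - 1], b[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            cols.append((a[i - 1], None)); i -= 1
        else:
            cols.append((None, b[j - 1])); j -= 1
    return list(reversed(cols))

print(align("a car bomb exploded near the embassy".split(),
            "a bomb went off near the embassy".split()))
```

Columns where the two tokens match would become backbone nodes of the lattice; mismatched or gapped columns form the branching regions that later yield slots or synonym sets.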
inducing history representations for broad coverage statistical parsing we present a neural network method for inducing representations of parse histories and using these history representations to estimate the probabilities needed by a statistical leftcorner parser the resulting statistical parser achieves performance on the penn treebank which is only 06 below the best current parser for this task despite using a smaller vocabulary size and less prior linguistic knowledge crucial to this success is the use of structurally determined soft biases in inducing the representation of the parse history and no use of hard independence assumptions unlike most problems addressed with machine learning parsing natural language sentences requires choosing between an unbounded number of possible phrase structure treesthe standard approach to this problem is to decompose this choice into an unbounded sequence of choices between a finite number of possible parser actionsthis sequence is the parse for the phrase structure treewe can then define a probabilistic model of phrase structure trees by defining a probabilistic model of each parser action in its parse context and apply machine learning techniques to learn this model of parser actionsmany statistical parsers are based on a historybased model of parser actionsin these models the probability of each parser action is conditioned on the history of previous actions in the parsebut here again we are faced with an unusual situation for machine learning problems conditioning on an unbounded amount of informationa major challenge in designing a historybased statistical parser is choosing a finite representation of the unbounded parse history from which the probability of the next parser action can be accurately estimatedprevious approaches have used a handcrafted finite set of features to represent the parse history in the work presented here we automatically induce a finite set of real valued features to represent the parse historywe perform the induction of a history representation using an artificial neural network architecture called simple synchrony networks this machine learning method is specifically designed for processing unbounded structuresit allows us to avoid making a priori independence assumptions unlike with handcrafted history featuresbut it also allows us to make use of our a priori knowledge by imposing structurally specified and linguistically appropriate biases on the search for a good history representationthe combination of automatic feature induction and linguistically appropriate biases results in a historybased parser with stateoftheart performancewhen trained on just partofspeech tags the resulting parser achieves the best current performance of a nonlexicalized parser on the penn treebankwhen a relatively small vocabulary of words is used performance is only marginally below the best current parser accuracyif either the biases are reduced or the induced history representations are replaced with handcrafted features performance degradesthe parsing system we propose consists of two components one which estimates the parameters of a probability model for phrase structure trees and one which searches for the most probable phrase structure tree given these parametersthe probability model we use is generative and historybasedat each step the models stochastic process generates a characteristic of the tree or a word of the sentencethis sequence of decisions is the derivation of the tree which we will denote because there is a onetoone 
mapping from phrase structure trees to our derivations we can use the chain rule for conditional probabilities to derive the probability of a tree as the multiplication of the probabilities of each derivation decision conditional on that decisions prior derivation history the neural network is used to estimate the parameters of this probability modelto define the parameters we need to choose the ordering of the decisions in a derivation such as a topdown or shiftreduce orderingthe ordering which we use here is that of a form of leftcorner parser a leftcorner parser decides to introduce a node into the parse tree after the subtree rooted at the nodes first child has been fully parsedthen the subtrees for the nodes remaining children are parsed in their lefttoright orderwe use a version of leftcorner parsing which first applies rightbinarization to the grammar as has been done in previous work except that we binarize down to nullary rules rather than to binary rulesthis allows the choice of the children for a node to be done incrementally rather than all the children having to be chosen when the node is first introducedwe also extended the parsing strategy slightly to handle chomsky adjunction structures as a special casethe chomsky adjunction is removed and replaced with a special modifier link in the tree we also compiled some frequent chains of nonbranching nodes into a single node with a new label all these grammar transforms are undone before any evaluation of the output trees is performedan example of the ordering of the decisions in a derivation is shown by the numbering on the left in figure 1to precisely specify this ordering it is sufficient to characterize the state of the parser as a stack of nodes which are in the process of being parsed as illustrated on the right in figure 1the parsing strategy starts with a stack that contains a node labeled root and must end in the same configuration each parser action changes the stack and makes an associated specification of a characteristic of the parse treethe possible parser actions are the following where w is a tagword pair x and y are nonterminal labels and s is a stack of zero or more node labels shift(w) maps stack s to s w and specifies that w is the next word in the sentence project(y) maps stack s x to s y and specifies that y is the parent of x in the tree attach maps stack s y x to s y and specifies that y is the parent of x in the tree modify maps stack s y x to s y and specifies that y is the modifier parent of x in the tree any valid sequence of these parser actions is a derivation for a phrase structure tree
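To make the stack manipulations above concrete, here is a minimal sketch of the four actions under our reading of the description; the bookkeeping in the original parser (how completed nodes and tree links are recorded, in particular) may differ, and the toy derivation at the end is invented for illustration.

```python
# Minimal sketch of the four left-corner parser actions as stack operations,
# following our reading of the description above.  The stack holds labels of
# nodes that are still in the process of being parsed.
def shift(stack, word, tree):
    tree.append(("word", word))
    return stack + [word]                 # the shifted word is now on top

def project(stack, parent_label, tree):
    child = stack[-1]                     # completed left-corner child
    tree.append(("parent", parent_label, child))
    return stack[:-1] + [parent_label]    # the new parent replaces it on the stack

def attach(stack, tree, modifier=False):
    child, parent = stack[-1], stack[-2]  # completed node attaches to the node below
    rel = "modifier-parent" if modifier else "parent"
    tree.append((rel, parent, child))
    return stack[:-1]                     # the attached node is popped

# toy derivation for "the cat": shift, project NP, shift, attach
tree, stack = [], ["root"]
stack = shift(stack, "the/DT", tree)
stack = project(stack, "NP", tree)
stack = shift(stack, "cat/NN", tree)
stack = attach(stack, tree)
print(stack)   # ['root', 'NP']
print(tree)
```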
the neural network estimates the parameters in two stages first computing a representation of the derivation history and then computing a probability distribution over the possible decisions given that historyfor the second stage computing the decision probabilities we use standard neural network methods for probability estimation a loglinear model is used to estimate the probability distribution over the four types of decisions shifting projecting attaching and modifyinga separate loglinear model is used to estimate the probability distribution over node labels given that projecting is chosen and this label distribution is multiplied by the probability estimate for projecting to get estimates for the whole set of project decisions similarly the probability estimate for shifting the word which is actually observed in the sentence is computed with loglinear modelsthis means that values for all possible words need to be computed to do the normalizationthe high cost of this computation is reduced by splitting the computation of the shift probability into multiple stages first estimating a distribution over all possible tags and then estimating a distribution over the possible tagword pairs given the correct tag this means that only estimates for the tagword pairs with the correct tag need to be computedwe also reduced the computational cost of terminal prediction by replacing the very large number of lower frequency tagword pairs with tagunknownword pairs which are also used for tagword pairs which were not in the training setwe do not do any morphological analysis of unknown words although we would expect some improvement in performance if we dida variety of frequency thresholds were tried as reported in section 6the most novel aspect of our parsing model is the way in which the representation of the derivation history is computedchoosing this representation is a challenge for any historybased statistical parser because the history is of unbounded sizeloglinear models as with most probability estimation methods require that there be a finite set of features on which the probability is conditionedthe standard way to handle this problem is to handcraft a finite set of features which provides a sufficient summary of the history the probabilities are then assumed to be independent of all the information about the history which is not captured by the chosen featuresthe difficulty with this approach is that the choice of features can have a large impact on the performance of the system but it is not feasible to search the space of possible feature sets by handin this work we use a method for automatically inducing a finite representation of the derivation historythe method is a form of multilayered neural network called simple synchrony networks the output layer of this network is the loglinear model which computes the function discussed abovein addition the ssn has a hidden layer which computes a finite vector of real valued features from a sequence of inputs specifying the derivation history this hidden layer vector is the history representation it is analogous to the hidden state of a hidden markov model in that it represents the state of the underlying generative process and in that it is not explicitly specified in the output of the generative processthe mapping from the derivation history to the history representation is computed with the recursive application of a function which as will be discussed in the next section maps previous history representations plus predefined features of the derivation history to a realvalued vector because the function is nonlinear the induction of this history representation allows the training process to explore a much more general set of estimators than would be possible with a loglinear model alone 1 this generality makes this estimation method less dependent on the choice of input representation in addition because the inputs to this function include previous history representations the mapping is defined recursivelythis recursion allows the input to the function to be unbounded because an unbounded derivation history can be successively compressed into a fixedlength vector of history featurestraining a simple synchrony network is similar to training a loglinear modelfirst an appropriate error function is defined for the networks outputs and then some form of gradient descent learning is applied to search for a minimum of this error function2 this learning simultaneously tries to optimize the parameters of the output computation and the parameters of the mapping
from the derivation history to the history representationwith multilayered networks such as ssns this training is not guaranteed to converge to a global optimum but in practice a set of parameters whose error is close to the optimum can be foundthe reason no global optimum can be found is that it is intractable to find the optimal mapping from the derivation history to the history representationgiven this difficulty it is important to impose appropriate biases on the search for a good history representationwhen researchers choose a handcrafted set of features to represent the derivation history they are imposing a domaindependent bias on the learning process through the independence assumptions which are implied by this choicein this work we do not make any independence assumptions but instead impose soft biases to emphasize some features of the derivation history over othersthis is achieved through the choice of what features are input explicitly to the computation of and what other history representations 1as is standard is the sigmoid activation function applied to a weighted sum of its inputsmultilayered neural networks of this form can approximate arbitrary mappings from inputs to outputs whereas a loglinear model alone can only estimate probabilities where the categoryconditioned probability distributions of the predefined inputs are in a restricted form of the exponential family 2we use the crossentropy error function which ensures that the minimum of the error function converges to the desired probabilities as the amount of training data increases this implies that the minimum for any given dataset is an estimate of the true probabilitieswe use the online version of backpropagation to perform the gradient descent are also inputif the explicit features include the previous decision and the other history representations include the previous history representation then any information about the derivation history could conceivably be included in thus such a model is making no a priori independence assumptionshowever some of this information is more likely to be included than other of this information which is the source of the models soft biasesthe bias towards including certain information in the history representation arises from the recency bias in training recursively defined neural networksthe only trained parameters of the mapping are the parameters of the function which selects a subset of the information from a set of previous history representations and records it in a new history representationthe training process automatically chooses the parameters of based on what information needs to be recordedthe recorded information may be needed to compute the output for the current step or it may need to be passed on to future history representations to compute a future outputhowever the more history representations intervene between the place where the information is input and the place where the information is needed the less likely the training is to learn to record this informationwe can exploit this recency bias in inducing history representations by ensuring that information which is known to be important at a given step in the derivation is input directly to that steps history representation and that as information becomes less relevant it has increasing numbers of history representations to pass through before reaching the steps history representationthe principle we apply when designing the inputs to each history representation is that we want recency in this flow of 
information to match a linguistically appropriate notion of structural localityto achieve this structurallydetermined inductive bias we use simple synchrony networks which are specifically designed for processing structuresassn divides the processing of a structure into a set of subprocesses with one subprocess for each node of the structurefor phrase structure tree derivations we divide a derivation into a set of subderivations by assigning a derivation step to the subderivation for the node top which is on the top of the stack prior to that stepthe ssn network then performs the same computation at each position in each subderivationthe unbounded nature of phrase structure trees does not pose a problem for this approach because increasing the number of nodes only increases the number of times the ssn network needs to perform a computation and not the number of parameters in the computation which need to be trainedfor each position in the subderivation for a node top the ssn computes two realvalued vectors namely and is computed by applying the function to a set of predefined features of the derivation history plus a small set of previous history representations rep top where rep is the most recent previous history representation for a node rep top top is a small set of nodes which are particularly relevant to decisions involving top this set always includes top itself but the remaining nodes in top and the features in need to be chosen by the system designerthese choices determine how information flows from one history representation to another and thus determines the inductive bias of the systemwe have designed top and so that the inductive bias reflects structural localitythus top includes nodes which are structurally local to top these nodes are the leftcorner ancestor of top top s leftcorner child and top s most recent child for rightbranching structures the leftcorner ancestor is the parent conditioning on which has been found to be beneficial as has conditioning on the leftcorner child because these inputs include the history representations of both the leftcorner ancestor and the most recent child a derivation step always has access to the history representation from the previous derivation step and thus any information from the entire previous derivation history could in principle be stored in the history representationthus this model is making no a priori hard independence assumptions just a priori soft biasesas mentioned above top also includes top itself which means that the inputs to always include the history representation for the most recent derivation step assigned to top this input imposes an appropriate bias because the induced history features which are relevant to previous derivation decisions involving top are likely to be relevant to the decision at step as wellas a simple example in figure 1 the prediction of the left corner terminal of the vp node and the decision that the s node is the root of the whole sentence are both dependent on the fact that the node on the top of the stack in each case has the label s the predefined features of the derivation history which are input to for node top at step are chosen to reflect the information which is directly relevant to choosing the next decision in the parser presented here these inputs are the last decision in the derivation the label or tag of the subderivations node top the tagword pair for the most recently predicted terminal and the tagword pair for top s leftcorner terminal inputting the last decision is 
sufficient to provide the ssn with a complete specification of the derivation historythe remaining features were chosen so that the inductive bias would emphasize these pieces of informationonce we have trained the ssn to estimate the parameters of our probability model we use these estimates to search the space of possible derivations to try to find the most probable derivationbecause we do not make a priori independence assumptions searching the space of all possible derivations has exponential complexity so it is important to be able to prune the search space if this computation is to be tractablethe leftcorner ordering for derivations allows very severe pruning without significant loss in accuracy which is crucial to the success of our parser due to the relatively high computational cost of computing probability estimates with a neural network rather than with the simpler methods typically employed in nlpour pruning strategy is designed specifically for leftcorner parsingwe prune the search space in two different ways the first applying fixed beam pruning at certain derivation steps and the second restricting the branching factor at all derivation stepsthe most important pruning occurs after each word has been shifted onto the stackwhen a partial derivation reaches this position it is stopped to see if it is one of the best 100 partial derivations which end in shifting that wordonly a fixed beam of the best 100 derivations are allowed to continue to the next wordexperiments with a variety of postprediction beam widths confirm that very small validation performance gains are achieved with widths larger than 100to search the space of derivations in between two words we do a bestfirst searchthis search is not restricted by a beam width but a limit is placed on the searchs branching factorat each point in a partial derivation which is being pursued by the search only the 10 best alternative decisions are considered for continuing that derivationthis was done because we found that the bestfirst search tended to pursue a large number of alternative labels for a nonterminal before pursuing subsequent derivation steps even though only the most probable labels ended up being used in the best derivationswe found that a branching factor of 10 was large enough that it had virtually no effect on the validation accuracy
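The two-level pruning just described (a fixed beam of 100 partial derivations applied whenever a word is shifted, and a best-first search with a branching factor of 10 in between words) might be organized roughly as in the sketch below. The `extend` callback, the scoring, and the crude stopping bound are placeholders standing in for the network's probability estimates and the parser's real control flow.

```python
# Hedged sketch of the two-level pruning described above.  `extend(deriv)` is a
# placeholder that must return (step_logprob, new_derivation, did_shift) tuples;
# it stands in for the neural-network probability estimates over parser actions.
import heapq

BEAM_WIDTH = 100      # partial derivations kept after each shifted word
BRANCHING = 10        # alternative decisions considered at each derivation step

def advance_one_word(beam, extend):
    """Advance each (logprob, derivation) in `beam` until it shifts the next word."""
    shifted = []                                    # derivations that consumed the word
    frontier = [(-lp, i, d) for i, (lp, d) in enumerate(beam)]
    heapq.heapify(frontier)
    tie = len(frontier)
    while frontier and len(shifted) < 10 * BEAM_WIDTH:   # crude stopping bound
        neg_lp, _, deriv = heapq.heappop(frontier)
        best = sorted(extend(deriv), key=lambda c: c[0], reverse=True)[:BRANCHING]
        for step_lp, new_deriv, did_shift in best:
            lp = -neg_lp + step_lp
            if did_shift:
                shifted.append((lp, new_deriv))     # pause here until the beam cut
            else:
                tie += 1
                heapq.heappush(frontier, (-lp, tie, new_deriv))
    # fixed-width beam: keep only the best derivations ending in this shift
    return sorted(shifted, key=lambda x: x[0], reverse=True)[:BEAM_WIDTH]
```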
we used the penn treebank to perform empirical experiments on this parsing modelto test the effects of varying vocabulary sizes on performance and tractability we trained three different modelsthe simplest model includes no words in the vocabulary relying completely on the information provided by the partofspeech tags of the wordsthe second model uses all tagword pairs which occur at least 200 times in the training setthe remaining words were all treated as instances of the unknownwordthis resulted in a vocabulary size of 512 tagword pairsthe third model thresholds the vocabulary at 20 instances in the training set resulting in 4242 tagword pairs3 we determined appropriate training parameters and network size based on intermediate validation results and our previous experience with networks similar to the models ssntags and ssnfreq 200we trained two or three networks for each of the three vocabulary sizes and chose the best ones based on their validation performancetraining times vary but are long being around 4 days for a ssntags model 6 days for a ssnfreq 200 model and 10 days for a ssnfreq 20 model we then tested the best models for each vocabulary size on the testing set4 standard measures of performance are shown in table 15 the top panel of table 1 lists the results for the nonlexicalized model and the available results for three other models which only use partofspeech tags as inputs another neural network parser an earlier statistical leftcorner parser and a pcfg the ssntags model achieves performance which is much better than the only other broad coverage neural network parser the ssntags model also does better than any other published results on parsing with just partofspeech tags as exemplified by the results for and the bottom panel of table 1 lists the results for the two lexicalized models and five recent statistical parsers on the complete testing set the performance of our lexicalized models is very close to the three best current parsers which all achieve equivalent performancethe performance of the best current parser represents only a 4 reduction in precision error and only a 7 reduction in recall error over the ssnfreq 20 modelthe ssn parser achieves this result using much less lexical knowledge than other approaches which all minimally use the words which occur at least 5 times plus morphological features of the remaining wordsanother difference between the three best parsers and ours is that we parse incrementally using a beam searchthis allows us to trade off parsing accuracy for parsing speed which is a much more important issue than training timerunning times to achieve the above levels of performance on the testing set averaged around 30 seconds per sentence for ssntags 1 minute per sentence for ssnfreq 200 and 2 minutes per sentence for ssnfreq 20 but by reducing the number of alternatives considered in the search for the most probable parse we can greatly increase parsing speed without much loss in accuracywith the ssnfreq 200 model accuracy slightly better than can be achieved at 27 seconds per sentence and accuracy slightly better than can be achieved at 05 seconds per sentence to investigate the role which induced history representations are playing in this parser we trained a number of additional ssns and tested them on the validation set6 (table 2 gives results including fmeasure on the validation set for different versions of the ssnfreq 200 model) the middle panel of table 2 shows the performance when some of the induced history representations are replaced with the label of their associated nodethe first four lines show the performance when this replacement is performed individually for each of the history representations input to the computation of a new history representation namely that for the nodes leftcorner ancestor its most recent child its leftcorner child and the previous parser action at the node itself respectivelythe final line shows the performance when all these replacements are donein the first two models this replacement has the effect of imposing a hard independence assumption in place of the soft biases towards ignoring structurally more distant informationthis is because there is no other series of history representations through which the removed information could passin the second two models this replacement simply removes the bias towards paying attention to more structurally local information without imposing any independence assumptionsin each modified model there is a reduction in performance as compared to the case where all these history representations are used the biggest decrease in performance occurs when the leftcorner ancestor is represented with just its label this implies that more distant topdown
constraints and constraints from the left context are playing a big role in the success of the ssn parser and suggests that parsers which do not include information about this context in their history features will not do wellanother big decrease in performance occurs when the most recent child is represented with just its label this implies that more distant bottomup constraints are also playing a big role probably including some information 6the validation set is used to avoid repeated testing on the standard testing setsentences of length greater than 100 were excluded about lexical headsthere is also a decrease in performance when the leftcorner child is represented with just its label this implies that the first child does tend to carry information which is relevant throughout the subderivation for the node and suggests that this child deserves a special status in a history representationinterestingly a smaller although still substantial degradation occurs when the previous history representation for the same node is replaced with its node labelwe suspect that this is because the same information can be passed via its childrens history representationsfinally not using any of these sources of induced history features results in dramatically worse performance with a 58 increase in fmeasure error over using all threeone bias which is conspicuously absent from our parser design is a bias towards paying particular attention to lexical headsthe concept of lexical head is central to theories of syntax and has often been used in designing handcrafted history features thus it is reasonable to suppose that the incorporation of this bias would improve performanceon the other hand the ssn may have no trouble in discovering the concept of lexical head itself in which case incorporating this bias would have little effectto investigate this issue we trained several ssn parsers with an explicit representation of phrasal headresults are shown in the lower panel of table 2the first model includes a fifth type of parser action head attach which is used to identify the head child of each node in the treealthough incorrectly identifying the head child does not effect the performance for these evaluation measures forcing the parser to learn this identification results in some loss in performance as compared to the ssnfreq 200 modelthis is to be expected since we have made the task harder without changing the inductive bias to exploit the notion of headthe second model uses the identification of the head child to determine the lexical head of the phrase7 after the head child is attached to a node the nodes lexical head is identified and that word is added to the set of features input directly to the nodes subsequent history representationsthis adds an inductive bias towards treating the lexical head as important for posthead parsing decisionsthe results show that this inductive bias does improve performance but not enough to compensate for the degradation caused by having to learn to identify head childrenthe lack of a large improvement suggests that the ssnfreq 200 model already learns the significance of lexical heads but perhaps a different method for incorporating the bias towards con7if a nodes head child is a word then that word is the nodes lexical headif a nodes head child is a nonterminal then the lexical head of the head child is the nodes lexical head ditioning on lexical heads could improve performance a littlethe third model extends the head word model by adding the head child to the set of 
structurally local nodes top this addition does not result in an improvement suggesting that the induced history representations can identify the significance of the head child without the need for additional biasthe degradation appears to be caused by increased problems with overtraining due to the large number of additional weightsmost previous work on statistical parsing has used a historybased probability model with a handcrafted set of features to represent the derivation history ratnaparkhi defines a very general set of features for the histories of a shiftreduce parsing model but the results are not as good as models which use a more linguistically informed set of features for a topdown parsing model in addition to the method proposed in this paper another alternative to choosing a finite set of features is to use kernel methods which can handle unbounded feature setshowever this causes efficiency problemscollins and duffy define a kernel over parse trees and apply it to reranking the output of a parser but the resulting feature space is restricted by the need to compute the kernel efficiently and the results are not as good as collins previous work on reranking using a finite set of features future work could use the induced history representations from our work to define efficiently computable tree kernelsthe only other broad coverage neural network parser also uses a neural network architecture which is specifically designed for processing structureswe believe that their poor performance is due to a network design which does not take into consideration the recency bias discussed in section 4ratnaparkhis parser can also be considered a form of neural network but with only a single layer since it uses a loglinear model to estimate its probabilitiesprevious work on applying ssns to natural language parsing has not been general enough to be applied to the penn treebank so it is not possible to compare results directly to this workthis paper has presented a method for estimating the parameters of a historybased statistical parser which does not require any a priori independence assumptionsa neural network is trained simultaneously to estimate the probabilities of parser actions and to induce a finite representation of the unbounded parse historythe probabilities of parser actions are conditioned on this induced history representation rather than being conditioned on a set of handcrafted history features chosen a prioria beam search is used to search for the most probable parse given the neural networks probability estimateswhen trained and tested on the standard penn treebank datasets the parsers performance is only 06 below the best current parsers for this task despite using a smaller vocabulary and less prior linguistic knowledgethe neural network architecture we use simple synchrony networks not only allows us to avoid imposing hard independence assumptions it also allows us to impose linguistically appropriate soft biases on the learning processssns are specifically designed for processing structures which allows us to design the ssn so that the induced representations of the parse history are biased towards recording structurally local information about the parsewhen we modify these biases so that some structurally local information tends to be ignored performance degradeswhen we introduce independence assumptions by cutting off access to information from more distant parts of the structure performance degrades dramaticallyon the other hand we find that biasing the learning to 
pay more attention to lexical heads does not improve performance
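The induced history representations discussed throughout the text above are, at their core, a recursively applied squashing function over hand-chosen input features plus a few structurally local earlier representations. The toy sketch below illustrates only that recursion; the dimensions, feature encoding, choice of predecessors, and random weights are invented for illustration and are not the paper's actual architecture.

```python
# Toy sketch of recursively induced history representations: each derivation
# step's representation is a squashed linear function of hand-picked input
# features plus the representations of a few structurally local earlier steps.
# Dimensions, features, and random weights are illustrative assumptions only.
import math, random

D_REP, D_FEAT = 8, 6
random.seed(0)
W = [[random.uniform(-0.1, 0.1) for _ in range(D_FEAT + 2 * D_REP)]
     for _ in range(D_REP)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def history_rep(features, local_reps):
    """features: D_FEAT floats; local_reps: up to two earlier D_REP vectors."""
    # pad with zero vectors if fewer than two structurally local predecessors
    local = (local_reps + [[0.0] * D_REP, [0.0] * D_REP])[:2]
    x = features + local[0] + local[1]
    return [sigmoid(sum(w * v for w, v in zip(row, x))) for row in W]

# unroll over a toy derivation: each step sees its own features plus the
# representation(s) of structurally local earlier steps
reps = []
for step_features in ([1, 0, 0, 1, 0, 0], [0, 1, 0, 0, 1, 0], [0, 0, 1, 0, 0, 1]):
    prev = reps[-1:] if reps else []
    reps.append(history_rep([float(f) for f in step_features], prev))
print(len(reps), len(reps[0]))
```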
N03-1014
inducing history representations for broad coverage statistical parsingwe present a neural network method for inducing representations of parse histories and using these history representations to estimate the probabilities needed by a statistical leftcorner parserthe resulting statistical parser achieves performance on the penn treebank which is only 0.6% below the best current parser for this task despite using a smaller vocabulary size and less prior linguistic knowledgecrucial to this success is the use of structurally determined soft biases in inducing the representation of the parse history and no use of hard independence assumptionsof the previous work on using neural networks for parsing natural language the most empirically successful has been our work using simple synchrony networkswe test the effect of larger input vocabulary on ssn performance by changing the frequency cutoff that selects the input tagword pairs
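The frequency-cutoff vocabularies mentioned above (all tag-word pairs above a count threshold, with the rest backed off to a tag-unknown-word pair) are straightforward to build. The sketch below is a generic illustration; the cutoff value, the unknown token, and the toy data are placeholders rather than the paper's exact preprocessing.

```python
# Generic sketch of a frequency-cutoff vocabulary over (tag, word) pairs, with
# low-frequency pairs backed off to a (tag, <unk>) pair.  The cutoff and the
# <unk> token are placeholders, not the paper's exact preprocessing.
from collections import Counter

def build_vocab(tagged_sentences, cutoff=200):
    counts = Counter(pair for sent in tagged_sentences for pair in sent)
    return {pair for pair, c in counts.items() if c >= cutoff}

def encode(tagged_sentence, vocab):
    return [pair if pair in vocab else (pair[0], "<unk>") for pair in tagged_sentence]

# toy usage with made-up data
train = [[("DT", "the"), ("NN", "cat")], [("DT", "the"), ("NN", "dog")]] * 150
vocab = build_vocab(train, cutoff=200)
print(encode([("DT", "the"), ("NN", "zebra")], vocab))
# [('DT', 'the'), ('NN', '<unk>')]
```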