Dataset fields: Text (string, 45–130k characters), Id (string, 8 characters), Summary (string, 55–2.67k characters).
The CoNLL-2010 Shared Task: Learning to Detect Hedges and their Scope in Natural Language Text. The CoNLL-2010 Shared Task was dedicated to the detection of uncertainty cues and their linguistic scope in natural language texts. The motivation behind this task was that distinguishing factual and uncertain information in texts is of essential importance in information extraction. This paper provides a general overview of the shared task, including the annotation protocols of the training and evaluation datasets, the exact task definitions, the evaluation metrics employed and the overall results. The paper concludes with an analysis of the prominent approaches and an overview of the systems submitted to the shared task. Every year since 1999, the Conference on Computational Natural Language Learning provides a competitive shared task for the computational linguistics community. After a five-year period of multi-language semantic role labeling and syntactic dependency parsing tasks, a new task was introduced in 2010, namely the detection of uncertainty and its linguistic scope in natural language sentences. In natural language processing, and in particular in information extraction, many applications seek to extract factual information from text. In order to distinguish facts from unreliable or uncertain information, linguistic devices such as hedges have to be identified, and applications should handle detected speculative parts in a different manner. A typical example is protein-protein interaction extraction from biological texts, where the aim is to mine text evidence for biological entities that are in a particular relation with each other. Here, while an uncertain relation might be of some interest for an end-user as well, such information must not be confused with factual textual evidence. Uncertainty detection has two levels. Automatic hedge detectors might attempt to identify sentences which contain uncertain information and handle whole sentences in a different manner, or they might attempt to recognize in-sentence spans which are speculative. In-sentence uncertainty detection is a more complicated task compared to the sentence-level one, but it has benefits for NLP applications, as there may be spans containing useful factual information in a sentence that otherwise contains uncertain parts. For example, in the following sentence, the subordinated clause starting with "although" contains factual information, while uncertain information is included in the main clause and the embedded question: "Although IL-1 has been reported to contribute to Th17 differentiation in mouse and man, it remains to be determined whether therapeutic targeting of IL-1 will substantially affect IL-17 in RA." Both tasks were addressed in the CoNLL-2010 Shared Task, in order to provide uniform, manually annotated benchmark datasets for both and to compare their difficulties and state-of-the-art solutions for them. The uncertainty detection problem consists of two stages. First, keywords (cues) indicating uncertainty should be recognized; then either a sentence-level decision is made or the linguistic scope of the cue words has to be identified. The latter task falls within the scope of semantic analysis of sentences exploiting syntactic patterns, as hedge spans can usually be determined on the basis of syntactic patterns dependent on the keyword. The term hedging was originally introduced by Lakoff; however, hedge detection has received considerable interest just recently in the NLP community. Light et al. used a hand-crafted list of hedge cues to identify speculative sentences in MEDLINE abstracts, and 
several biomedical nlp applications incorporate rules for identifying the certainty of extracted information the most recent approaches to uncertainty detection exploit machine learning models that utilize manually labeled corporamedlock and briscoe used single words as input features in order to classify sentences from biological articles as speculative or nonspeculative based on semiautomatically collected training examplesszarvas extended the methodology of medlock and briscoe to use ngram features and a semisupervised selection of the keyword featureskilicoglu and bergler proposed a linguistically motivated approach based on syntactic information to semiautomatically refine a list of hedge cuesganter and strube proposed an approach for the automatic detection of sentences containing uncertainty based on wikipedia weasel tags and syntactic patternsthe bioscope corpus is manually annotated with negation and speculation cues and their linguistic scopeit consists of clinical freetexts biological texts from full papers and scientific abstractsusing bioscope for training and evaluation morante and daelemans developed a scope detector following a supervised sequence labeling approach while ozgyou are and radev developed a rulebased system that exploits syntactic patternsseveral related works have also been published within the framework of the bionlp09 shared task on event extraction where a separate subtask was dedicated to predicting whether the recognized biological events are under negation or speculation based on the genia event corpus annotations the shared task addressed the detection of uncertainty in two domainsas uncertainty detection is extremely important for biomedical information extraction and most existing approaches have targeted such applications participants were asked to develop systems for hedge detection in biological scientific articlesuncertainty detection is also important eg in encyclopedias where the goal is to collect reliable world knowledge about realworld concepts and topicsfor example wikipedia explicitly declares that statements reflecting author opinions or those not backed up by facts should be avoided thus the communityedited encyclopedia wikipedia became one of the subjects of the shared task as wellin the biomedical domain sentences were manually annotated for both hedge cues and their linguistic scopehedging is typically expressed by using specific linguistic devices that modify the meaning or reflect the authors attitude towards the content of the texttypical hedge cues fall into the following categories however there are some cases where a hedge is expressed via a phrase rather than a single wordcomplex keywords are phrases that express uncertainty together but not on their own an instance of a complex keyword can be seen in the following sentence mild bladder wall thickening raises the question of cystitisthe expression raises the question of may be substituted by suggests and neither the verb raises nor the noun question convey speculative meaning on their ownhowever the whole phrase is speculative therefore it is marked as a hedge cueduring the annotation process a minmax strategy for the marking of keywords and their scope was followedon the one hand when marking the keywords the minimal unit that expresses hedging and determines the actual strength of hedging was marked as a keywordon the other hand when marking the scopes of speculative keywords the scope was extended to the largest syntactic unit possiblethat is all constituents that fell within 
the uncertain interpretation were included in the scopeour motivation here was that in this way if we simply disregard the marked text span the rest of the sentence can usually be used for extracting factual information for instance in the example above we can be sure that the symptom mild bladder wall thickening is exhibited by the patient but a diagnosis of cystitis would be questionablethe scope of a speculative element can be determined on the basis of syntaxthe scopes of the bioscope corpus are regarded as consecutive text spans and their annotation was based on constituency grammarthe scope of verbs auxiliaries adjectives and adverbs usually starts right with the keywordin the case of verbal elements ie verbs and auxiliaries it ends at the end of the clause or sentence thus all complements and adjuncts are includedthe scope of attributive adjectives generally extends to the following noun phrase whereas the scope of predicative adjectives includes the whole sentencesentential adverbs have a scope over the entire sentence while the scope of other adverbs usually ends at the end of the clause or sentenceconjunctions generally have a scope over the syntactic unit whose members they coordinatesome linguistic phenomena can change scope boundaries in the sentence thus they were given special attention during the annotation phasethe chief editors of wikipedia have drawn the attention of the public to uncertainty issues they call weasel1a word is considered to be a weasel word if it creates an impression that something important has been said but what is really communicated is vague misleading evasive or ambiguousweasel words do not give a neutral account of facts rather they offer an opinion without any backup or sourcethe following sentence does not specify the source of information it is just the vague term some people that refers to the holder of this opinion some people claim that this results in a better taste than that of other diet colas statements with weasel words usually evoke questions such as who says that whose opinion is this and how many people think sotypical instances of weasels can be grouped in the following way elements referring to uncertainty probable likely possible unsure often possibly allegedly apparently perhaps etc elements denoting generalization widely traditionally generally broadlyaccepted widespread etc qualifiers and superlatives global superior excellent immensely legendary best largest most prominent etc elements expressing obviousness clearly obviously arguably etchowever the use of the above words or grammatical devices does not necessarily entail their being a weasel cue since their use may be justifiable in their contextsas the main application goal of weasel detection is to highlight articles which should be improved we decided to annotate only weasel cues in wikipedia articles but we did not mark their scopesduring the manual annotation process the following cue marking principles were employedcomplex verb phrases were annotated as weasel cues since in some cases both the passive construction and the verb itself are responsible for the weaselin passive forms with dummy subjects and there is there are constructions the weasel cue included the grammatical subject as wellas for numerically vague expressions the noun phrase containing a quantifier was marked as a weasel cueif there was no quantifier the noun was annotated as a weasel cuecomparatives and superlatives were annotated together with their articleanaphoric pronouns referring to a weasel word 
were also annotated as weasel cues. Two uncertainty detection tasks in two domains, with three types of submissions, were given to the participants of the CoNLL-2010 Shared Task. The aim of Task1 was to develop automatic procedures for identifying sentences in texts which contain unreliable or uncertain information. In particular, this task is a binary classification problem, i.e. factual and uncertain sentences have to be distinguished. Training and evaluation data were provided in which the annotation of weasel/hedge cues was carried out on the phrase level; sentences containing at least one cue were considered as uncertain, while sentences with no cues were considered as factual. The participating systems had to submit a binary classification of the test sentences, while marking cues in the submissions was voluntary. For Task2, in-sentence scope resolvers had to be developed. The training and evaluation data consisted of biological scientific texts, in which instances of speculative spans, that is, keywords and their linguistic scope, were annotated manually. Submissions to Task2 were expected to automatically annotate the cue phrases and the left and right boundaries of their scopes. The evaluation for Task1 was carried out at the sentence level, i.e. the cue annotations in the sentence were not taken into account. The Fβ=1 measure of the uncertain class was employed as the chief evaluation metric. The Task2 systems were expected to mark cue and corresponding scope begin/end tags linked together by using some unique IDs. A scope-level Fβ=1 measure was used as the chief evaluation metric, where true positives were scopes which exactly matched the gold standard cue phrases and gold standard scope boundaries assigned to the cue word. That is, correct scope boundaries with incorrect cue annotation and correct cue words with bad scope boundaries were both treated as errors. This scope-level metric is very strict. For instance, the requirement of the precise match of the cue phrase is questionable, as from an application point of view the goal is to find uncertain text spans, and the evidence for this is not so important. However, the annotation of cues in datasets is essential for training scope detectors, since locating the cues usually precedes the identification of their scope. Hence, we decided to incorporate cue matches into the evaluation metric. Another questionable issue is the strict boundary matching requirement. For example, including or excluding punctuation, citations or some bracketed expressions from a scope is not crucial for an otherwise accurate scope detector. On the other hand, the list of such ignorable phenomena is arguable, especially across domains. Thus we considered the strict boundary matching to be a straightforward and unambiguous evaluation criterion. Minor issues like those mentioned above could be handled by simple post-processing rules. In conclusion, we think that the uncertainty detection community may find more flexible evaluation criteria in the future, but the strict scope-level metric is definitely a good starting point for evaluation. Participants were invited to submit results in different configurations, where systems were allowed to exploit different kinds of annotated resources. The three possible submission categories were closed, cross-domain and open; the motivation behind the cross-domain and the open challenges was that in this way we could assess whether adding extra information to the systems can contribute to the overall performance. Training and evaluation corpora were annotated manually for hedge/weasel cues and their scope by two independent linguist 
annotatorsany differences between the two annotations were later resolved by the chief annotator who was also responsible for creating the annotation guidelines and training the two annotatorsthe datasets are freely available2 for further benchmark experiments at httpwwwinfyouszegedhu rgaiconll2010stsince uncertainty cues play an important role in detecting sentences containing uncertainty they are tagged in the task1 datasets as well to enhance training and evaluation of systemsthe biological training dataset consisted of the biological part of the bioscope corpus hence it included abstracts from the genia corpus 5 full articles from the functional genomics literature and 4 articles from the open access bmc bioinformatics websitethe automatic segmentation of the documents was corrected manually and the sentences were annotated manually for hedge cues and their scopesthe evaluation dataset was based on 15 biomedical articles downloaded from the publicly available pubmedcentral database including 5 random articles taken from the bmc bioinformatics journal in october 2009 5 random articles to which the drosophila mesh term was assigned and 5 random articles having the mesh terms human blood cells and transcription factor these latter ten articles were also published in 2009the aim of this article selection procedure was to have a theme that was close to the training corpusthe evaluation set contained 5003 sentences out of which 790 were uncertainthese texts were manually annotated for hedge cues and their scopeto annotate the training and the evaluation datasets the same annotation principles were appliedfor both task1 and task2 the same dataset was provided the difference being that for task1 only hedge cues and sentencelevel uncertainty were given however for task2 hedge cues and their scope were marked in the text2186 paragraphs collected from wikipedia archives were also offered as task1 training data the evaluation dataset contained 2346 wikipedia paragraphs with 9634 sentences out of which 2234 were uncertainfor the selection of the wikipedia paragraphs used to construct the training and evaluation datasets we exploited the weasel tags added by the editors of the encyclopedia each paragraph containing weasel tags was extracted from the history dump of english wikipediafirst 438 randomly selected paragraphs were manually annotated from this pool then the most frequent cue phrases were collectedlater on two other sets of wikipedia paragraphs were gathered on the basis of whether they contained such cue phrases or notthe aim of this sampling procedure was to provide large enough training and evaluation samples containing weasel words and also occurrences of typical weasel words in nonweasel contextseach sentence was annotated manually for weasel cuessentences were treated as uncertain if they contained at least one weasel cue ie the scope of weasel words was the entire sentence unannotated but preprocessed full biological articles and 1 million paragraphs from wikipedia were offered to the participants as wellthese datasets did not contain any manual annotation for uncertainty but their usage permitted data sampling from a large pool of indomain texts without timewasting preprocessing tasks both training and evaluation data were released in a custom xml formatfor each task a separate xml file was made available containing the whole document set for the given taskevaluation datasets were available in the same format as training data without any sentencelevel certainty cue or scope 
annotationsthe xml format enabled us to provide more detailed information about the documents such as segment boundaries and types and it is the straightforward format to represent nested scopesnested scopes have overlapping text spans which may contain cues for multiple scopes the xml format utilizes idreferences to determine the scope of a given cuenested constructions are rather complicated to represent in the standard iob format moreover we did not want to enforce a uniform tokenizationto support the processing of the data files reader and writer software modules were developed and offered to the participants for the ucompare framework ucompare provides a universal interface and several text mining and natural language processing tools for general and biological domainsin this way participants could configure and execute a flexible chain of analyzing tools even with a graphical uiparticipants uploaded their results through the shared task website and the official evaluation was performed centrallyafter the evaluation period the results were published for the participants on the weba total of 23 teams participated in the shared task22 16 and 13 teams submitted output for task1b task1w and task2 respectivelytables 1 2 and 3 contain the results of the submitted systems for task1 and task2the last name of the first author of the system description paper is used here as a system name3the last column contains the type of submissionthe system of kilicoglu and bergler is the only open submissionthey adapted their system introduced in kilicoglu and bergler to the datasets of the shared taskregarding cross submissions zhao et al and ji et al managed to achieve a noticeable improvement by exploiting crossdomain datazhao et al extended the biological cue word dictionary of their system using it as a feature for classification by the frequent cues of the wikipedia dataset while ji et al used the union of the two datasets for training higher precision than recallthere may be two reasons for thisthe systems may have applied only reliable patterns or patterns occurring in the evaluation set may be imperfectly covered by the training datasetsthe most intense participation was on task1bhere participants applied various precisionrecall tradeoff strategiesfor instance tang et al achieved a balanced precisionrecall configuration while li et al achieved third place thanks to their superior precisiontables 4 and 5 show the cuelevel performances ie the fmeasure of cue phrase matching where true positives were strict matchesnote that it was optional to submit cue annotations for task1 it is interesting to see that morante et al who obtained the best results on task2 achieved a mediumranked fmeasure on the cuelevel while on the scopelevel the difference is 13 in the reverse direction which indicates that the real strength of the system of morante et al is the accurate detection of scope boundariesthe approaches to task1 fall into two major categoriesthere were six systems which handled the task as a classical sentence classification problem and employed essentially a bagofwords feature representation the remaining teams focused on the cue phrases and sought to classify every token if it was a part of a cue phrase then a sentence was predicted as uncertain if it contained at least one recognized cue phrasefive systems followed a pure token classification approach for cue detection while others used sequential labeling techniques to identify cue phrases in sentences the feature set employed in task1 systems 
typically consisted of the wordform its lemma or stem pos and chunk codes and about the half of the participants constructed features from the dependency andor constituent parse tree of the sentences as well it is interesting to see that the top ranked systems of task1b followed a sequence labeling approach while the best systems on task1w applied a bagofwords sentence classificationthis may be due to the fact that biological sentences have relatively simple patternsthus the context of the cue words can be utilized while wikipedia weasels have a diverse natureanother observation is that the top systems in both task1b and task1w are the ones which did not derive features from syntactic parsingeach task2 system was built upon a task1 system iethey attempted to recognize the scopes for the predicted cue phrases have argued that the objective functions of task1 and task2 cue detection problems are different because of sentences containing multiple hedge spansmost systems regarded multiple cues in a sentence to be independent from each other and formed different classification instances from themthere were three systems which incorporated information about other hedge cues of the sentence into the feature space and zhang et al constructed a cascade system which utilized directly the predicted scopes during predicting other scopes in the same sentencethe identification of the scope for a certain cue was typically carried out by classifying each token in the sentencetask2 systems differ in the number of class labels used as target and in the machine learning approaches appliedmost systems following morante and daelemans used three class labels irst ast and nonetwo participants used four classes by adding nside while three systems followed a binary classification approach the systems typically included a postprocessing procedure to force scopes to be continuous and to include the cue phrase in questionthe machine learning methods applied can be again categorized into sequence labeling sification handcrafted rules machine learners entropy guided transformation learning averaged perceptron knearest neighbour the way of identifying scopes predicting firstlast tokens firstinsidelast tokens just inside tokens multiple hedges the system applied a mechanism for handling multiple hedges inside a sentence and token classification approaches the feature sets used here are the same as for task1 extended by several features describing the relationship between the cue phrase and the token in question mostly by describing the dependency path between themthe conll2010 shared task introduced the novel task of uncertainty detectionthe challenge consisted of a sentence identification task on uncertainty and an insentence hedge scope detection task in the latter task the goal of automatic systems was to recognize speculative text spans inside sentencesthe relatively high number of participants indicates that the problem is rather interesting for the natural language processing communitywe think that this is due to the practical importance of the task for applications and because it addresses several open research questionsalthough several approaches were introduced by the participants of the shared task and we believe that the ideas described in this proceedings can serve as an excellent starting point for the development of an uncertainty detector there is a lot of room for improving such systemsthe manually annotated datasets and software tools developed for the shared task may act as benchmarks for these future 
experiments. The authors would like to thank Joakim Nivre and Lluís Màrquez for their useful suggestions, comments and help during the organisation of the shared task. This work was supported in part by the National Office for Research and Technology of the Hungarian government within the framework of the projects TEXTREND, BELAMI and MASZEKER.
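The strict scope-level Fβ=1 metric described above can be made concrete with a small sketch. This is an illustrative implementation under an assumed data structure (a hedge annotation as a (cue_start, cue_end, scope_start, scope_end) tuple of character offsets), not the shared task's official evaluation script: a prediction counts as a true positive only if both its cue span and its scope boundaries exactly match a gold annotation.

```python
from typing import List, Tuple

# A hedge annotation is assumed here to be a tuple of character offsets:
# (cue_start, cue_end, scope_start, scope_end). This is a hypothetical
# representation for illustration, not the shared task's XML format.
Annotation = Tuple[int, int, int, int]

def scope_level_f1(gold: List[Annotation], predicted: List[Annotation]) -> float:
    """Strict scope-level F1: a prediction is correct only if the cue span
    and both scope boundaries exactly match a gold annotation."""
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: the second prediction has the right cue but a scope that ends one
# character too early, so it counts as an error on both precision and recall.
gold = [(10, 17, 10, 62), (80, 88, 80, 130)]
pred = [(10, 17, 10, 62), (80, 88, 80, 129)]
print(scope_level_f1(gold, pred))  # 0.5
```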
W10-3001
The CoNLL-2010 Shared Task: Learning to Detect Hedges and their Scope in Natural Language Text. The CoNLL-2010 Shared Task was dedicated to the detection of uncertainty cues and their linguistic scope in natural language texts. The motivation behind this task was that distinguishing factual and uncertain information in texts is of essential importance in information extraction. The task aimed at detecting uncertainty cues in texts and, focusing on these phrases, at determining whether sentences contain uncertain information. A further goal of the shared task was to develop linguistic scope detectors as well.
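The Task1 setup summarized above, in which a sentence is labeled uncertain if it contains at least one hedge cue, can be sketched as a minimal baseline. The cue list below is a small illustrative sample, not the annotated lexicon from the shared task data, and a competitive system would learn cues and their contexts from the training corpus rather than hard-code them.

```python
import re

# A few illustrative hedge cues; real systems induced such lexicons
# (and features over their contexts) from the annotated training data.
HEDGE_CUES = {"may", "might", "suggests", "possible", "likely",
              "appears", "raises the question of"}

def is_uncertain(sentence: str) -> bool:
    """Sentence-level Task1 decision: uncertain iff at least one cue occurs."""
    text = sentence.lower()
    return any(re.search(r"\b" + re.escape(cue) + r"\b", text)
               for cue in HEDGE_CUES)

print(is_uncertain("Mild bladder wall thickening raises the question of cystitis."))  # True
print(is_uncertain("IL-1 contributes to Th17 differentiation."))                      # False
```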
sentiment analysis of twitter data we examine sentiment analysis on twitter data the contributions of this paper are we introduce posspecific prior polarity features we explore the use of a tree kernel to obviate the need for tedious feature engineering the new features and the tree kernel perform approximately at the same level both outperforming the stateoftheart baseline microblogging websites have evolved to become a source of varied kind of informationthis is due to nature of microblogs on which people post real time messages about their opinions on a variety of topics discuss current issues complain and express positive sentiment for products they use in daily lifein fact companies manufacturing such products have started to poll these microblogs to get a sense of general sentiment for their productmany times these companies study user reactions and reply to users on microblogsone challenge is to build technology to detect and summarize an overall sentimentin this paper we look at one such popular microblog called twitter and build models for classifying tweets into positive negative and neutral sentimentwe build models for two classification tasks a binary task of classifying sentiment into positive and negative classes and a 3way task of classifying sentiment into positive negative and neutral classeswe experiment with three types of models unigram model a feature based model and a tree kernel based modelfor the feature based model we use some of the features proposed in past literature and propose new featuresfor the tree kernel based model we design a new tree representation for tweetswe use a unigram model previously shown to work well for sentiment analysis for twitter data as our baselineour experiments show that a unigram model is indeed a hard baseline achieving over 20 over the chance baseline for both classification tasksour feature based model that uses only 100 features achieves similar accuracy as the unigram model that uses over 10000 featuresour tree kernel based model outperforms both these models by a significant marginwe also experiment with a combination of models combining unigrams with our features and combining our features with the tree kernelboth these combinations outperform the unigram baseline by over 4 for both classification tasksin this paper we present extensive feature analysis of the 100 features we proposeour experiments show that features that have to do with twitterspecific features add value to the classifier but only marginallyfeatures that combine prior polarity of words with their partsofspeech tags are most important for both the classification tasksthus we see that standard natural language processing tools are useful even in a genre which is quite different from the genre on which they were trained furthermore we also show that the tree kernel model performs roughly as well as the best feature based models even though it does not require detailed feature engineeringwe use manually annotated twitter data for our experimentsone advantage of this data over previously used datasets is that the tweets are collected in a streaming fashion and therefore represent a true sample of actual tweets in terms of language use and contentour new data set is available to other researchersin this paper we also introduce two resources which are available 1 a hand annotated dictionary for emoticons that maps emoticons to their polarity and 2 an acronym dictionary collected from the web with english translations of over 5000 frequently used acronymsthe rest of 
the paper is organized as followsin section 2 we discuss classification tasks like sentiment analysis on microblog datain section 3 we give details about the datain section 4 we discuss our preprocessing technique and additional resourcesin section 5 we present our prior polarity scoring schemein section 6 we present the design of our tree kernelin section 7 we give details of our feature based approachin section 8 we present our experiments and discuss the resultswe conclude and give future directions of research in section 9sentiment analysis has been handled as a natural language processing task at many levels of granularitystarting from being a document level classification task it has been handled at the sentence level and more recently at the phrase level microblog data like twitter on which users post real time reactions to and opinions about everything poses newer and different challengessome of the early and recent results on sentiment analysis of twitter data are by go et al and pak and paroubek go et al use distant learning to acquire sentiment datathey use tweets ending in positive emoticons like as positive and negative emoticons like and they report svm outperforms other classifiersin terms of feature space they try a unigram bigram model in conjunction with partsofspeech featuresthey note that the unigram model outperforms all other modelsspecifically bigrams and pos features do not helppak and paroubek collect data following a similar distant learning paradigmthey perform a different classification task though subjective versus objectivefor subjective data they collect the tweets ending with emoticons in the same manner as go et al for objective data they crawl twitter accounts of popular newspapers like new york times washington posts etcthey report that pos and bigrams both help both these approaches however are primarily based on ngram modelsmoreover the data they use for training and testing is collected by search queries and is therefore biasedin contrast we present features that achieve a significant gain over a unigram baselinein addition we explore a different method of data representation and report significant improvement over the unigram modelsanother contribution of this paper is that we report results on manually annotated data that does not suffer from any known biasesour data is a random sample of streaming tweets unlike data collected by using specific queriesthe size of our handlabeled data allows us to perform crossvalidation experiments and check for the variance in performance of the classifier across foldsanother significant effort for sentiment classification on twitter data is by barbosa and feng they use polarity predictions from three websites as noisy labels to train a model and use 1000 manually labeled tweets for tuning and another 1000 manually labeled tweets for testingthey however do not mention how they collect their test datathey propose the use of syntax features of tweets like retweet hashtags link punctuation and exclamation marks in conjunction with features like prior polarity of words and pos of wordswe extend their approach by using real valued prior polarity and by combining prior polarity with posour results show that the features that enhance the performance of our classifiers the most are features that combine prior polarity of words with their parts of speechthe tweet syntax features help but only marginallygamon perform sentiment analysis on feeadback data from global support services surveyone aim of their paper is to analyze 
the role of linguistic features like pos tagsthey perform extensive feature analysis and feature selection and demonstrate that abstract linguistic analysis features contributes to the classifier accuracyin this paper we perform extensive feature analysis and show that the use of only 100 abstract linguistic features performs as well as a hard unigram baselinetwitter is a social networking and microblogging service that allows users to post real time messages called tweetstweets are short messages restricted to 140 characters in lengthdue to the nature of this microblogging service people use acronyms make spelling mistakes use emoticons and other characters that express special meaningsfollowing is a brief terminology associated with tweetsemoticons these are facial expressions pictorially represented using punctuation and letters they express the users moodtarget users of twitter use the symbol to refer to other users on the microblogreferring to other users in this manner automatically alerts themhashtags users usually use hashtags to mark topicsthis is primarily done to increase the visibility of their tweetswe acquire 11875 manually annotated twitter data from a commercial sourcethey have made part of their data publicly availablefor information on how to obtain the data see acknowledgments section at the end of the paperthey collected the data by archiving the realtime streamno language location or any other kind of restriction was made during the streaming processin fact their collection consists of tweets in foreign languagesthey use google translate to convert it into english before the annotation processeach tweet is labeled by a human annotator as positive negative neutral or junkthe junk label means that the tweet cannot be understood by a human annotatora manual analysis of a random sample of tweets labeled as junk suggested that many of these tweets were those that were not translated well using google translatewe eliminate the tweets with junk label for experimentsthis leaves us with an unbalanced sample of 8753 tweetswe use stratified sampling to get a balanced dataset of 5127 tweets in this paper we introduce two new resources for preprocessing twitter data 1 an emoticon dictionary and 2 an acronym dictionarywe prepare the emoticon dictionary by labeling 170 emoticons listed on wikipedia1 with their emotional statefor example is labeled as positive whereas replace all the emoticons with a their sentiment polarity by looking up the emoticon dictionary b replace all urls with a tag you c replace targets with tag t d replace all negations by tag not and e replace a sequence of repeated characters by three characters for example convert coooooooool to cooolwe do not replace the sequence by only two characters since we want to differentiate between the regular usage and emphasized usage of the wordacronym english expansion gr8 gr8t great lol laughing out loud rotf rolling on the floor bff best friend forever we present some preliminary statistics about the data in table 3we use the stanford tokenizer to tokenize the tweetswe use a stop word dictionary3 to identify stop wordsall the other words which are found in wordnet are counted as english wordswe use the standard tagset defined by the penn treebank for identifying punctuationwe record the occurrence of three standard twitter tags emoticons urls and targetsthe remaining tokens are either non english words or other symbolsin table 3 we see that 383 of the tokens are stop words 301 of the tokens are found in wordnet and 12 
tokens are negation words. 11.8% of all the tokens are punctuation marks, excluding exclamation marks, which make up for 2.8% of all tokens. In total, 84.1% of all tokens are tokens that we expect to see in a typical English language text. There are 4.2% tags that are specific to Twitter, which include emoticons, target, hashtags and RT. The remaining 11.7% tokens are either words that cannot be found in WordNet or special symbols which do not fall in the category of Twitter tags. A number of our features are based on prior polarity of words. For obtaining the prior polarity of words, we take motivation from work by Agarwal et al. We use the Dictionary of Affect in Language (DAL) and extend it using WordNet. This dictionary of about 8000 English language words assigns every word a pleasantness score between 1 and 3. We first normalize the scores by dividing each score by the scale. We consider words with polarity less than 0.5 as negative, higher than 0.8 as positive, and the rest as neutral. If a word is not directly found in the dictionary, we retrieve all synonyms from WordNet. We then look for each of the synonyms in DAL. If any synonym is found in DAL, we assign the original word the same pleasantness score as its synonym. If none of the synonyms is present in DAL, the word is not associated with any prior polarity. For the given data, we directly found prior polarity of 81.1% of the words. We find polarity of another 7.8% of the words by using WordNet, so we find prior polarity of about 88.9% of English language words. We design a tree representation of tweets to combine many categories of features in one succinct, convenient representation. For calculating the similarity between two trees, we use a Partial Tree (PT) kernel, first proposed by Moschitti. A PT kernel calculates the similarity between two trees by comparing all possible subtrees. This tree kernel is an instance of a general class of convolution kernels. Convolution kernels, first introduced by Haussler, can be used to compare abstract objects, like strings, instead of feature vectors. This is because these kernels involve a recursive calculation over the parts of an abstract object. This calculation is made computationally efficient by using dynamic programming techniques. By considering all possible combinations of fragments, tree kernels capture any possible correlation between features and categories of features. Figure 1 shows an example of the tree structure we design. This tree is for a synthesized tweet: "Fernando this is not a great day for playing the harp". We use the following procedure to convert a tweet into a tree representation: initialize the main tree to be ROOT, then tokenize each tweet and, for each token: (a) if the token is a target, emoticon, exclamation mark, other punctuation mark, or a negation word, add a leaf node to the ROOT with the corresponding tag. For example, in the tree in Figure 1 we add tag T for Fernando, add tag NOT for the negation token, add tag EXC for the exclamation mark at the end of the sentence, and add P for the emoticon representing positive mood. (b) If the token is a stop word, we simply add the subtree to the ROOT. For instance, we add a subtree corresponding to each of the stop words: this, is and for. (c) If the token is an English language word, we map the word to its part-of-speech tag, calculate the prior polarity of the word using the procedure described in Section 5, and add the subtree to the ROOT. For example, we add the subtree for the word great (EW refers to English word). (d) For any other token, we add a subtree to the ROOT (NE refers to non-English). The PT tree kernel creates all possible subtrees and compares them to each 
otherthese subtrees include subtrees in which nonadjacent branches become adjacent by excising other branches though order is preservedin figure 1 we show some of the tree fragments that the pt kernel will attempt to compare with tree fragments from other treesfor example given the tree the pt kernel will use and this means that the pt tree kernel attempts to use full information and also abstracts away from specific information in this manner it is not necessary to create by hand features at all levels of abstractionwe propose a set of features listed in table 4 for our experimentsthese are a total of 50 type of featureswe calculate these features for the whole tweet and for the last onethird of the tweetin total we get 100 additional featureswe refer to these features as sentifeatures throughout the paperour features can be divided into three broad categories ones that are primarily counts of various features and therefore the value of the feature is a natural number e n second features whose value is a real number e r these are primarily features that capture the score retrieved from dalthirdly features whose values are boolean e bthese are bag of words presence of exclamation marks and capitalized texteach of these broad categories is divided into two subcategories polar features and nonpolar featureswe refer to a feature as polar if we calculate its prior polarity either by looking it up in dal or in the emoticon dictionaryall other features which are not associated with any prior polarity fall in the nonpolar categoryeach of polar and nonpolar features is further subdivided into two categories pos and otherpos refers to features that capture statistics about partsofspeech of words and other refers to all other types of featuresin reference to table 4 row f1 belongs to the category polar pos and refers to the count of number of positive and negative partsofspeech in a tweet rows f2 f3 f4 belongs to the category polar other and refers to count of number of negation words count of words that have positive and negative prior polarity count of emoticons per polarity type count of hashtags capitalized words and words with exclamation marks associated with words that have prior polarity row f5 belongs to the category nonpolar pos and refers to counts of different partsofspeech tags rows f6 f7 belong to the category nonpolar other and refer to count of number of slangs latin alphabets and other words without polarityit also relates to special terms such as the number of hashtags urls targets and newlinesrow f8 belongs to the category polar pos and captures the summation of prior polarity scores of words with pos of jj rb vb and nnsimilarly row f9 belongs to the category polar other and calculates the summation of prior polarity scores of all words row f10 refers to the category nonpolar other and calculates the percentage of tweet that is capitalizedfinally row f11 belongs to the category nonpolar other and refers to presence of exclamation and presence of capitalized words as featuresin this section we present experiments and results for two classification tasks 1 positive versus negative and 2 positive versus negative versus neutralfor each of the classification tasks we present three models as well as results for two combinations of these models for the unigram plus sentifeatures model we present feature analysis to gain insight about what kinds of features are adding most value to the modelwe also present learning curves for each of the models and compare learning abilities of models 
when provided limited dataexperimentalsetup for all our experiments we use support vector machines and report averaged 5fold crossvalidation test resultswe tune the c parameter for svm using an embedded 5fold crossvalidation on the training data of each fold ie for each fold we first run 5fold crossvalidation only on the training data of that fold for different values of c we pick the setting that yields the best crossvalidation error and use that c for determining test error for that foldas usual the reported accuracies is the average over the five foldsthis is a binary classification task with two classes of sentiment polarity positive and negativewe use a balanced dataset of 1709 instances for each class and therefore the chance baseline is 50we use a unigram model as our baselineresearchers report stateoftheart performance for sentiment analysis on twitter data using a unigram model table 5 compares the performance of three models unigram model feature based model using only 100 sentifeatures and the tree kernel modelwe report mean and standard deviation of 5fold test accuracywe observe that the tree kernels outperform the unigram and the sentifeatures by 258 and 266 respectivelythe 100 sentifeatures described in table 4 performs as well as the unigram model that uses about 10000 featureswe also experiment with combination of modelscombining unigrams with sentifeatures outperforms the combination of kernels with sentifeatures by 078this is our best performing system for the positive versus negative task gaining about 404 absolute gain over a hard unigram baselinetable 6 presents classifier accuracy and f1measure when features are added incrementallywe start with our baseline unigram model and subsequently add various sets of featuresfirst we add all nonpolar features and observe no improvement in the performancenext we add all partofspeech based features and observe a gain of 349 over the unigram baselinewe see an additional increase in accuracy by 055 when we add other prior polarity features from these experiments we conclude that the most important features in sentifeatures are those that involve prior polarity of partsofspeechall other features play a marginal role in achieving the best performing systemin fact we experimented by using unigrams with only prior polarity pos features and achieved a performance of 751 which is only slightly lower than using all sentifeaturesin terms of unigram features we use information gain as the attribute evaluation metric to do feature selectionin table 7 we present a list of unigrams that consistently appear as top 15 unigram features across all foldswords having positive or negative prior polarity top the listemoticons also appear as important unigramssurprisingly though the word for appeared as a top featurea preliminary analysis revealed that the word for appears as frequently in positive tweets as it does in negative tweetshowever tweets containing phrases like for you and for me tend to be positive even in the absence of any other explicit prior polarity wordsowing to previous research the url appearing as a top feature is less surprising because go et al report that tweets containing urls tend to be positivethe learning curve for the 2way classification task is in figure 2the curve shows that when limited data is used the advantages in the performance of our best performing systems is even more pronouncedthis implies that with limited amount of training data simply using unigrams has a critical disadvantage while both tree kernel and 
unigram model with our features exhibit promising performancethis is a 3way classification task with classes of sentiment polarity positive negative and neutralwe use a balanced dataset of 1709 instances for each class and therefore the chance baseline is 3333for this task the unigram model achieves a gain of 2325 over chance baselinetable 8 compares the performance of our three modelswe report mean and standard deviation of 5fold test accuracywe observe that the tree kernels outperform the unigram and the sentifeatures model by 402 and 429 absolute respectivelywe note that this difference is much more pronounced comparing to the two way classification taskonce again our 100 sentifeatures perform almost as well as the unigram baseline which has about 13000 featureswe also experiment with the combination of modelsfor this classification task the combination of tree kernel with sentifeatures outperforms the combination of unigrams with sentifeatures by a small marginthis is our best performing system for the 3way classification task gaining 425 over the unigram baselinethe learning curve for the 3way classification task is similar to the curve of the 2way classification task and we omit ittable 9 presents classifier accuracy and f1measure when features are added incrementallywe start with our baseline unigram model and subsequently add various sets of featuresfirst we add all nonpolar features and observe an small improvement in the performancenext we add all partofspeech based features and observe a gain of 328 over the unigram baselinewe see an additional increase in accuracy by 064 when we add other prior polarity features these results are in line with our observations for the 2way classification taskonce again the main contribution comes from features that involve prior polarity of partsofspeechthe top ranked unigram features for the 3way classification task are mostly similar to that of the 2way classification task except several terms with neutral polarity appear to be discriminative features such as to have and sowe presented results for sentiment analysis on twitterwe use previously proposed stateoftheart unigram model as our baseline and report an overall gain of over 4 for two classification tasks a binary positive versus negative and a 3way positive versus negative versus neutralwe presented a comprehensive set of experiments for both these tasks on manually annotated data that is a random sample of stream of tweetswe investigated two kinds of models tree kernel and feature based models and demonstrate that both these models outperform the unigram baselinefor our featurebased approach we do feature analysis which reveals that the most important features are those that combine the prior polarity of words and their partsofspeech tagswe tentatively conclude that sentiment analysis for twitter data is not that different from sentiment analysis for other genresin future work we will explore even richer linguistic analysis for example parsing semantic analysis and topic modelingagarwal and rambow are funded by nsf grant iis0713548vovsha is funded by nsf grant iis0916200we would like to thank nextgen invent corporation for providing us with the twitter dataplease contact deepak mittal about obtaining the data
W11-0705
Sentiment Analysis of Twitter Data. We examine sentiment analysis on Twitter data. The contributions of this paper are: (i) we introduce POS-specific prior polarity features; (ii) we explore the use of a tree kernel to obviate the need for tedious feature engineering. The new features and the tree kernel perform approximately at the same level, both outperforming the state-of-the-art baseline. In our work, a study was conducted on a reduced corpus of manually labelled tweets.
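The experimental setup described in the sentiment analysis paper (SVMs evaluated with 5-fold cross-validation, with the C parameter tuned by an embedded 5-fold cross-validation on each fold's training data) corresponds to a standard nested cross-validation. Below is a minimal sketch using scikit-learn with a bag-of-words unigram representation as a stand-in for the paper's actual feature sets and kernels; the toy corpus is purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus standing in for the manually annotated tweets (1 = positive, 0 = negative).
texts = ["great day", "awful service", "love this", "really bad", "so happy", "worst ever"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

# The inner 5-fold CV selects C on each outer training fold; the outer 5-fold CV
# reports the averaged test accuracy, mirroring the setup described above.
pipeline = make_pipeline(CountVectorizer(), LinearSVC())
inner = GridSearchCV(pipeline, {"linearsvc__C": [0.01, 0.1, 1, 10]}, cv=5)
outer_scores = cross_val_score(inner, texts, labels, cv=5)
print(outer_scores.mean(), outer_scores.std())
```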
overview of bionlp shared task 2011 the bionlp shared task 2011 an information extraction task held over 6 months up to march 2011 met with communitywide participation receiving 46 final submissions from 24 teams five main tasks and three supporting tasks were arranged and their results show advances in the state of the art in finegrained biomedical domain information extraction and demonstrate that extraction methods successfully generalize in various aspects the bionlp shared task series represents a communitywide move toward finegrained information extraction in particular biomolecular event extraction the series is complementary to biocreative while biocreative emphasizes the shortterm applicability of introduced ie methods for tasks such as database curation bionlpst places more emphasis on the measurability of the stateoftheart and traceability of challenges in extraction through an approach more closely tied to textthese goals were pursued in the first event bionlpst 2009 through high quality benchmark data provided for system development and detailed evaluation performed to identify remaining problems hindering extraction performancealso as the complexity of the task was high and system development time limited we encouraged focus on finegrained ie by providing gold annotation for named entities as well as various supporting resourcesbionlpst 2009 attracted wide attention with 24 teams submitting final resultsthe task setup and data since have served as the basis for numerous studies as the second event of the series bionlpst 2011 preserves the general design and goals of the previous event but adds a new focus on variability to address a limitation of bionlpst 2009 the benchmark data sets were based on the genia corpus restricting the communitywide effort to resources developed by a single group for a small subdomain of molecular biologybionlpst 2011 is organized as a joint effort of several groups preparing various tasks and resources in which variability is pursued in three primary directions text types event types and subject domainsconsequently generalization of fine grained bioie in these directions is emphasized as the main theme of the second eventthis paper summarizes the entire bionlpst 2011 covering the relationships between tasks and similar broad issueseach task is presented in detail in separate overview papers and extraction systems in papers by participantsbionlpst 2011 includes four main tracks representing finegrained bioiethe ge task preserves the task definition of bionlpst 2009 arranged based on the genia corpus the data represents a focused domain of molecular biology transcription factors in human blood cellsthe purpose of the ge task is twofold to measure the progress of the community since the last event and to evaluate generalization of the technology to full papersfor the second purpose the provided data is composed of two collections the abstract collection identical to the bionlpst 2009 data and the new full paper collectionprogress on the task is measured through the unchanged task definition and the abstract collection while generalization to full papers is measured on the full paper collectionin this way the ge task is intended to connect the entire event to the previous onethe epi task focuses on ie for protein and dna modifications with particular emphasis on events of epigenetics interestwhile the basic task setup and entity definitions follow those of the ge task epi extends on the extraction targets by defining 14 new event types relevant to 
task topics including major protein modification types and their reverse reactionsfor capturing the ways in which different entities participate in these events the task extends the ge argument roles with two new roles specific to the domain sidechain and contextgenethe task design and setup are oriented toward the needs of pathway extraction and curation for domain databases and are informed by previous studies on extraction of the target events the id task concerns the extraction of events relevant to biomolecular mechanisms of infectious diseases from fulltext publicationsthe task follows the basic design of bionlpst 2009 and the id entities and extraction targets are a superset of the ge onesthe task extends considerably on core entities adding to protein four new entity types including chemical and organismthe events extend on the ge definitions in allowing arguments of the new entity types as well as in introducing a new event category for highlevel biological processesthe task was implemented in collaboration with domain experts and informed by prior studies on domain information extraction requirements including the support of systems such as patric the bacteria track consists of two tasks bb and bi241 bacteria biotope task the aim of the bb task is to extract the habitats of bacteria mentioned in textbooklevel texts written for nonexpertsthe texts are web pages about the state of the art knowledge about bacterial speciesbb targets general relations localization and partof and is challenging in that texts contain more coreferences than usual habitat references are not necessarily named entities and unlike in other bionlpst 2011 tasks all entities need to be recognized by participantsbb is the first task to target phenotypic information and as habitats are yet to be normalized by the field community presents an opportunity for the bionlp community to contribute to the standardization effortthe bi task is devoted to the extraction of bacterial molecular interactions and regulations from publication abstractsmainly focused on gene transcriptional regulation in bacillus subtilis the bi corpus is provided to participants with rich semantic annotation derived from a recently proposed ontology defining ten entity types such as gene protein and derivatives as well as dna sitesmotifstheir interactions are described through ten relation typesthe bi corpus consists of the sentences of the lll corpus provided with manually checked linguistic annotationsthe main tasks are characterized in table 1from the text type perspective bionlpst 2011 generalizes from abstracts in 2009 to full papers and web pages it also includes data collections for a variety of specific subject domains and a task whose scope is not defined through a domain but rather event typesin terms of the target event types id targets a superset of ge events and epi extends on the representation for phosphorylation events of gethe two bacteria track tasks represent an independent perspective relatively far from other tasks in terms of their target informationbionlpst 2011 includes three supporting tasks designed to assist in primary the extraction tasksother supporting resources made available to participants are presented in the co task concerns the recognition of coreferences to protein referencesit is motivated from a finding from bionlpst 2009 result analysis coreference structures in biomedical text hinder the extraction results of finegrained ie systemswhile finding connections between event triggers and protein references is 
a major part of event extraction it becomes much harder if one is replaced with a coreferencing expressionthe co task seeks to address this problemthe data sets for the task were produced based on medco annotation and other genia resources the rel task involves the recognition of two binary partof relations between entities proteincomponent and subunitcomplexthe task is motivated by specific challenges the identification of the components of proteins in text is relevant eg to the recognition of site arguments and relations between proteins and their complexes relevant to any task involving themrel setup is informed by recent semantic relation tasks the task data consisting of new annotations for ge data extends a previously introduced resource the ren task objective is to extract renaming pairs of bacillus subtilis geneprotein names from pubmed abstracts motivated by discrepancies between nomenclature databases that interfere with search and complicate normalizationren relations partially overlap several concepts explicit renaming mentions synonymy and renaming deduced from biological proofwhile the task is related to synonymy relation extraction it has a novel definition of renaming one name permanently replacing the othertable 2 shows the task schedule split into two phases to allow the use of supporting task results in addressing the main tasksin recognition of their higher complexity a longer development period was arranged for the main tasks bionlpst 2011 received 46 submissions from 24 teams while seven teams participated in multiple tasks only one team uturku submitted final results to all the tasksthe remaining 17 teams participated in only single tasksdisappointingly only two teams performed both supporting and main tasks and neither used supporting task analyses for the main tasksdetailed evaluation results and analyses are presented in individual task papers but interesting observations can be obtained also by comparisons over the taskstable 4 summarizes best results for various criteria the community has made a significant improvement in the repeated ge task with an over 10 reduction in error from 09 to geathree teams achieved better results than m10 the best previously reported individual result on the 09 datathis indicates a beneficial role from focused efforts like bionlpstthe gef and id results show that generalization to full papers is feasible with very modest loss in performance compared to abstracts the results for phosphorylation events in ge and epi are comparable with the small drop for the epi result suggesting that the removal of the ge domain specificity does not compromise extraction performanceepic results indicate some challenges in generalization to similar event types and epif suggest substantial further challenges in additional argument extractionthe complexity of id is comparable to ge also reflected to their final results which further indicate successful generalization to a new subject domain as well as to new argument typesthe bb task is in part comparable to gel and involves a representation similar to rel with lower results likely in part because bb requires entity recognitionthe bi task is comparable to lll challenge though bi involves more entity and event typesthe bi result is 20 points above the lll best result indicating a substantial progress of the community in five yearsmeeting with wide participation from the community bionlpst 2011 produced a wealth of valuable resources for the advancement of finegrained ie in biology and biomedicine and 
demonstrated that event extraction methods can successfully generalize to new text types, event types and domains. However, the goal of observing whether the supporting tasks could assist the main tasks was not met. The entire shared task period was long (more than six months) and the complexity of the tasks was high, which may have placed an excessive burden on participants and limited the uptake of novel resources. There have been ongoing efforts since bionlpst 2009 to develop ie systems based on the task resources, and we hope to see continued efforts following bionlpst 2011 as well, especially ones exploring the use of the supporting task resources for the main tasks.
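A note on the error-reduction figure quoted above: relative error reduction treats (100 - F) as a system's error and expresses the improvement as a fraction of the older system's error. The minimal Python sketch below, using hypothetical F-scores rather than the official task figures, illustrates the computation.

def relative_error_reduction(f_old: float, f_new: float) -> float:
    """Relative reduction in error when moving from f_old to f_new.

    Both scores are F-scores in [0, 100]; the "error" of a system is
    taken to be (100 - F), and the reduction is expressed as a fraction
    of the error made by the older system.
    """
    old_error = 100.0 - f_old
    new_error = 100.0 - f_new
    return (old_error - new_error) / old_error

# Hypothetical example: a 2009 system at 52.0 F and a 2011 system at 57.0 F
# corresponds to roughly a 10% relative error reduction.
if __name__ == "__main__":
    print(f"{relative_error_reduction(52.0, 57.0):.1%}")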
W11-1801
overview of bionlp shared task 2011. The bionlp shared task 2011, an information extraction task held over six months up to march 2011, met with community-wide participation, receiving 46 final submissions from 24 teams. Five main tasks and three supporting tasks were arranged, and their results show advances in the state of the art in fine-grained biomedical domain information extraction and demonstrate that extraction methods successfully generalize in various respects. The bionlp shared task 2011 series generalized the 2009 task, defining a series of tasks involving more text types, domains and target event types.
overview of genia event task in bionlp shared task 2011 the genia event task a biomolecular event extraction task is arranged as one of the main tasks of bionlp shared task 2011 as its second time to be arranged for communitywide focused efforts it aimed to measure the advance of the community since 2009 and to evaluate generalization of the technology to full text papers after a 3month system development period 15 teams submitted their performance results on test cases the results show the community has made a significant advancement in terms of both performance improvement and generalization the bionlp shared task is a series of efforts to promote a communitywide collaboration towards finegrained information extraction in biomedical domainthe first event bionlpst 2009 introducing a biomolecular event extraction task to the community attracted a wide attention with 42 teams being registered for participation and 24 teams submitting final results to establish a community effort the organizers provided the task definition benchmark data and evaluations and the participants competed in developing systems to perform the taskmeanwhile participants and organizers communicated to develop a better setup of evaluation and some provided their tools and resources for other participants making it a collaborative competitionthe final results enabled to observe the stateoftheart performance of the community on the bioevent extraction task which showed that the automatic extraction of simple events those with unary arguments eg gene expression localization phosphorylation could be achieved at the performance level of 70 in fscore but the extraction of complex events eg binding and regulation was a lot more challenging having achieved 40 of performance levelafter bionlpst 2009 all the resources from the event were released to the public to encourage continuous efforts for further advancementsince then several improvements have been reported for example miwa et al reported a significant improvement with binding events achieving 50 of performance levelthe task introduced in bionlpst 2009 was renamed to genia event task and was hosted again in bionlpst 2011 which also hosted four other ie tasks and three supporting tasks as the sole task that was repeated in the two events the ge task was referenced during the development of other tasks and took the role of connecting the results of the 2009 event to the main tasks of 2011the ge task in 2011 received final submissions from 15 teamsthe results show the community made a significant progress with the task and also show the technology can be generalized to full papers at moderate cost of performancethis paper presents the task setup preparation and discusses the resultsthe ge task follows the task definition of bionlpst 2009 which is briefly described in this sectionfor more detail please refer to table 1 shows the event types to be addressed in the taskfor each event type the primary and secondary arguments to be extracted with an event are definedfor example a phosphorylation event is primarily extracted with the protein to be phosphorylatedas secondary information the specific site to be phosphorylated may be extractedfrom a computational point of view the event types represent different levels of complexitywhen only primary arguments are considered the first five event types in table 1 are classified as simple event types requiring only unary argumentsthe binding and regulation types are more complex binding requires detection of an arbitrary number of 
arguments and regulation requires detection of recursive event structurebased on the definition of event types the entire task is divided to three subtasks addressing event extraction at different levels of specificity task 1core event extraction addresses the extraction of typed events together with their primary argumentstask 2event enrichment addresses the extraction of secondary arguments that further specify the events extracted in task 1 addresses the detection of negations and speculations over the extracted eventstask 1 serves as the backbone of the ge task and is mandatory for all participants while the other two are optionalfigure 1 shows an example of event annotationthe event encoded in the text is represented in a standoffstyle annotation as follows the annotation t1 identifies the entity referred to by the string between the character offsets 15 and 18 to be a proteint2 identifies the string translocation to refer to a localization evententities other than proteins or event type references are classified into a default class entity as in t3e1 then represents the event defined by the three entities as defined in table 1note that for task 1 the entity t3 does not need to be identified and the event e1 may be identified without specification of the secondary argument toloct1 finding the full representation of e1 is the goal of task 2in the example the localization event e1 is negated as expressed in the failure offinding the negation m1 is the goal of task 3the data sets are prepared in two collections the abstract and the full text collectionsthe abstract collection includes the same data used for bionlpst 2009 and is meant to be used to measure the progress of the communitythe full text collection includes full papers which are newly annotated and is meant to be used to measure the generalization of the technology to full paperstable 2 shows the statistics of the annotations in the ge task data setssince the training data from the full text collection is relatively small despite of the expected rich variety of expressions in full text it is expected that generalization of a model from the abstract collection to full papers would be a key technique to get a reasonable performancea full paper consists of several sections including the title abstract introduction results conclusion methods and so ondifferent sections would be written with different purposes which may affect the type of information that are found in the sectionstable 3 shows the distribution of annotations in different sectionsit indicates that event mentions according to the event definition in table 1 in methods and captions are much less frequent than in the other tiab intro and rdc sectionsfigure 2 illustrates the different distribution of annotated event types in the five sectionsit is notable that the methods section shows very different distribution compared to others while regulation and positive regulation events are not as frequent as in other sections negative regulation is relatively much more frequentit may agree with an intuition that experimental devices which will be explained in methods sections often consists of artificial processes that are designed to cause a negative regulatory effect eg mutation addition of inhibitor proteins etcthis observation suggests a different event annotation scheme or a different event extraction strategy would be required for methods sectionsin total 15 teams submitted final resultsall 15 teams participated in the mandatory task 1 four teams in task 2 and two teams 
in task 3only one team uturku completed all the three taskstable 4 shows the profile of the teams excepting three who chose to remain anonymousa brief examination on the team organization suggests the importance of a computer science background c and bi to perform the ge task which agrees with the same observation made in 2009it is interpreted as follows the role of computer scientists may be emphasized in part due to the fact that the task requires complex computational modeling demanding particular efforts in framework design and implementation and computational resourcesthe 09 column suggests that previous experience in the task may have affected to the performance of the teams especially in a complex task like the ge tasktable 5 shows the profile of the systemsa notable observation is that four teams developed their systems based on the model of uturku09 which was the winning system of bionlpst 2009it may show an influence of the bionlpst series in the taskfor syntactic analyses the prevailing use of charniak johnson reranking parser using the selftrained biomedical model from mcclosky which is converted to stanford dependency is notable which may also be an influence from the results of bionlpst 2009the last two teams xabionlp and hcmus who did not use syntactic analyses could not get a performance comparable to the others which may suggest the importance of using syntactic analyses for a complex ie task like ge tasktable 6 shows the final evaluation results of task 1for reference the reported performance of the two systems uturku09 and miwa10 is listed in the toputurku09 was the winning system of task 1 in 2009 and miwa10 was the best system reported after bionlpst 2009 particularly the latter made an impressive improvement with binding events the best performance in task 1 this time is achieved by the faust system which adopts a combination model of umass and stanfordits performance on the abstract collection 5604 demonstrates a significant improvement of the community in the repeated ge task when compared to both uturku09 5195 and miwa10 5329the biggest improvement is made to the regulation events which requires a complex modeling for recursive event structure an event may become an argument of another eventthe second ranked system umass shows the best performance on the full paper collectionit suggests that what faust obtained from the model combination might be a better optimization to abstractsthe concordu system is notable as it is the sole rulebased system that is ranked above the averageit shows a performance optimized for precision with relatively low recallthe same tendency is roughly replicated by other rulebased systems ccpbtmg tmscs xabionlp and hcmusit suggests that a rulebased system might not be a good choice if a high coverage is desiredhowever the performance of concordu for simple events suggests that a high precision can be achieved by a rule based system with a modest loss of recallit might be more true when the task is less complexthis time three teams achieved better results than miwa10 which indicates some role of focused efforts like bionlpstthe comparison between the performance on abstract and full paper collections shows that generalization to full papers is feasible with very modest loss in performance also generalization to full papersto our disappointment however an effective use of supporting task results was not observed which thus remains as future work for further improvementtables 7 shows final evaluation results of task 2for reference the reported 
performance of the task-winning system in 2009, utdbcls09, is shown at the top. The first- and second-ranked systems, faust and umass, which share an author with riedel09, made a significant improvement over riedel09 on the abstract collection. uturku achieved the best performance in finding site arguments but did not produce location arguments. In table 7, the performance of all the systems on the full-text collection suggests that finding secondary arguments in full text is much more challenging. In detail, a significant improvement was made for location arguments. A further breakdown of the results of site extraction, shown in table 8, shows that finding site arguments for phosphorylation, binding and regulation events has improved significantly in all three cases, but in different ways: the extraction of protein sites to be phosphorylated is approaching a practical level of performance, while protein sites to be bound or to be regulated remain challenging to extract. Table 9 shows the final evaluation results of task 3. For reference, the reported performance of the task-winning system in 2009, kilicoglu09, is shown at the top. Of the two teams that participated in the task, uturku showed a better performance in extracting negated events, while concordu showed a better performance in extracting speculated events. The genia event task, which was repeated for bionlpst 2009 and 2011, took the role of measuring the progress of the community and the generalization of ie technology to full papers. The results from the 15 teams that made final submissions show a clear advance of the community in terms of both performance on a focused domain and generalization to full papers. This work is supported by the integrated database project funded by the ministry of education, culture, sports, science and technology of japan.
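The standoff annotation scheme described above (text-bound annotations such as t1-t3, event annotations such as e1 with a typed trigger and role-labelled arguments, and modification annotations such as m1 for negation) is simple to consume programmatically. The sketch below assumes the common tab-separated layout of the shared task annotation files; the identifiers, offsets and strings in the example are illustrative, not taken from the corpus.

from dataclasses import dataclass, field

@dataclass
class TextBound:          # e.g. "T1<TAB>Protein 15 18<TAB>p65"
    id: str
    type: str
    start: int
    end: int
    text: str

@dataclass
class Event:              # e.g. "E1<TAB>Localization:T2 Theme:T1 ToLoc:T3"
    id: str
    type: str
    trigger: str
    args: dict = field(default_factory=dict)

@dataclass
class Modification:       # e.g. "M1<TAB>Negation E1"
    id: str
    type: str
    target: str

def parse_standoff(lines):
    """Parse standoff annotation lines into text-bound, event and
    modification records (a sketch; real files may contain further
    annotation types such as equivalences)."""
    terms, events, mods = {}, {}, {}
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        ann_id, rest = line.split("\t", 1)
        if ann_id.startswith("T"):
            info, text = rest.split("\t", 1)
            ann_type, start, end = info.split()
            terms[ann_id] = TextBound(ann_id, ann_type, int(start), int(end), text)
        elif ann_id.startswith("E"):
            parts = rest.split()
            ev_type, trigger = parts[0].split(":")
            args = dict(p.split(":", 1) for p in parts[1:])
            events[ann_id] = Event(ann_id, ev_type, trigger, args)
        elif ann_id.startswith("M"):
            mod_type, target = rest.split()
            mods[ann_id] = Modification(ann_id, mod_type, target)
    return terms, events, mods

# Example mirroring the annotation discussed above (values are illustrative):
example = [
    "T1\tProtein 15 18\tp65",
    "T2\tLocalization 25 38\ttranslocation",
    "T3\tEntity 42 49\tnucleus",
    "E1\tLocalization:T2 Theme:T1 ToLoc:T3",
    "M1\tNegation E1",
]
terms, events, mods = parse_standoff(example)
# events["E1"].args -> {"Theme": "T1", "ToLoc": "T3"}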
W11-1802
overview of genia event task in bionlp shared task 2011. The genia event task, a biomolecular event extraction task, was arranged as one of the main tasks of bionlp shared task 2011. Arranged for the second time as a focus of community-wide efforts, it aimed to measure the advance of the community since 2009 and to evaluate the generalization of the technology to full-text papers. After a three-month system development period, 15 teams submitted their performance results on test cases. The results show that the community has made a significant advancement in terms of both performance improvement and generalization.
conll2011 shared task modeling unrestricted coreference in ontonotes the conll2011 shared task involved predicting coreference using ontonotes data resources in this field have tended to be limited to noun phrase coreference often on a set of entities such as entities ontonotes provides a largescale corpus of general anaphoric coreference not restricted to noun phrases or to a specified set of entity types ontonotes also provides additional layers of integrated annotation capturing additional shallow semantic structure this paper briefly describes the ontonotes annotation and then describes the parameters of the shared task including the format preprocessing information and evaluation criteria and presents and discusses the results achieved by the participating systems having a standard test set and evaluation parameters all based on a new resource that provides multiple integrated annotation layers that could support joint models should help to energize ongoing research in the task of entity and event coreference the importance of coreference resolution for the entityevent detection task namely identifying all mentions of entities and events in text and clustering them into equivalence classes has been well recognized in the natural language processing communityautomatic identification of coreferring entities and events in text has been an uphill battle for several decades partly because it can require world knowledge which is not welldefined and partly owing to the lack of substantial annotated dataearly work on corpusbased coreference resolution dates back to the mid90s by mccarthy and lenhert where they experimented with using decision trees and handwritten rulesa systematic study was then conducted using decision trees by soon et al significant improvements have been made in the field of language processing in general and improved learning techniques have been developed to push the state of the art in coreference resolution forward various different knowledge sources from shallow semantics to encyclopedic knowledge are being exploited researchers continued finding novel ways of exploiting ontologies such as wordnetgiven that wordnet is a static ontology and as such has limitation on coverage more recently there have been successful attempts to utilize information from much larger collaboratively built resources such as wikipedia in spite of all the progress current techniques still rely primarily on surface level features such as string match proximity and edit distance syntactic features such as apposition and shallow semantic features such as number gender named entities semantic class hobbs distance etca better idea of the progress in the field can be obtained by reading recent survey articles and tutorials dedicated to this subjectcorpora to support supervised learning of this task date back to the message understanding conferences these corpora were tagged with coreferring entities identified by noun phrases in the textthe de facto standard datasets for current coreference studies are the muc and the ace1 corporathe muc corpora cover all noun phrases in text but represent small training and test setsthe ace corpora on the other hand have much more annotation but are restricted to a small subset of entitiesthey are also less consistent in terms of interannotator agreement this lessens the reliability of statistical evidence in the form of lexical coverage and semantic relatedness that could be derived from the data and used by a classifier to generate better predictive modelsthe 
importance of a welldefined tagging scheme and consistent ita has been well recognized and studied in the past there is a growing consensus that in order for these to be most useful for language understanding applications such as question answering or distillation both of which seek to take information access technology to the next level we need more consistent annotation of larger amounts of broad coverage data for training better automatic techniques for entity and event identificationidentification and encoding of richer knowledge possibly linked to knowledge sources and development of learning algorithms that would effectively incorporate them is a necessary next step towards improving the current state of the artthe computational learning community in general is also witnessing a move towards evaluations based on joint inference with the two previous conll tasks devoted to joint learning of syntactic and semantic dependenciesa principle ingredient for joint learning is the presence of multiple layers of semantic informationone fundamental question still remains and that is what would it take to improve the state of the art in coreference resolution that has not been attempted so farmany different algorithms have been tried in the past 15 years but one thing that is still lacking is a corpus comprehensively tagged on a large scale with consistent multiple layers of semantic informationone of the many goals of the ontonotes project2 is to explore whether it can fill this void and help push the progress further not only in coreference but with the various layers of semantics that it tries to captureas one of its layers it has created a corpus for general anaphoric coreference that covers entities and events not limited to noun phrases or a limited set of entity typesa small portion of this corpus from the newswire and broadcast news genres was recently used for a semeval task as mentioned earlier the coreference layer in ontonotes constitutes just one part of a multilayered integrated annotation of shallow semantic structure in text with high interannotator agreement which also provides a unique opportunity for performing joint inference over a substantial body of datathe remainder of this paper is organized as followssection 2 presents an overview of the ontonotes corpussection 3 describes the coreference annotation in ontonotessection 4 then describes the shared task including the data provided and the evaluation criteriasections 5 and 6 then describe the participating system results and analyze the approaches and section 7 concludesthe ontonotes project has created a corpus of largescale accurate and integrated annotation of multiple levels of the shallow semantic structure in textthe idea is that this rich integrated annotation covering many layers will allow for richer crosslayer models enabling significantly better automatic semantic analysisin addition to coreference this data is also tagged with syntactic trees high coverage verb and some noun propositions partial verb and noun word senses and 18 named entity typeshowever such multilayer annotations with complex crosslayer dependencies demands a robust efficient scalable mechanism for storing them while providing efficient convenient integrated access to the the underlying structureto this effect it uses a relational database representation that captures both the inter and intralayer dependencies and also provides an objectoriented api for efficient multitiered access to this data this should facilitate the creation of crosslayer 
features in integrated predictive models that will make use of these annotationsalthough ontonotes is a multilingual resource with all layers of annotation covering three languages english chinese and arabic for the scope of this paper we will just look at the english portionover the years of the development of this corpus there were various priorities that came into play and therefore not all the data in the english portion is annotated with all the different layers of annotationthere is a core portion however which is roughly 13m words which has been annotated with all the layersit comprises 450k words from newswire 150k from magazine articles 200k from broadcast news 200k from broadcast conversations and 200k web dataontonotes comprises the following layers of annotationgeneral anaphoric coreference that spans a rich set of entities and events not restricted to a few types as has been characteristic of most coreference data available until now has been tagged with a high degree of consistencyattributive coreference is tagged separately from the more common identity coreferencetwo different types of coreference are distinguished in the ontonotes data identical and appositive appositives are treated separately because they function as attributions as described further belowthe ident type is used for anaphoric coreference meaning links between pronominal nominal and named mentions of specific referentsit does not include mentions of generic underspecified or abstract entitiescoreference is annotated for all specific entities and eventsthere is no limit on the semantic types of np entities that can be considered for coreference and in particular coreference is not limited to ace typesthe mentions over which ident coreference applies are typically pronominal named or definite nominalthe annotation process begins by automatically extracting all of the np mentions from the penn treebank though the annotators can also add additional mentions when appropriatein the following two examples the phrases notated in bold form the links of an ident chainverbs are added as singleword spans if they can be coreferenced with a noun phrase or with another verbthe intent is to annotate the vp but we mark the singleword head for conveniencethis includes morphologically related nominalizations and noun phrases that refer to the same event even if they are lexically distinct from the verb in the following two examples only the chains related to the growth event are shownall pronouns and demonstratives are linked to anything that they refer to and pronouns in quoted speech are also markedexpletive or pleonastic pronouns are not considered for tagging and generic you is not markedin the following example the pronoun you and it would not be marked before a boldface phrase identifies entityevent mentions that would not be tagged as coreferentgeneric nominal mentions can be linked with referring pronouns and other definite mentions but are not linked to other generic nominal mentionsthis would allow linking of the bracketed mentions in and but not sell the phacoflex intraocular lens the first foldable silicone lens available for cataract surgerythe lens foldability enables it to be inserted in smaller incisions than are now possible for cataract surgerybare plurals as in and are always considered genericin example below there are two generic instances of parentsthese are marked as distinct ident chains each containing a generic and the related referring pronouns parentsx should be involved with theirx childrens 
education at home not in schooltheyx should see to it that theirx kids do not play truant theyx should make certain that the children spend enough time doing homework theyx should scrutinize the report cardparentsy are too likely to blame schools for the educational limitations of theiry childrenif parentsz are dissatisfied with a school theyz should have the option of switching to anotherin below the verb halve cannot be linked to a reduction of 50 since a reduction is indefinite argentina said it will ask creditor banks to halve its foreign debt of 64 billion the thirdhighest in the developing world argentina aspires to reach a reduction of 50 in the value of its external debtproper premodifiers can be coreferenced but proper nouns that are in a morphologically adjectival form are treated as adjectives and not coreferencedfor example adjectival forms of gpes such as chinese in the chinese leader would not be linkedthus we could coreference united states in the united states policy with another referent but not american the american policy gpes and nationality acronyms are also considered adjectivalpremodifier acronyms can be coreferenced unless they refer to a nationalitythus in the examples below fbi can be coreferenced to other mentions but yous cannotdates and monetary amounts can be considered part of a coreference chain even when they occur as premodifiers the current account deficit on frances balance of payments narrowed to 148 billion french francs in august from a revised 21 billion francs in july the finance ministry saidpreviously the july figure was estimated at a deficit of 613 million francs the companys 150 offer was unexpectedthe firm balked at the priceattributes signaled by copular structures are not marked these are attributes of the referent they modify and their relationship to that referent will be captured through word sense and propositional argument taggingcopular verbs are those verbs that function as a copula and are followed by a subject complementsome common copular verbs are be appear feel look seem remain stay become end up getsubject complements following such verbs are considered attributes and not linkedsince called is copular neither ident nor appos coreference is marked in the following caselike copulas small clause constructions are not markedthe following example is treated as if the copula were present deictic expressions such as now then today tomorrow yesterday etc can be linked as well as other temporal expressions that are relative to the time of the writing of the article and which may therefore require knowledge of the time of the writing to resolve the coreferenceannotators were allowed to use knowledge from outside the text in resolving these casesin the following example the end of this period and that time can be coreferenced as can this period and from three years to seven years the limit could range from three years to seven yearsx depending on the composition of the management team and the nature of its strategic planat xy the poison pill would be eliminated automatically unless a new poison pill were approved by the thencurrent shareholders who would have an opportunity to evaluate the corporations strategy and management team at that timeyin multidate temporal expressions embedded dates are not separately connected to to other mentions of that datefor example in november 2 1999 november would not be linked to another instance of november later in the textbecause they logically represent attributions appositives are tagged separately 
from identity coreferencethey consist of a head or referent and one or more attributes of that referentan appositive construction contains a noun phrase that modifies an immediatelyadjacent noun phrase it often serves to rename or further define the first mentionmarking appositive constructions allows us to capture the attributed property even though there is no explicit copulathe head of each appositive construction is distinguished from the attribute according to the following heuristic specificity scale in a decreasing order from top to bottom this leads to the following cases in cases where the two members of the appositive are equivalent in specificity the leftmost member of the appositive is marked as the headreferentdefinite nps include nps with a definite marker as well as nps with a possessive adjective thus the first element is the head in all of the following cases in the specificity scale specific names of diseases and technologies are classified as proper names whether they are capitalized or notwhen the entity to which an appositive refers is also mentioned elsewhere only the single span containing the entire appositive construction is included in the larger ident chainnone of the nested np spans are linkedin the example below the entire span can be linked to later mentions to richard godownthe subspans are not included separately in the ident chainin addition to the ones above there are some special cases such as gpes are linked to references to their governments even when the references are nested nps or the modifier and head of a single nptable 1 shows the interannotator and annotatoradjudicator agreement on all the genres of ontonoteswe also analyzed about 15k disagreements in various parts of the data and grouped them into one of the categories shown in figure 1figure 2 shows the distribution of these different types that were found in that sampleit can be seen that genuine ambiguity and annotator error are the biggest contributors the latter of which is usually captured during adjudication thus showing the increased agreement between the adjudicated version and the individual annotator versionthis section describes the conll2011 coreference task including its closed and open track versions and characterizes the data used for the task and how it was prepareddespite close to a twodecade history of evaluations on coreference tasks variation in the evaluation criteria and in the training data used have made it difficult for researchers to be clear about the state of the art or to determine which particular areas require further attentionthere are many different parameters involved in defining a coreference tasklooking at various numbers reported in literature can greatly affect the perceived difficulty of the taskit can seem to be a very hard problem or one that is somewhat easier given the space constraints we refer the reader to stoyanov et al for a detailed treatment of the issuelimitations in the size and scope of the available datasets have also constrained research progressthe muc and ace corpora are the two that have been used most for reporting comparative results but they differ in the types of entities and coreference annotatedthe ace corpus is also one that evolved over a period of almost five years with different incarnations of the task definition and different corpus crosssections on which performance numbers have been reported making it hard to untangle and interpret the resultsthe availability of the ontonotes data offered an opportunity to define a 
coreference task based on a larger more broadcoverage corpuswe have tried to design the task so that it not only can support the current evaluation but also can provide an ongoing resource for comparing different coreference algorithms and approachesthe conll2011 shared task was based on the english portion of the ontonotes 40 datathe task was to automatically identify mentions of entities and events in text and to link the coreferring mentions together to form entityevent chainsthe target coreference decisions could be made using automatically predicted information on the other structural layers including the parses semantic roles word senses and named entitiesas is customary for conll tasks there were two tracks closed and openfor the closed track systems were limited to using the distributed resources in order to allow a fair comparison of algorithm performance while the open track allowed for almost unrestricted use of external resources in addition to the provided datain the closed track systems were limited to the provided data plus the use of two prespecified external resources i wordnet and ii a precomputed number and gender table by bergsma and lin for the training and test data in addition to the underlying text predicted versions of all the supplementary layers of annotation were provided where those predictions were derived using offtheshelf tools as described in section 442for the training data however in addition to predicted values for the other layers we also provided manual goldstandard annotations for all the layersparticipants were allowed to use either the goldstandard or predicted annotation for training their systemsthey were also free to use the goldstandard data to train their own models for the various layers of annotation if they judged that those would either provide more accurate predictions or alternative predictions for use as multiple views or wished to use a lattice of predictionsmore so than previous conll tasks coreference predictions depend on world knowledge and many stateoftheart systems use information from external resources such as wordnet which can add a layer that helps the system to recognize semantic connections between the various lexicalized mentions in the texttherefore the use of wordnet was allowed even for the closed tracksince word senses in ontonotes are predominantly3 coarsegrained groupings of wordnet senses systems could also map from the predicted or goldstandard word senses provided to the sets of underlying wordnet sensesanother significant piece of knowledge that is particularly useful for coreference but that is not available in the layers of ontonotes is that of number and genderthere are many different ways of predicting these values with differing accuracies so in order to ensure that participants in the closed track were working from the same data thus allowing clearer algorithmic comparisons we specified a particular table of number and gender predictions generated by bergsma and lin for use during both training and testingfollowing the recent conll tradition participants were allowed to use both the training and the development data for training the final modelin addition to resources available in the closed track the open track systems were allowed to use external resources such as wikipedia gazetteers etcthis track is mainly to get an idea of a performance ceiling on the task at the cost of not getting a comparison across all systemsanother advantage of the open track is that it might reduce the barriers to participation 
by allowing participants to field existing research systems that already depend on external resources especially if there were hard dependencies on these resourcesthey can participate in the task with minimal or no modification to their existing systemsince there are no previously reported numbers on the full version of ontonotes we had to create a traindevelopmenttest partitionthe only portion of ontonotes that has a previously determined widely used standard split is the wsj portion of the newswire datafor that subcorpus we maintained the same partitionfor all the other portions we created stratified training development and test partitions over all the sources in ontonotes using the procedure shown in algorithm 1the list of training development and test document ids can be found on the task webpage4 this section gives details of the different annotation layers including the automatic models that were used to predict them and describes the formats in which the data were provided to the participantswe will take a look at the manually annotated or gold layers of information that were made available for the training datacoreference the manual coreference annotation is stored as chains of linked mentions connecting multiple mentions of the same entitycoreference is the only documentlevel phenomenon in ontonotes and the complexity of annotation increases nonlinearly with the length of a documentunfortunately some of the documents especially ones in the broadcast conversation weblogs and telephone conversation genre are very long which prohibited us from efficiently annotating them in entiretythese had to be split into smaller partswe conducted a few passes to join some adjacent parts but since some documents had as many as 17 parts there are still multipart documents in the corpussince the coreference chains are coherent only within each of these document parts for this task each such part is treated as a separate documentanother thing to note is that there were some cases of subtoken annotation in the corpus owing to the fact that tokens were not split at hyphenscases such as prowalmart had the subspan walmart linked with another instance of the samethe recent treebank revision which split tokens at most hyphens made a majority of these subtoken annotations go awaythere were still some residual subtoken annotationssince subtoken annotations cannot be represented in the conll format and they were a very small quantity much less than even half a percent we decided to ignore themfor various reasons not all the documents in ontonotes have been annotated with all the different layers of annotation with full coverage6 there is a core portion however which is roughly 13m words which has been annotated with all the layersthis is the portion that we used for the shared taskthe number of documents in the corpus for this task for each of the different genres are shown in table 2tables 3 and 4 shows the distribution of mentions by the syntactic categories and the counts of entities links and mentions in the corpus respectivelyall of this data has been treebanked and propbanked either as part of the ontonotes effort or some preceding effortfor comparison purposes table 2 also lists the number of documents in the muc6 muc7 and ace corporathe muc6 data was taken from the wall street journal whereas the muc7 data was from the new york timesthe ace data spanned many different genres similar to the ones in ontonotesparse trees this represents the syntactic layer that is a revised version of the penn 
treebankfor purposes of this task traces were removed from the syntactic trees since the conllstyle data format being indexed by tokens does not provide any good means of conveying that informationfunction tags were also removed since the parsers that we used for the predicted syntax layer did not provide themone thing that needs to be dealt with in conversational data is the presence of disfluencies in the original ontonotes parses these are marked using a special edited7 phrase tag as was the case for the switchboard treebankgiven the frequency of disfluencies and the performance with which one can identify them automatically8 a probable processing pipeline would filter them out before parsingsince we did not have a readily available tagger for tagging disfluencies we decided to remove them using oracle information available in the treebankpropositions the propositions in ontonotes constitute propbank semantic rolesmost of the verb predicates in the corpus have been annotated with their argumentsrecent enhancements to the propbank to make it synchronize better with the treebank have enhanced the information in the proposition by the addition of two types of links that represent pragmatic coreference and selectional preferences more details can be found in the addendum to the propbank guidelines9 in the ontonotes 40 re7there is another phrase type embed in the telephone conversation genre which is similar to the edited phrase type and sometimes identifies insertions but sometimes contains logical continuation of phrases so we decided not to remove that from the data leasesince the community is not used to this representation which relies heavily on the trace structure in the treebank which we are excluding we decided to unfold the links back to their original representation as in the release 10 of the proposition bankthis functionality is part of the ontonotes db tool10 word sense gold word sense annotation was supplied using sense numbers as specified in the ontonotes list of senses for each lemma11 the sense inventories that were provided in the ontonotes 40 release were not all mapped to the latest version 30 of wordnet so we provided a revised version of the sense inventories containing mapping to wordnet 30 on the task page for the participantsnamed entities named entities in ontonotes data are specified using a catalog of 18 name typesother layers discourse plays a vital role in coreference resolutionin the case of broadcast conversation or telephone conversation data it partially manifests in the form of speakers of a given utterance whereas in weblogs or newsgroups it does so as the writer or commenter of a particular article or threadthis information provides an important clue for correctly linking anaphoric pronouns with the right antecedentsthis information could be automatically deduced but since it would add additional complexity to the already complex task we decided to provide oracle information of this metadata both during training and testingin other words speaker and author identification was not treated as an annotation layer that needed to be predictedthis information was provided in the form of another column in the conll tablethere were some cases of interruptions and interjections that ideally would associate parts of a sentence to two different speakers but since the frequency of this was quite small we decided to make an assumption of one speakerwriter per sentencethe predicted annotation layers were derived using automatic models trained using crossvalidation on 
other portions of ontonotes dataas mentioned earlier there are some portions of the ontonotes corpus that have not been annotated for coreference but that have been annotated for other layersfor training models for each of the layers where feasible we used all the data that we could for that layer from the training portion of the entire ontonotes releaseparse trees predicted parse trees were produced using the charniak parser 12 some additional tag types used in the ontonotes trees were added to the parsers tagset including the nml tag that has recently been added to capture internal np structure and the rules used to determine head words were appropriately extendedthe parser was then retrained on the training portion of the release 40 data using 10fold crossvalidationtable 5 shows the performance of the retrained charniak parser on the conll2011 test setwe did not get a chance to retrain the reranker and since the stock reranker crashes when run on nbest parses containing nmls because it has not seen that tag in training we could not make use of itword sense we trained a word sense tagger using a svm classifier and contextual word and part of speech features on all the training portion of the ontonotes datathe ontonotes 40 corpus comprises a total of 14662 sense definitions across 4877 verb and noun lemmas13the distribution of senses per lemma is as shown in table 6table 7 shows the performance of this classifier over both the verbs and nouns in the conll2011 test setagain this performance is not directly comparable to any reported in the literature before and it seems lower then performances reported on previous versions of ontonotes because this is over all the genres of ontonotes and aggregated over both verbs and nouns in the conll2011 test setpropositions to predict propositional structure assert14 was used retrained also on all the training portion of the release 12httpbllipcsbrownedudownloadrerankingparseraug06targz 13the number of lemmas in table 6 do not add up to this number because not all of them have examples in the training data where the total number of instantiated senses amounts to 793340 datagiven time constraints we had to perform two modifications i instead of a single model that predicts all arguments including null arguments we had to use the twostage mode where the null arguments are first filtered out and the remaining nonnull arguments are classified into one of the argument types and ii the argument identification module used an ensemble of ten classifiers each trained on a tenth of the training data and performed an unweighted voting among themthis should still give a close to state of the art performance given that the argument identification performance tends to start to be asymptotic around 10k training instancesat first glance the performance on the newswire genre is much lower than what has been reported for wsj section 23this could be attributed to two factors i the fact that we had to compromise on the training method but more importantly because ii the newswire in ontonotes not only contains wsj data but also xinhua newsone could try to verify using just the wsj portion of the data but it would be hard as it is not only a subset of the documents that the performance has been reported on previously but also the annotation has been significantly revised it includes propositions for be verbs missing from the original propbank and the training data is a subset of the original data as welltable 8 shows the detailed performance numbersin addition to 
automatically predicting the arguments we also trained a classifier to tag propbank frameset ids in the data using the same word sense module as mentioned earlierontonotes 40 contains a total of 7337 framesets across 5433 verb lemmas15 an overwhelming number of them are monosemous but the more frequent verbs tend to be polysemoustable 9 gives the distribution of number of framesets per lemma in the propbank layer of the ontonotes 40 dataduring automatic processing of the data we tagged all the tokens that were tagged with a part of speech vbxthis means that there would be cases where the wrong token would be tagged with propositionsthe conll2005 scorer was used to generate the scoresnamed entities bbns identifindertmsystem was used to predict the named entitiesgiven the 15the number of lemmas in table 9 do not add up to this number because not all of them have examples in the training data where the total number of instantiated senses amounts to 4229 time constraints we could not retrain it on the ontonotes data and so an existing pretrained model was used therefore the results are not a good indicator of the models best performancethe pretrained model had also used a somewhat different catalog of name types which did not include the ontonotes norp type so that category was never predictedtable 10 shows the overall performance of the tagger on the conll2011 test set as well as the performance broken down by individual name typesidentifinder performance has been reported to be in the low 90s on wsj test setother layers as noted above systems were allowed to make use of gender and number predictions for nps using the table from bergsma and lin in order to organize the multiple rich layers of annotation the ontonotes project has created a database representation for the raw annotation layers along with a python api to manipulate them in the ontonotes distribution the data is organized as one file per layer per documentthe api requires a certain hierarchical structure with documents at the leaves inside a hierarchy of language genre source and sectionit comes with various ways of cleanly querying and manipulating the data and allows convenient access to the sense inventory and propbank frame files instead of having to interpret the raw xml versionshowever maintaining format consistency with earlier conll tasks was deemed convenient for sites that already had tools configured to deal with that formattherefore in order to distribute the data so that one could make the best of both worlds we created a new file type called conll which logically served as another layer in addition to the parse prop name and coref layerseach conll file contained a merged representation of all the ontonotes layers in the conllstyle tabular format with one line per token and with multiple columns for each token specifying the input annotation layers relevant to that token with the final column specifying the target coreference layerbecause ontonotes is not authorized to distribute the underlying text and many of the layers contain inline annotation we had to provide a skeletal form the link based muc metric ii the mention based bcubed metric and iii the entity based ceaf metric very recently blanc measure has been proposed as welleach of the metric tries to address the shortcomings or biases of the earlier metricsgiven a set of key entities k and a set of response entities r with each entity comprising one or more mentions each metric generates its variation of a precision and recall measurethe muc measure if the 
oldest and most widely usedit focuses on the links in the data18 the number of common links between entities in k and r divided by the number of links in k represents the recall whereas precision is the number of common links between entities in k and r divided by the number of links in r this metric prefers systems that have more mentions per entity a system that creates a single entity of all the mentions will get a 100 recall without significant degradation in its precisionand it ignores recall for singleton entities or entities with only one mentionthe bcubed metric tries to addresses mucss shortcomings by focusing on the mentions and computes recall and precision scores for each mentionif k is the key entity containing mention m and r is the response entity containing mention m then recall for the mention m is computed as kr k and precision for the same is is computed as kr roverall recall and precision are the average of the individual mention scoresceaf aligns every response entity with at most one key entity by finding the best onetoone mapping between the entities using an entity similarity metricthis is a maximum bipartite matching problem and can be solved by the kuhnmunkres algorithmthis is thus a entity based measuredepending on the similarity there are two variations entity based ceaf ceaf and a mention based ceaf ceafrecall is the total similarity divided by the number of mentions in k and precision is the total similarity divided by the number of mentions in r finally blanc uses a variation on the rand index suitable for evaluating coreferencethere are a few other measures one being the ace value but since this is specific to a restricted set of entities we did not consider itin order to determine the best performing system in the shared task we needed to associate a single number with each systemthis could have been one of the metrics above or some combination of more than one of themthe choice was not simple and while we consulted various researchers in the field hoping for a strong consensus their conclusion seemed to be that each metric had its pros and conswe settled on the mela metric by denis and baldridge which takes a weighted average of three metrics muc bcubed and ceafthe rationale for the combination is that each of the three metrics represents a different important dimension the muc measure being based on links the bcubed based on mentions and the ceaf based on entitiesfor a given task a weighted average of the three might be optimal but since we do not have an end task in mind we decided to use the unweighted mean of the three metrics as the score on which the winning system was judgedwe decided to use ceaf instead of ceafmwe used the same core scorer implementation19 that was used for the semeval2010 task and which implemented all the different metricsthere were a couple of modifications done to this scorer after it was used for the semeval2010 tasksince there are differences in the version used for conll and the one available on the download site and it is possible that the latter would be revised in the future we have archived the version of the scorer on the conll2011 task webpage20about 65 different groups demonstrated interest in the shared task by registering on the task webpageof these 23 groups submitted system outputs on the test set during the evaluation week18 groups submitted only closed track results 3 groups only open track results and 2 groups submitted both closed and open track results2 participants in the closed track did not write system 
papers so we do not use their results in the discussiontheir results will be reported on the task webpagethe official results for the 18 systems that submitted closed track outputs are shown in table 12 with those for the 5 systems that submitted open track results in table 13the official ranking score the arithmetic mean of the fscores of muc bcubed and ceaf is shown in the rightmost columnfor convenience systems will be referred to here using the first portion of the full name which is unique within each tablefor completeness the tables include the raw precision and recall scores from which the fscores were derivedthe tables also include two additional scores that did not factor into the official ranking scoreuseful further analysis may be possible based on these results beyond the preliminary results presented hereas discussed previously in the task description we will consider three different test input conditions i predicted only ii predicted plus gold mention boundaries and iii predicted plus gold mentions for the official test beyond the raw source text coreference systems were provided only with the predictions from automatic engines as to the other annotation layers in this evaluation it is important to note that the mention detection score cannot be considered in isolation of the coreference task as has usually been the casethis is mainly owing to the fact that there are no singleton entities in the ontonotes datamost systems removed singletons from the response as a postprocessing step so not only will they not get credit for the singleton entities that they correctly removed from the data but they will be penalized for the ones that they accidentally linked with another mentionwhat this number does indicate is the ceiling on recall that a system would have got in absence of being penalized for making mistakes in coreference resolutiona close look at the table 12 indicates a possible outlier in case of the sapena systemthe recall for this system is very high and precision way lower than any other systemfurther investigations uncovered that the reason for this aberrant behavior was that fact that this system opted to keep singletons in the responseby design the scorer removes singletons that might be still present in the system but it does so after the mention detection accuracy is computedthe official scores top out in the high 50swhile this is lower than the figures cited in previous coreference evaluations that is as expected given that the task here includes predicting the underlying mentions and mention boundaries the insistence on exact match and given that the relatively easier appositive coreference cases are not included in this measurethe topperforming system had a score of 5779 which is about 18 points higher than that of the second and third ranking systems which scored 5599 and 5596 respectivelyanother 15 points separates them from the fourth best score of 5453 thus the performance differences between the betterscoring systems were not large with only about three points separating the top four systemsthis becomes even clearer if we merge in the results of systems that participated only in the open track but that made relatively limited use of outside resources21 comparing that way the cai system scores in the same ball park as the second rank systems the uryupina system similarly scores very close to nuguess 5453 given that our choice of the official metric was somewhat arbitrary if is also useful to look at the individual metrics including the mentionbased ceafm and 
blanc metrics that were not part of the official metricthe lee system which scored the best using the official metric does slightly worse than song on the muc metric and also does slightly worse than chang on the bcubed and blanc metricshowever it does much better than every other group on the entitybased ceafe and this is the primary reason for its 18 point advantage in the official scoreif the ceafe measure does indicate the accuracy of entities in the response this suggests that the lee system is doing better on getting coherent entities than any other systemthis could be partly due to the fact that that system is primarily a precisionbased system that would tend to create purer entitiesthe ceafe measure also seems to penalize other systems more harshly than do the other measureswe cannot compare these results to the ones obtained in the semeval2010 coreference task using a small portion of ontonotes data because it was only using nominal entities and had heuristically added singleton mentions to the ontonotes data22 21the cai system specifically mentions that and the only resource that the uryupina system used outside of the closed track setting was the stanford named entity tagger22the documentation that comes with the semeval data package from ldc states only nominal mentions and identical types were taken from the ontonotes coreference annotation thus excluding coreference we also explored performance when the systems were provided with the gold mention boundaries that is with the exact spans for all of the np constituents in the humanannotated parse trees for the test datasystems could use this additional data to ensure that the output mention spans in their entity chains would not clash with those in the answer setsince this was a secondary evaluation it was an optional element and not all participants ran their systems on this task variationthe results for those systems that did participate in this optional task are shown in tables 14 and 15 most of the better scoring systems did supply these resultswhile all systems did slightly better here in terms of raw scores the performance was not much different from the official task indicating that mention boundary errors resulting from problems in parsing do not contribute significantly to the final output23 one side benefit of performing this supplemental evaluation was that it revealed a subtle bug in the automatic scoring routine that we were using that could doublecount duplicate correct mentions in a given entity chainthese can occur for example if the system considers a unitproduction npprp combination as two mentions that identify the exact same token in the text and reports them as separate mentionsmost systems had a filter in their processing that selected only one of these duplicate mentions but the kobdani system considered both as potential mentions and its developers tuned their algorithm using that flawed version of the scorerwhen we fixed the scorer and reevaluated all of the systems the kobdani system was the only one whose score was affected significantly dropping by about 8 points which lowered that systems rank from second to ninthit is not clear how much of this was owing to the fact that the systems paramrelations with verbs and appositivessince ontonotes is only annotated with multimention entities singleton referential elements were identified heuristically all nps and possessive determiners were annotated as singletons excluding those functioning as appositives or as premodifiers but for nps in the possessive 
casein coordinated nps single constituents as well as the entire nps were considered to be mentionsthere is no reliable heuristic to automatically detect english expletive pronouns thus they were also annotated as singletons 23it would be interesting to measure the overlap between the entity clusters for these two cases to see whether there was any substantial difference in the mention chains besides the expected differences in boundaries for individual mentions eters had been tuned using the scorer with the bug which doublecredited duplicate mentionsto find out for sure one would have to retune the system using the modified scorerone difficulty with this supplementary evaluation using gold mention boundaries is that those boundaries alone provide only very partial informationfor the roughly 10 of mentions that the automatic parser did not correctly identify while the systems knew the correct boundaries they had no hierarchical parser or semantic role label information and they also had to further approximate the already heuristic head word identificationthis incomplete data complicated the systems task and also complicates interpretation of the resultsthe final supplementary condition that we explored was if the systems were supplied with the manuallyannotated spans for exactly those mentions that did participate in the gold standard coreference chainsthis supplies significantly more information than the previous case where exact spans were supplied for all nps since the gold mentions list here will also include verb headwords that are linked to event nps but will not include singleton mentions which do not end up as part of any chainthe latter constraint makes this test seem somewhat artificial since it directly reveals part of what the systems are designed to determine but it still has some value in quantifying the impact that mention detection has on the overall task and what the results are if the mention detection is perfectsince this was a logical extension of the task and since the data was available to the participants for the development set a few of the sites did run experiments of this typetherefore we decided to provide the gold mentions data to a few sites who had reported these scores so that we could compute the performance on the test setthe results of these experiments are shown in tables 16 and 17the results show that performance does go up significantly indicating that it is markedly easier for the systems to generate better entities given gold mentionsalthough ideally one would expect a perfect mention detection score it is the case that one of the two systems lee did not get a 100 recallthis could possibly be owing to unlinked singletons that were removed in postprocessingthe lee system developers also ran a further experiment where both gold mentions for the elements of the coreference chains and also gold annotations for all the other layers were available to the systemsurprisingly the improvement in coreference performance from having gold annotation of the other layers was almost negligiblethis suggests that either i the automatic models are predicting those layers well enough that switching to gold does not make much difference ii information from the other layers does not provide much leverage for coreference resolution or iii current coreference models are not capable of utilizing the information from these other layers effectivelygiven the performance numbers on the individual layers cited earlier seems unlikely and we hope that further research in how best to 
leverage these layers will result in models that can benefit from them more definitivelyin order to check how stringent the official exact match scoring is we also performed a relaxed scoringunlike ace and muc the ontonotes data does not have manually annotated minimum spans that a mention must contain to be considered correcthowever ontonotes does have manual syntactic analysis in the form of the treebanktherefore we decided to approximate the minimum spans by using the head words of the mentions using the gold standard syntax treeif the response mention contained the head word and did not exceed the true mention boundary then it was considered correct both from the point of view of mention detection and coreference resolutionthe scores using this relaxed strategy for the open and closed track submissions using predicted data are shown in tables 18 and 19it can be observed that the relaxed head word based scoring does not improve performance very muchthe only exception was the klenner system whose performance increased from 5177 to 5528overall the ranking remained quite stable though it did change for some adjacent systems which had very close exact match scoresin order to check how the systems did on various genres we scored their performance per genre as welltables 20 and 21 summarize genre based performance for the closed and open track participants respectivelysystem performance does not seem to vary as much across the different genres as is normally the case with language processing tasks which could suggest that coreference is relatively genre insensitive or it is possible that scores are two low for the difference to be apparentcomparisons are difficult however because the spoken genres were treated here with perfect speech recognition accuracy and perfect speaker turn informationunder more realistic application conditions the spread in performance between genres might be greatertables 22 and 23 summarize the approaches of the participating systems along with some of the important dimensionsmost of the systems broke the problem into two phases first identifying the potential mentions in the text and then linking the mentions to form coreference chainsmost participants also used rulebased approaches for mention detection though two did use trained modelswhile trained morels seem able to better balance precision and recall and thus to achieve a higher fscore on the mention task itself their recall tends to be quite a bit lower than that achievable by rulebased systems designed to favor recallthis impacts coreference scores because the full coreference system has no way to recover if the mention detection stage misses a potentially anaphoric mentiononly one of the participating systems cai attempted to do joint mention detection and coreference resolutionwhile it did not happen to be among the topperforming systems the difference in performance could be due to the richer features used by other systems rather than to the use of a joint modelmost systems represented the markable mentions internally in terms of the parse tree np constituent span but some systems used shared attribute models where the attributes of the merged entity are determined collectively by heuristically merging the attribute types and values of the different constituent mentionsvarious types of trained models were used for predicting coreferenceit is interesting to note that some of the systems including the bestperforming one used a completely rulebased approach even for this componentmost participants appear not 
to have focused much on eventive coreference those coreference chains that build off verbs in the datathis usually meant that mentions that should have linked to the eventive verb were instead linked in with some other entityparticipants may have chosen not to focus on events because they pose unique challenges while making up only a small portion of the dataroughly 91 of mentions in the data are nps and pronounsin the systems that used trained models many systems used the approach described in soon et al for selecting the positive and negative training examples while others used some of the alternative approaches that have been introduced in the research literature more recentlymany of the trained systems also were able to improve their performance by using feature selection though things varied some depending on the example selection strategy and the classifier usedalmost half of the trained systems used the feature selection strategy from soon et al and found it beneficialit is not clear whether the other systems did not explore this path or whether it just did not prove as useful in their casein this paper we described the anaphoric coreference information and other layers of annotation in the ontonotes corpus and presented the results from an evaluation on learning such unrestricted entities and events in textthe following represent our conclusions on reviewing the results perhaps the most surprising finding was that the bestperforming system was completely rulebased rather than trainedthis suggests that their rulebased approach was able to do a more effective job of combining the multiple sources of evidence than the trained systemsthe features for coreference prediction are certainly more complex than for many other language processing tasks which makes it more challenging to generate effective feature combinationsthe rulebased approach used by the bestperforming system seemed to benefit from a heuristic that captured the most confident links before considering less confident ones and also made use of the information in the guidelines in a slightly more refined manner than other systemsthey also included appositives and copular constructions in their calculationsalthough ontonotes does not count those as instances of ident coreference using that information may have helped their system discover additional useful links it is interesting to note that the developers of the lee system also did the experiment of running their system using gold standard information on the individual layers rather than automatic model predictionsthe somewhat surprising result was that using perfect information for the other layers did not end up improving coreference performance much if at allit is not clear whether this means that i automatic predictors for the individual layers are accurate enough already ii information captured by those supplementary layers actually does not provide much leverage for resolving coreference or iii researchers have yet to find an effective way of capturing and utilizing the extra information provided by these layers it does seem that collecting information about an entity by merging information across the various attributes of the mentions that comprise it can be useful though not all systems that attempted this achieved a benefit system performance did not seem to vary as much across the different genres as is
normally the case with language processing tasks which could suggest that coreference is relatively genre insensitive or it is possible that scores are two low for the difference to be apparentcomparisons are difficult however because the spoken genres were treated here with perfect speech recognition accuracy and perfect speaker turn informationunder more realistic application conditions the spread in performance between genres might be greater it is noteworthy that systems did not seem to attempt the kind of joint inference that could make use of the full potential of various layers available in ontonotes but this could well have been owing to the limited time available for the shared task we had expected to see more attention paid to event coreference which is a novel feature in this data but again given the time constraints and given that events represent only a small portion of the total it is not surprising that most systems chose not to focus on it versions of the ceaf metric which tries to capture the goodness of the entities in the output seem much lower than the other metric though it is not clear whether that means that our systems are doing a poor job of creating coherent entities or whether that metric is just especially harshfinally it is interesting to note that the problem of coreference does not seem to be following the same kind of learning curve that we are used to with other problems of this sortwhile performance has improved somewhat it is not clear how far we will be able to go given the strategies at hand or whether new techniques will be needed to capture additional information from the texts or from world knowledgewe hope that this corpus and task will provide a useful resource for continued experimentation to help resolve this issuewe gratefully acknowledge the support of the defense advanced research projects agency under the gale program darpacmo contract nohr001106c0022we would like to thank all the participantswithout their hard work patience and perseverance this evaluation would not have been a successwe would also like to thank the linguistic data consortium for making the ontonotes 40 corpus freely and timely available to the participantsemili sapena who graciously allowed the use of his scorer implementation and made available enhancements and immediately fixed issues that were uncovered during the evaluationfinally we offer our special thanks to llufs marquez and joakim nivre for their wonderful support and guidance without which this task would not have been successful
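to make the scoring discussion above concrete, the short python sketch below shows one way the per-mention bcubed scores and the unweighted mean of the muc, bcubed and ceafe f-scores could be computed. it is an illustration only, not the official scorer, and it assumes key and response entities are given as collections of python sets of mention identifiers; the names bcubed, f_score and official_score are our own.

def bcubed(key_entities, response_entities):
    # key_entities / response_entities: iterables of sets of mention identifiers
    key_of = {m: e for e in key_entities for m in e}        # mention -> its key entity
    resp_of = {m: e for e in response_entities for m in e}  # mention -> its response entity
    common = set(key_of) & set(resp_of)
    # mentions missing from the other side contribute zero to the averages
    recall = sum(len(key_of[m] & resp_of[m]) / len(key_of[m]) for m in common) / len(key_of)
    precision = sum(len(key_of[m] & resp_of[m]) / len(resp_of[m]) for m in common) / len(resp_of)
    return precision, recall

def f_score(precision, recall):
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def official_score(muc_f, bcubed_f, ceafe_f):
    # unweighted mean of the three f-scores, as used for the official ranking
    return (muc_f + bcubed_f + ceafe_f) / 3.0

for example bcubed([{"a", "b", "c"}, {"d", "e"}], [{"a", "b"}, {"c", "d", "e"}]) gives a precision and recall of about 0.73 each.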
W11-1901
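the relaxed head-word scoring used in the supplementary analysis above reduces to a single check per mention pair; the sketch below is our own reading of that criterion, assuming mentions are inclusive (start, end) token spans and that the gold head index is known.

def relaxed_match(gold_span, gold_head, response_span):
    # a response mention counts as correct if it contains the gold head word
    # and does not extend beyond the gold mention boundaries
    g_start, g_end = gold_span
    r_start, r_end = response_span
    contains_head = r_start <= gold_head <= r_end
    within_gold = g_start <= r_start and r_end <= g_end
    return contains_head and within_gold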
conll2011 shared task modeling unrestricted coreference in ontonotesthe conll2011 shared task involved predicting coreference using ontonotes dataresources in this field have tended to be limited to noun phrase coreference often on a restricted set of entities such as ace entitiesontonotes provides a largescale corpus of general anaphoric coreference not restricted to noun phrases or to a specified set of entity typesontonotes also provides additional layers of integrated annotation capturing additional shallow semantic structurethis paper briefly describes the ontonotes annotation and then describes the parameters of the shared task including the format preprocessing information and evaluation criteria and presents and discusses the results achieved by the participating systemshaving a standard test set and evaluation parameters all based on a new resource that provides multiple integrated annotation layers that could support joint models should help to energize ongoing research in the task of entity and event coreferencean overview of all systems participating in the conll2011 shared task and their results is provided here
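as a small illustration of the conll-style tabular format mentioned above, the sketch below reads the final (coreference) column of one document part back into entities; it assumes the usual bracket notation, with an opening "(id", a closing "id)", a single-token "(id)", several items joined by "|", and "-" when nothing opens or closes on a token. the function name is our own.

from collections import defaultdict

def read_coref_column(cells):
    # cells: the last-column value for each token, in token order
    entities = defaultdict(list)    # entity id -> list of (start, end) token spans
    open_spans = defaultdict(list)  # entity id -> stack of pending start positions
    for i, cell in enumerate(cells):
        if cell == "-":
            continue
        for item in cell.split("|"):
            eid = item.strip("()")
            if item.startswith("("):
                open_spans[eid].append(i)
            if item.endswith(")"):
                start = open_spans[eid].pop()
                entities[eid].append((start, i))
    return dict(entities)

for example read_coref_column(["(3", "-", "3)", "(3)"]) returns {"3": [(0, 2), (3, 3)]}.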
stanfordâs multipass sieve coreference resolution system at the conll2011 shared task this paper details the coreference resolution system submitted by stanford at the conll 2011 shared task our system is a collection of deterministic coreference resolution models that incorporate lexical syntactic semantic and discourse information all these models use global documentlevel information by sharing mention attributes such as gender and number across mentions in the same cluster we participated in both the open and closed tracks and submitted results using both predicted and gold mentions our system was ranked first in both tracks with a score of 578 in the closed track and 583 in the open track this paper describes the coreference resolution system used by stanford at the conll2011 shared task our system extends the multipass sieve system of raghunathan et al which applies tiers of deterministic coreference models one at a time from highest to lowest precisioneach tier builds on the entity clusters constructed by previous models in the sieve guaranteeing that stronger features are given precedence over weaker onesfurthermore this model propagates global information by sharing attributes across mentions in the same clusterwe made three considerable extensions to the raghunathan et al modelfirst we added five additional sieves the majority of which address the semantic similarity between mentions eg using wordnet distance and shallow discourse understanding eg linking speakers to compatible pronounssecond we incorporated a mention detection sieve at the beginning of the processing flowthis sieve filters our syntactic constituents unlikely to be mentions using a simple set of rules on top of the syntactic analysis of textand lastly we added a postprocessing step which guarantees that the output of our system is compatible with the shared task and ontonotes specifications using this system we participated in both the closed1 and open2 tracks using both predicted and gold mentionsusing predicted mentions our system had an overall score of 578 in the closed track and 583 in the open trackthese were the top scores in both tracksusing gold mentions our system scored 607 in the closed track in 614 in the open trackwe describe the architecture of our entire system in section 2in section 3 we show the results of several experiments which compare the impact of the various features in our system and analyze the performance drop as we switch from gold mentions and annotations to predicted informationwe also report in this section our official results in the testing partitionour system consists of three main stages mention detection followed by coreference resolution and finally postprocessingin the first stage mentions are extracted and relevant information about mentions eg gender and number is prepared for the next stepthe second stage implements the actual coreference resolution of the identified mentionssieves in this stage are sorted from highest to lowest precisionfor example the first sieve requires an exact string match between a mention and its antecedent whereas the last one implements pronominal coreference resolutionpostprocessing is performed to adjust our output to the task specific constraints eg removing singletonsit is important to note that the first system stage ie the mention detection sieve favors recall heavily whereas the second stage which includes the actual coreference resolution sieves is precision orientedour results show that this design lead to stateoftheart performance 
despite the simplicity of the individual componentsthis strategy has been successfully used before for information extraction eg in the bionlp 2009 event extraction shared task several of the top systems had a first highrecall component to identify event anchors followed by highprecision classifiers which identified event arguments and removed unlikely event candidates in the coreference resolution space several works have shown that applying a list of rules from highest to lowest precision is beneficial for coreference resolution however we believe we are the first to show that this highrecallhighprecision strategy yields competitive results for the complete task of coreference resolution ie including mention detection and both nominal and pronominal coreferencein our particular setup the recall of the mention detection component is more important than its precision because any missed mentions are guaranteed to affect the final score but spurious mentions may not impact the overall score if they are left as singletons which are discarded by our postprocessing steptherefore our mention detection algorithm focuses on attaining high recall rather than high precisionwe achieve our goal based on the list of sieves sorted by recall each sieve uses syntactic parse trees identified named entity mentions and a few manually written patterns based on heuristics and ontonotes specifications in the first and highest recall sieve we mark all noun phrase possessive pronoun and named entity mentions in each sentence as candidate mentionsin the following sieves we remove from this set all mentions that match any of the exclusion rules below of 8 words eg there ltd hmmnote that the above rules extract both mentions in appositive and copulative relations eg yongkang zhou the general manager or mr savoca had been a consultant these relations are not annotated in the ontonotes corpus eg in the text yongkang zhou the general manager only the larger mention is annotatedhowever appositive and copulative relations provide useful information to our coreference sievesfor this reason we keep these mentions as candidates and remove them later during postprocessingonce mentions are extracted we sort them by sentence number and lefttoright breadthfirst traversal order in syntactic trees in the same sentence we select for resolution only the first mentions in each cluster3 for two reasons the first mention tends to be better defined which provides a richer environment for feature extraction and it has fewer antecedent candidates which means fewer opportunities to make a mistakefor example given the following ordered list of mentions mi m2 m3 m4 m1 m6 where the subscript indicates textual order and the superscript indicates cluster id our model will attempt to resolve only m2 and m4furthermore we discard first mentions that start with indefinite pronouns or indefinite articles if they have no antecedents that have the exact same string extentsfor each selected mention mi all previous mentions mi1 m1 become antecedent candidatesall sieves traverse the candidate list until they find a coreferent antecedent according to their criteria or reach the end of the listcrucially when comparing two mentions our approach uses information from the entire clusters that contain these mentions instead of using just information local to the corresponding mentionsspecifically mentions in a cluster share their attributes between them so coreference decision are better informedfor example if a cluster contains two mentions a group of 
students which is singular and five students which is plural the number attribute of the entire cluster becomes singular or plural which allows it to match other mentions that are both singular and pluralplease see for more detailsthe core of our coreference resolution system is an incremental extension of the system described in raghunathan et al our core model includes two new sieves that address nominal mentions and are inserted based on their precision in a heldout corpus since these two sieves use 3we initialize the clusters as singletons and grow them progressively in each sieveordered sieves simple lexical constraints without semantic information we consider them part of the baseline modelrelaxed string match this sieve considers two nominal mentions as coreferent if the strings obtained by dropping the text following their head words are identical eg clinton and clinton whose term ends in januaryproper head word match this sieve marks two mentions headed by proper nouns as coreferent if they have the same head word and satisfy the following constraints not iwithini same as raghunathan et al no location mismatches the modifiers of two mentions cannot contain different location named entities other proper nouns or spatial modifiersfor example lebanon and southern lebanon are not coreferentno numeric mismatches the second mention cannot have a number that does not appear in the antecedent eg people and around 200 people are not coreferentin addition to the above a few more rules are added to get better performance for predicted mentionspronoun distance sentence distance between a pronoun and its antecedent cannot be larger than 3bare plurals bare plurals are generic and cannot have a coreferent antecedentwe first extend the above system with two new sieves that exploit semantics from wordnet wikipedia infoboxes and freebase records drawing on previous coreference work using these databases since the input to a sieve is a collection of mention clusters built by the previous sieves we need to link mention clusters to records in these three knowledge basesthe following steps generate a query for these resources from a mention clusterfirst we select the most representative mention in a cluster by preferring mentions headed by proper nouns to mentions headed by common nouns and nominal mentions to pronominal onesin case of ties we select the longer stringfor example the mention selected from the cluster president george w bush president he is president george w bushsecond if this mention returns nothing from the knowledge bases we implement the following query relaxation algorithm remove the text following the mention head word select the lowest noun phrase in the parse tree that includes the mention head word use the longest proper noun sequence that ends with the head word select the head wordfor example the query president bill clinton whose term ends in january is successively changed to president bill clinton then bill clinton and finally clintonif multiple records are returned we keep the top two for wikipedia and freebase and all synsets for wordnetthis sieve addresses name aliases which are detected as followstwo mentions headed by proper nouns are marked as aliases if they appear in the same wikipedia infobox or freebase record in either the name or alias field or they appear in the same synset in wordnetas an example this sieve correctly detects america online and aol as aliaseswe also tested the utility of wikipedia categories but found little gain over morphosyntactic 
featuresthis sieve marks two nominal mentions as coreferent if they are linked by a wordnet lexical chain that traverses hypernymy or synonymy relationswe use all synsets for each mention but restrict it to mentions that are at most three sentences apart and lexical chains of length at most fourthis sieve correctly links britain with country and plane with aircraftto increase the precision of the above two sieves we use additional constraints before two mentions can match attribute agreement no iwithini no location or numeric mismatches and we do not use the abstract entity synset in wordnet except in chains that include organizationthis sieve matches speakers to compatible pronouns using shallow discourse understanding to handle quotations and conversation transcriptsalthough more complex discourse constraints have been proposed it has been difficult to show improvements we begin by identifying speakers within textin nonconversational text we use a simple heuristic that searches for the subjects of reporting verbs in the same sentence or neighboring sentences to a quotationin conversational text speaker information is provided in the datasetthe extracted speakers then allow us to implement the following sieve heuristics for example i my and she in the following sentence are coreferent i voted for nader because he was most aligned with my values she saidin addition to the above sieve we impose speaker constraints on decisions made by subsequent sieves for example my and he are not coreferent in the above example to guarantee that the output of our system matches the shared task requirements and the ontonotes annotation specification we implement two postprocessing stepstable 2 shows the performance of our mention detection algorithmwe show results before and after coreference resolution and postprocessing we also list results with gold and predicted linguistic annotations the table shows that the recall of our approach is 928 or 879 in both cases precision is low because our algorithm generates many spurious mentions due to its local naturehowever as the table indicates many of these mentions are removed during postprocessing because they are assigned to singleton clusters during coreference resolutionthe two main causes for our recall errors are lack of recognition of event mentions and parsing errorsparsing errors often introduce incorrect mention boundaries which yield both recall and precision errorsfor example our system generates the predicted mention the working meeting of the 863 program today for the gold mention the working meeting of the 863 programdue to this boundary mismatch all mentions found to be coreferent with this predicted mention are counted as precision errors and all mentions in the same coreference cluster with the gold mention are counted as recall errorstable 3 lists the results of our endtoend system on the development partitionexternal resources which were used only in the open track includes a handbuilt list of genders of first names that we created incorporating frequent names from census lists and other sources an animacy list a country and state gazetteer and a demonym listdiscourse stands for the sieve introduced in section 233semantics stands for the sieves presented in section 232the table shows that the discourse sieve yields an improvement of almost 2 points to the overall score and external resources contribute 05 pointson the other hand the semantic sieves do not help the latter result contradicts our initial experiments where we measured a minor 
improvement when these sieves were enabled and gold mentions were usedour hypothesis is that when predicted mentions are used the semantic sieves are more likely to link spurious mentions to existing clusters thus introducing precision errorsthis suggests that a different tuning of the sieve parameters is required for the predicted mention scenariofor this reason we did not use the semantic sieves for our submissionhence rows 2 and 3 in the table show the performance of our official submission in the development set in the closed and open tracks respectivelythe last three rows in table 3 give insight on the impact of gold informationthis analysis indicates that using gold linguistic annotation yields an improvement of only 2 pointsthis implies that the quality of current linguistic processors is sufficient for the task of coreference resolutionon the other hand using gold mentions raises the overall score by 15 pointsthis clearly indicates that pipeline architectures where mentions are identified first are inadequate for this task and that coreference resolution might benefit from the joint modeling of mentions and coreference chainsfinally table 4 lists our results on the heldout testing partitionnote that in this dataset the gold mentions included singletons and generic mentions as well whereas in development gold mentions included only mentions part of an actual coreference chainthis explains the large difference between say line 6 in table 3 and line 4 in table 4our scores are comparable to previously reported stateoftheart results for coreference resolution with predicted mentionsfor example haghighi and klein compare four stateoftheart systems on three different corpora and report b3 scores between 63 and 77 pointswhile the corpora used in are different from the one in this shared task our result of 68 b3 suggests that our systems performance is competitivein this task our submissions in both the open and the closed track obtained the highest scoresin this work we showed how a competitive endtoend coreference resolution system can be built using only deterministic models our approach starts with a highrecall mention detection component which identifies mentions using only syntactic information and named entity boundaries followed by a battery of highprecision deterministic coreference sieves applied one at a time from highest to lowest precisionthese models incorporate lexical syntactic semantic and discourse information and have access to documentlevel information for this shared task we extended our existing system with new sieves that model shallow discourse and semantics our results demonstrate that despite their simplicity deterministic models for coreference resolution obtain competitive results eg we obtained the highest scores in both the closed and open tracks the code used for this shared task is publicly released5we thank the shared task organizers for their effortthis material is based upon work supported by the air force research laboratory under prime contract nofa875009c0181any opinions findings and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the air force research laboratory
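the control flow of the sieve architecture described in this paper can be sketched roughly as follows; this is a simplified illustration rather than the released stanford code, it assumes mentions are unique hashable objects (for example span tuples) given in textual order, that each sieve is a function deciding whether a mention and a candidate antecedent corefer, and it ignores the more careful antecedent ordering and attribute-sharing machinery of the real system.

def resolve(mentions, sieves):
    # mentions: list of unique hashable mention objects in textual order
    # sieves: functions (mention, antecedent, clusters) -> bool,
    #         ordered from highest to lowest precision
    clusters = {m: {m} for m in mentions}            # start from singleton clusters
    for sieve in sieves:
        for i, mention in enumerate(mentions):
            # only the first (earliest) mention of a cluster is resolved
            if min(clusters[mention], key=mentions.index) != mention:
                continue
            for antecedent in reversed(mentions[:i]):
                if clusters[antecedent] is clusters[mention]:
                    continue                         # already in the same cluster
                if sieve(mention, antecedent, clusters):
                    merged = clusters[mention] | clusters[antecedent]
                    for m in merged:                 # all members now share one cluster
                        clusters[m] = merged
                    break
    return clusters

a sieve in this setting can be as simple as an exact string match over, say, (span, text) tuples, e.g. lambda m, a, _: m[1] == a[1].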
W11-1902
stanfords multipass sieve coreference resolution system at the conll2011 shared taskthis paper details the coreference resolution system submitted by stanford at the conll2011 shared taskour system is a collection of deterministic coreference resolution models that incorporate lexical syntactic semantic and discourse informationall these models use global documentlevel information by sharing mention attributes such as gender and number across mentions in the same clusterwe participated in both the open and closed tracks and submitted results using both predicted and gold mentionsour system was ranked first in both tracks with a score of 578 in the closed track and 583 in the open trackthe stanford coreference resolver which won the conll2011 shared task on coreference resolution adopts the oncepopular rulebased approach resolving pronouns simply with rules that encode the aforementioned traditional linguistic constraints on coreference such as the binding constraints and gender and number agreement the infrequency of occurrences of difficult pronouns in these standard evaluation corpora by no means undermines their significance however
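the query relaxation order used by the semantic sieves above lends itself to a small generator sketch; the code below is illustrative only, it assumes the caller supplies the mention tokens, the head index, the pos tags and the span of the lowest np containing the head, and that kb is some dict-like lookup from query strings to lists of records.

def candidate_queries(tokens, head_idx, pos_tags, lowest_np_span):
    yield " ".join(tokens)                      # the representative mention string
    yield " ".join(tokens[:head_idx + 1])       # drop the text after the head word
    s, e = lowest_np_span
    yield " ".join(tokens[s:e + 1])             # lowest np containing the head word
    start = head_idx
    while start > 0 and pos_tags[start - 1].startswith("NNP"):
        start -= 1                              # longest proper-noun run ending at the head
    yield " ".join(tokens[start:head_idx + 1])
    yield tokens[head_idx]                      # finally, the head word alone

def kb_lookup(kb, tokens, head_idx, pos_tags, lowest_np_span):
    for query in candidate_queries(tokens, head_idx, pos_tags, lowest_np_span):
        records = kb.get(query)                 # placeholder dict-like interface
        if records:
            return records[:2]                  # keep the top two, as described for wikipedia and freebase
    return []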
findings of the 2011 workshop on statistical machine translation this paper presents the results of the wmt11 shared tasks which included a translation task a system combination task and a task for machine translation evaluation metrics we conducted a largescale manual evaluation of 148 machine translation systems and 41 system combination entries we used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 21 evaluation metrics this year featured a haitian creole to english task translating sms messages sent to an emergency response service in the aftermath of the haitian earthquake we also conducted a pilot tunable metrics task to test whether optimizing a fixed system to different metrics would result in perceptibly different translation quality this paper presents the results of the shared tasks of the workshop on statistical machine translation which was held at emnlp 2011this workshop builds on five previous wmt workshops the workshops feature three shared tasks a translation task between english and other languages a task to combine the output of multiple machine translation systems and a task to predict human judgments of translation quality using automatic evaluation metricsthe performance for each of these shared tasks is determined through a comprehensive human evaluationthere were a two additions to this years workshop that were not part of previous workshops haitian creole featured task in addition to translation between european language pairs we featured a new translation task translating haitian creole sms messages that were sent to an emergency response hotline in the immediate aftermath of the 2010 haitian earthquakethe goal of this task is to encourage researchers to focus on challenges that may arise in future humanitarian criseswe invited will lewis rob munro and stephan vogel to publish a paper about their experience developing translation technology in response to the crisis they provided the data used in the haitian creole featured translation taskwe hope that the introduction of this new dataset will provide a testbed for dealing with low resource languages and the informal language usage found in sms messages tunable metric shared task we conducted a pilot of a new shared task to use evaluation metrics to tune the parameters of a machine translation systemalthough previous workshops have shown evaluation metrics other than bleu are more strongly correlated with human judgments when ranking outputs from multiple systems bleu remains widely used by system developers to optimize their system parameterswe challenged metric developers to tune the parameters of a fixed system to see if their metrics would lead to perceptibly better translation quality for the systems resulting outputthe primary objectives of wmt are to evaluate the state of the art in machine translation to disseminate common test sets and public training data with published performance numbers and to refine evaluation methodologies for machine translationas with previous workshops all of the data translations and collected human judgments are publicly available1 we hope these datasets form a valuable resource for research into statistical machine translation system combination and automatic evaluation of translation qualitythe recurring task of the workshop examines translation between english and four other languages german spanish french and czechwe created a test set for each language pair by translating newspaper articleswe 
additionally provided training data and two baseline systemsthe test data for this years task was created by hiring people to translate news articles that were drawn from a variety of sources from early december 2010a total of 110 articles were selected in roughly equal amounts from a variety of czech english french german and spanish news sites2 czech aktualnecz novinkycz ihnedcz idnescz french canoe le devoir le monde les echos liberation spanish abces cinco dias el periodico milenio noroeste english economist los angeles times new york times washington post german faz frankfurter rundschau financial times deutschland der spiegel suddeutsche zeitung the translations were created by the professional translation agency ceet3 all of the translations were done directly and not via an intermediate languagealthough the translations were done professionally in some cases errors still cropped upfor instance in parts of the englishfrench translations some of the english source remains in the french reference as if the translator forgot to delete itas in past years we provided parallel corpora to train translation models monolingual corpora to train language models and development sets to tune system parameterssome statistics about the training materials are given in figure 1to lower the barrier of entry for newcomers to the field we provided two open source toolkits for phrasebased and parsingbased statistical machine translation we received submissions from 56 groups across 37 institutions as listed in tables 1 2 and 3we also included two commercial offtheshelf mt systems two online statistical mt systems and five online rulebased mt systemswe note that these nine companies did not submit entries themselves and are therefore anonymized in this paperrather their entries were created by translating the test data via their web interfaces4 the data used to construct these systems is not subject to the same constraints as the shared task participantsit is possible that part of the reference translations that were taken from online news sites could have been included in the online systems models for instancewe therefore categorize all commercial systems as unconstrained when evaluating the resultsin total we had 148 primary system entries and 60 contrastive entriesthese were made available to distinct words is based on the provided tokenizerparticipants in the system combination shared taskcontinuing our practice from last years workshop we separated the test set into a tuning set and a final heldout test set for system combinationsthe tuning portion was distributed to system combination participants along with reference translations to aid them set any system parametersin the european language pairs the tuning set consisted of 1003 segments taken from 37 documents whereas the test set consisted of 2000 segments taken from 73 documentsin the haitian creole task the split was 674 segments for tuning and 600 for testingtable 2 lists the 10 participants in the system combination taskthe featured translation task of wmt11 was to translate haitian creole sms messages into englishthese text messages were sent by people in haiti in the aftermath of the january 2010 earthquakein the wake of the earthquake much of the countrys conventional emergency response services failedsince cell phone towers remained standing after the earthquake text messages were a viable mode of communicationmunro describes how a textmessagebased emergency reporting system was set up by a consortium of volunteer organizations named 
mission 4636 after a free sms short code telephone number that they establishedthe sms messages were routed to a system for reporting trapped people and other emergenciessearch and rescue teams within haiti including the us military recognized the quantity and reliability of actionable information in these messages and used them to provide aidthe majority of the sms messages were written in haitian creole which was not spoken by most of first responders deployed from overseasa distributed online translation effort was established drawing volunteers from haitian creole and frenchspeaking communities around the worldthe volunteers not only translated messages but also categorized them and pinpointed them on a map5 collaborating online they employed their local knowledge of locations regional slang abbreviations and spelling variants to process more than 40000 messages in the first six weeks alonefirst responders indicated that this volunteer effort helped to save hundreds of lives and helped direct the first food and aid to tens of thousandssecretary of state clinton described one success of the mission 4636 programthe technology community has set up interactive maps to help us identify needs and target resourcesand on monday a sevenyearold girl and two women were pulled from the rubble of a collapsed supermarket by an american searchandrescue team after they sent a text message calling for help ushahiditufts described anotherthe world food program delivered food to an informal camp of 2500 people having yet to receive food or water in diquini to a location that 4636 had identified for them in parallel with rob munros crowdsourcing translation efforts the microsoft translator team developed a haitian creole statistical machine translation engine from scratch in a compressed timeframe despite the impressive number of translations completed by volunteers machine translation was viewed as a potentially useful tool for higher volume applications or to provide translations of english medical documents into haitian creolethe microsoft translator team quickly assembled parallel data from a number of sources including mission 4636 and from the archives of carnegie mellons diplomat project through a series of rapid prototyping efforts the team improved their system to deal with nonstandard orthography reduced pronouns and sms shorthandthey deployed a functional translation system to relief workers in the field in less than 5 days impressive even when measured against previous rapid mt development efforts like darpas surprise language exercise we were inspired by the efforts of rob munro and will lewis on translating haitian creole in the aftermath of the disaster so we worked with them to create a featured task at wmt11we thank them for generously sharing the data they assembled in their own effortswe invited rob munro will lewis and stephan vogel to speak at the workshop on the topic of developing translation technology for future crises and they recorded their thoughts in an invited publication for the wmt11 featured translation task we anonymized the sms haitian creole messages along with the translations that the mission 4636 volunteers createdexamples of these messages are given in table 4the goal of anonymizing the sms data was so that it may be shared with researchers who are developing translation and mapping technologies to support future emergency relief efforts and social developmentwe ask that any researcher working with these messages to be aware that they are actual communications sent 
by people in need in a time of crisisresearchers who use this data are asked to be cognizant of the following should be to understand how we can better respond to future crisesparticipants who received the haitian creole data for wmt11 were given anonymization guidelines mwen se firstname mwen gen twaset ki mouri mwen mande nou ed pou nou edem map tan repons i am firstname i have three sisters who have diedi ask help for us i await your responseki kote yap bay manje where are they giving out foodeske lekol kolej marie anne krazemesi was the college marie anne school destroyedthank younou pa ka anpeche moustik yo mode nou paske yo anpilwe cannot prevent the mosquitoes from biting because there are so many tanpri kthem ap kase mwen pa ka pran nouvel manmanmplease heart is breaking because i have no news of my mother4636opital medesen san fwontie delmas 19 la femenopital sen lwi gonzag nan delma 33 pran an chaj gratwitman tout moun ki malad ou blese mwen resevoua mesaj nou yo 5 sou 5 men mwen ta vle di yon bagay kile e koman nap kapab fthem jwin ed sa yo pou moune b la kay mwen ki sinistwe adres la se 4636 the doctors without borders hospital in delmas 19 is closedthe saint louis gonzaga hospital in delmas 33 is taking in sick and wounded people for free i received your message 55 but i would like to ask one thing when and how will you be able to get the aid to me for the people around my house who are victims of the earthquakethe address is sil vous plait map chehe lastnamefirstnamedi yo relem nan phonenumbermwen se lastname firstname bonswa mwen rele firstname lastname kay mwen krase mwen pagin anyin poum mange ak fanmim tampri di yon mo pou mwen fem jwen yon tante tou ak mangemrete n i am looking for lastnamefirstnametell him to call me at phonenumber i am lastname firstname hello my name is firstname lastnamemy house fell down i have had nothing to eat and i am hungryplease help me find foodi live mw rele firstname mw fe mason epi mw abite laplenyo dim minustah ap bay djob mason ki kote pou mw ta pase si mw ta vle jwenn nan djob sa yosouple mande lapolis pou fe on ti pase nan magloire ambroise prolonge zone muler ak cadet jeremie ginyin jen gason ki ap pase nan zone sa yo e ki agresii am a young student in computer science who has suffered a lot during and after the earthquake of january 12thall my family has died and i feel alonei wanted to go livemy name is firstname i am a construction worker and i live in la plainei heard that the minustah was giving jobs to construction workerswhat do i have to go to find one of these jobs please ask the police to go to magloire ambroise going towards the muler area and cadet jeremie because there are very aggressive young men in these areas where can people get food to eat outside of the capital because since january 12th they have sent food for the people but we never received anythingwe are dying of hunger mwen se firstnamelastname mwen nan aken mwen se yon jen ki ansent mwen te genyen yon paran ki tap ede li mouri potoprens mwen pral akouye nan komansman feviye alongside the sms datathe wmt organizers requested that if they discovered messages with incorrect or incomplete anonymization that they notify us and correct the anonymization using the version control repositoryto define the shared translation task we divided the sms messages into an indomain training set along with designated dev devtest and test setswe coordinated with microsoft and cmu to make available additional outofdomain parallel corporadetails of the data are given in table 5in 
addition to this data participants in the featured task were allowed to use any of the data provided in the standard translation task as well as linguistic tools such as taggers parsers or morphological analyzerswe provided two sets of testing and development dataparticipants used their systems to translate two test sets consisting of 1274 unseen haitian creole sms messagesone of the test sets contains the raw sms messages as they were sent and the other contains messages that were cleaned up by human posteditorsthe english side is the same in both cases and the only difference is the haitian creole input sentencesthe posteditors were haitian creole language informants hired by microsoft researchthey provided a number of corrections to the sms messages including expanding sms shorthands correcting spellinggrammarcapitalization restoring diacritics that were left out of the original message and cleaning up accented characters that were lost when the message was transmitted in the wrong encodingsil vou ple ede mwen avek moun ki viktim yo nan tranbleman de te aki kite potoprins ki vini nan provins mwen ede ak ti kob mwen te ginyin kounie a 4636 manje vin pi che nan pap apre tranbleman tea mamit diri ap vann 250gd kounye sete 200gd avanmayia 125gd avan sete 100gd silvouple ede mwen avek moun ki viktim yo nan tranblemannte a ki kite potoprens ki vini nan pwovens mwen ede ak ti kob mwen te genyen kounye a 4636 manje vin pi che nan pap apre tranblemannte a mamit diri ap vann 250gd kounye a sete 200gd avanmayia 125gd avan sete 100gdfor the test and development sets the informants also edited the english translationsfor instance there were cases where the original crowdsourced translation summarized the content of the message instead of translating it instances where parts of the source were omitted and where explanatory notes were addedthe editors improved the translations so that they were more suitable for machine translation making them more literal correcting disfluencies on the english side and retranslating them when they were summarieswe are in the area of petit goave we would like we need tents and medication for flucolds we are in the area of petit goave we would like to receive assistance however it should not be the way i see the minustah guys are handling the peoplewe need lots of tents and medication for flucolds and fever the edited english is provided as the reference for both the clean and the raw sets since we intend that distinction to refer to the form that the source language comes in rather than the target languagetables 47 and 48 in the appendix show a significant difference in the translation quality between the clean and the raw test setsin most cases systems output for the raw condition was 4 bleu points lower than for the clean conditionwe believe that the difference in performance on the raw vs cleaned test sets highlight the importance of handling noisy input dataall of the indomain training data is in the raw formatthe original sms messages are unaltered and the translations are just as the volunteered provided themin some cases the original sms messages are written in french or english instead of haitian creole or contain a mixture of languagesit may be possible to further improve the quality of machine translation systems trained from this data by improving the quality of the data itselfthe goals of the haitian creole to english translation task were there are many challenges in translating noisy data in a low resource language and there are a variety of strategies 
that might be considered to attempt to tackle themfor instance it is our hope that by introducing this data as a shared challenge at wmt11 that we will establish a useful community resource so that researchers may explore these challenges and publish about them in the futureas with past workshops we placed greater emphasis on the human evaluation than on the automatic evaluation metric scoresit is our contention that automatic measures are an imperfect substitute for human assessment of translation qualitytherefore we define the manual evaluation to be primary and use the human judgments to validate automatic metricsmanual evaluation is time consuming and it requires a large effort to conduct on the scale of our workshopwe distributed the workload across a number of people including sharedtask participants interested volunteers and a small number of paid annotators more than 130 people participated in the manual evaluation with 91 people putting in more than an hours worth of effort and 29 putting in more than four hoursthere was a collective total of 361 hours of laborwe asked annotators to evaluate system outputs by ranking translated sentences relative to each otherthis was our official determinant of translation qualitythe total number of judgments collected for the different ranking tasks is given in table 6we performed the manual evaluation of the individual systems separately from the manual evaluation of the system combination entries rather than comparing them directly against each otherlast years results made it clear that there is a large gap in performance between the two groupsthis year we opted to reduce the number of pairwise comparisons with the hope that we would be more likely to find statistically significant differences between the systems in the same groupsto that same end we also eliminated the editingacceptability task that was featured in last years evaluation instead we had annotators focus solely on the system ranking taskranking translations relative to each other is a reasonably intuitive taskwe therefore kept the instructions simple you are shown a source sentence followed by several candidate translationsyour task is to rank the translations from best to worst each screen for this task involved judging translations of three consecutive source segmentsfor each source segment the annotator was shown the outputs of five submissions and asked to rank themwith the exception of a few tasks in the system combination track there were many more than 5 systems participating in any given taskup to 23 for the englishgerman individual systems trackrather than attempting to get a complete ordering over the systems we instead relied on random selection and a reasonably large sample size to make the comparisons fairwe use the collected rank labels to assign each system a score that reflects how highly that system was usually ranked by the annotatorsthe score for some system a reflects how frequently it was judged to be better than or equal to other systemsspecifically each block in which a appears includes four implicit pairwise comparisons a is rewarded once for each of the four comparisons in which a wins or tiesas score is the number of such winning pairwise comparisons divided by the total number of pairwise comparisons involving athe system scores are reported in section 5appendix a provides detailed tables that contain pairwise headtohead comparisons between pairs of systemswe were interested in determining the inter and intraannotator agreement for the ranking task 
since a reasonable degree of agreement must exist to support our process as a valid evaluation setupto ensure we had enough data to measure agreement we purposely designed the sampling of source segments and translations shown to annotators in a way that ensured some items would be repeated both within the screens completed by an individual annotator and across screens completed by different annotatorswe did so by ensuring that 10 of the generated screens are exact repetitions of previously generated screen within the same batch of screensfurthermore even within the other 90 we ensured that a source segment appearing in one screen appears again in two more screens those two details intentional repetition of source sentences and intentional repetition of system outputs ensured we had enough data to compute meaningful inter and intraannotator agreement rateswe measured pairwise agreement among annotators using cohens kappa coefficient which is defined as where p is the proportion of times that the annotators agree and p is the proportion of time that they would agree by chancenote that n is basically a normalized version of p one which takes into account how meaningful it is for annotators to agree with each other by incorporating pnote also that n has a value of at most 1 with higher rates of agreement resulting in higher n the above definition of n is actually used by several definitions of agreement measures which differ in how p and p are computedwe calculate p by examining all pairs of systems which had been judged by two or more judges and calculating the proportion of time that they agreed that a b a b or a b pnusteslaf goes from being a bottom entry to being a top entry with cusemposbleu also benefiting changing from the middle to the top rankeither way we see that a bleu tuned system is performing just as well as systems tuned to the other metricsthis might be an indication that some work remains to be done before a move away from bleutuning is fully justifiedon the other hand the close results might be an artifact of the language pair choiceurduenglish translation is still a relatively difficult problem and mt outputs are still of a relatively low qualityit might be the case that human annotators are simply not very good at distinguishing one bad translation from another bad translation especially at such a finegrained levelit is worth noting that the designers of the tesla family replicated the setup of this tunable metric task for three european language pairs and found that human judges did perceive a difference in quality between a teslatuned system and a bleu tuned system this years effort was a pilot of the task so we intentionally limited the task to some degree to make it easier to iron out the detailspossible changes for next year includeas in previous editions of this workshop we carried out an extensive manual and automatic evaluation of machine translation performance for translating from european languages into english and vice versathe number of participants grew slightly compared to previous editions of the wmt workshop with 36 groups from 27 institutions participating in the translation task of wmt11 10 groups from 10 institutions participating in the system combination task and 10 groups from 8 institutions participating in the featured translation task this year was also the first time that we included a language pair with noneuropean source language and with very limited resources for the source language sidealso the genre of the haitianenglish task differed from 
previous wmt tasks as the haitianenglish translations are sms messageswmt11 also introduced a new shared task focusing on evaluation metrics to tune the parameters of a statistical machine translation system in which 6 groups have participatedas in previous years all data sets generated by this workshop including the human judgments system translations and automatic scores are publicly available for other researchers to analyze8this work was supported in parts by the euromatrixplus project funded by the european commission the gale program of the us defense advanced research projects agency contract nohr001106c0022 the us national science foundation under grant iis0713448 and the cosyne project fp7ict4248531 funded by the european commissionthe views and findings are the authors alonea big thank you to ondˇrej bojar simon carter christian federmann will lewis rob munro and herve saintamand and to the shared task participantstables 1938 show pairwise comparisons between systems for each language pairthe numbers in each of the tables cells indicate the percentage of times that the system in that column was judged to be better than the system in that rowbolding indicates the winner of the two systemsthe difference between 100 and the sum of the complementary cells is the percent of time that the two systems were judged to be equalbecause there were so many systems and data conditions the significance of each pairwise comparison needs to be quantifiedwe applied the sign test to measure which comparisons indicate genuine differences in the following tables indicates statistical significance at p 010 t indicates statistical significance at p 005 and t indicates statistical significance at p 001 according to the sign testtables 3948 give the automatic scores for each of the systems
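To make the ranking-based system score described earlier concrete (each 5-way ranking block yields four implicit pairwise comparisons per system, and a system is rewarded for every comparison it wins or ties), here is a minimal Python sketch. The block representation (a dict from system name to its rank, 1 = best), the function name wmt11_scores, and the example system names are illustrative assumptions, not taken from the workshop's actual evaluation scripts.

```python
from itertools import combinations
from collections import defaultdict

def wmt11_scores(blocks):
    """WMT11-style score: wins-or-ties divided by total pairwise
    comparisons involving each system.

    blocks: list of dicts mapping system name -> rank (1 = best)
    within one 5-way ranking block.
    """
    wins_or_ties = defaultdict(int)
    comparisons = defaultdict(int)
    for block in blocks:
        for a, b in combinations(block, 2):
            comparisons[a] += 1
            comparisons[b] += 1
            if block[a] <= block[b]:   # a ranked at least as well as b
                wins_or_ties[a] += 1
            if block[b] <= block[a]:   # b ranked at least as well as a
                wins_or_ties[b] += 1
    return {s: wins_or_ties[s] / comparisons[s] for s in comparisons}

# Hypothetical usage with made-up system names and ranks:
blocks = [{"sysA": 1, "sysB": 2, "sysC": 2, "sysD": 4, "sysE": 5}]
print(wmt11_scores(blocks))  # sysA scores 1.0, sysB and sysC 0.75, ...
```

With many randomly sampled blocks, this score estimates how often each system is judged at least as good as a competitor, which is the quantity reported in the results tables.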
W11-2103
Findings of the 2011 Workshop on Statistical Machine Translation. This paper presents the results of the WMT11 shared tasks, which included a translation task, a system combination task, and a task for machine translation evaluation metrics. We conducted a large-scale manual evaluation of 148 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 21 evaluation metrics. This year featured a Haitian Creole to English task translating SMS messages sent to an emergency response service in the aftermath of the Haitian earthquake. We also conducted a pilot tunable metrics task to test whether optimizing a fixed system to different metrics would result in perceptibly different translation quality. Despite avoiding language-specific resources and using only the training data provided by the workshop, an extensive manual evaluation determined that the outputs produced were of significantly higher quality than both statistical and rule-based systems that made use of language-specific resources.
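The inter-annotator agreement analysis in the findings paper above relies on Cohen's kappa, κ = (P(A) − P(E)) / (1 − P(E)), where P(A) is observed agreement over pairwise judgments (A < B, A = B, A > B) and P(E) is chance agreement. A minimal sketch follows; estimating P(E) from the pooled label distribution is one common choice and an assumption here, not necessarily the exact procedure used in the paper.

```python
def cohens_kappa(pairwise_labels):
    """kappa = (P(A) - P(E)) / (1 - P(E)).

    pairwise_labels: list of (label_annotator1, label_annotator2)
    tuples, each label one of '<', '=', '>' for the same system pair.
    """
    n = len(pairwise_labels)
    # Observed agreement: fraction of pairs where both annotators agree.
    p_a = sum(l1 == l2 for l1, l2 in pairwise_labels) / n
    # Chance agreement estimated from the pooled label distribution
    # (an assumption; other P(E) estimates are possible).
    pooled = [label for pair in pairwise_labels for label in pair]
    p_e = sum((pooled.count(lab) / len(pooled)) ** 2
              for lab in ('<', '=', '>'))
    return (p_a - p_e) / (1 - p_e)
```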
meteor 13 automatic metric for reliable optimization and evaluation of machine translation systems this paper describes meteor 13 our submission to the 2011 emnlp workshop on statistical machine translation automatic evaluation metric tasks new metric features include improved text normalization higherprecision paraphrase matching and discrimination between content and function words we include ranking and adequacy versions of the metric shown to have high correlation with human judgments of translation quality as well as a more balanced tuning version shown to outperform bleu in minimum error rate training for a phrasebased urduenglish system the meteor1 metric has been shown to have high correlation with human judgments in evaluations such as the 2010 acl workshop on statistical machine translation and nist metrics matr however previous versions of the metric are still limited by lack of punctuation handling noise in paraphrase matching and lack of discrimination between word typeswe introduce new resources for all wmt languages including text normalizers filtered paraphrase tables and function word listswe show that the addition of these resources to meteor allows tuning versions of the metric that show higher correlation with human translation rankings and adequacy scores on unseen test datathe evaluation resources are modular usable with any other evaluation metric or mt softwarewe also conduct a mt system tuning experiment on urduenglish data to compare the effectiveness of using multiple versions of meteor in minimum error rate trainingwhile versions tuned to various types of human judgments do not perform as well as the widely used bleu metric a balanced tuning version of meteor consistently outperforms bleu over multiple endtoend tunetest runs on this data setthe versions of meteor corresponding to the translation evaluation task submissions are described in sections 3 through 5 while the submission to the tunable metrics task is described in section 6whereas previous versions of meteor simply strip punctuation characters prior to scoring version 13 includes a new text normalizer intended specifically for translation evaluationthe normalizer first replicates the behavior of the tokenizer distributed with the moses toolkit including handling of nonbreaking prefixesafter tokenization we add several rules for normalization intended to reduce meaningequivalent punctuation styles to common formsthe following two rules are particularly helpful consider the behavior of the moses tokenizer and meteor normalizers given a reference translation containing the phrase yousbased organization of these only the meteor 13 normalization allows metrics to match all of the following stylizations while intended for meteor evaluation use of this normalizer is a suitable preprocessing step for other metrics to improve accuracy when reference sentences are stylistically different from hypothesesthe original meteor paraphrase tables are constructed using the phrase table pivoting technique described by bannard and callisonburch many paraphrases suffer from word accumulation the appending of unaligned words to one or both sides of a phrase rather than finding a true rewording from elsewhere in parallel datato improve the precision of the paraphrase tables we filter out all cases of word accumulation by removing paraphrases where one phrase is a substring of the othertable 1 lists the number of phrase pairs found in each paraphrase table before and after filteringin addition to improving accuracy the 
reduction of phrase table sizes also reduces the load time and memory usage of the meteor paraphrase matcherthe tables are a modular resource suitable for other mt or nlp softwarecommonly used metrics such as bleu and earlier versions of meteor make no distinction between content and function wordsthis can be problematic for rankingbased evaluations where two system outputs can differ by a single word such as mistranslating either a main verb or a determinerto improve meteors discriminative power in such cases we introduce a function word list for each wmt language and a new 6 parameter to adjust the relative weight given to content words versus function words function word lists are estimated according to relative frequency in large monolingual corporafor each language we pool freely available wmt 2011 data consisting of europarl news and news commentary dataany word with relative frequency of 103 or greater is added to the function word listtable 2 lists corpus size and number of function words learned for each languagein addition to common words punctuation symbols consistently rise to the tops of function word listsmeteor evaluates translation hypotheses by aligning them to reference translations and calculating sentencelevel similarity scoresthis section describes our extended version of the metricfor a hypothesisreference pair the search space of possible alignments is constructed by identifying all possible matches between the two sentences according to the following matchers exact match words if their surface forms are identicalstem stem words using a languageappropriate snowball stemmer and match if the stems are identicalsynonym match words if they share membership in any synonym set according to the wordnet databaseparaphrase match phrases if they are listed as paraphrases in the paraphrase tables described in section 22all matches are generalized to phrase matches with a start position and phrase length in each sentenceany word occurring less than length positions after a match start is considered covered by the matchthe exact and paraphrase matchers support all five wmt languages while the stem matcher is limited to english french german and spanish and the synonym matcher is limited to englishonce matches are identified the final alignment is resolved as the largest subset of all matches meeting the following criteria in order of importance given an alignment the metric score is calculated as followscontent and function words are identified in the hypothesis and reference according to the function word lists described in section 23for each of the matchers count the number of content and function words covered by matches of this type in the hypothesis mi and reference micalculate weighted precision and recall using matcher weights and contentfunction word weight the parameterized harmonic mean of p and are is then calculated to account for gaps and differences in word order a fragmentation penalty is calculated using the total number of matched words and number of chunks the parameters α β γ δ and wiwn are tuned to maximize correlation with human judgmentsthe 2009 and 2010 wmt shared evaluation data sets are made available as development data for wmt 2011data sets include mt system outputs reference translations and human rankings of translation qualitytable 3 lists the number of judgments for each evaluation and combined totalsto evaluate a metrics performance on a data set we count the number of pairwise translation rankings preserved when translations are reranked by metric 
scorewe then compute kendalls τ correlation coefficient as follows for each wmt language we learn meteor parameters that maximize t over the combined 2009 and 2010 data sets using an exhaustive parametric sweepthe resulting parameters listed in table 4 are used in the default ranking version of meteor 13for each language the 6 parameter is above 05 indicating a preference for content words over function wordsin addition the fragmentation penalties are generally less severe across languagesthe additional features in meteor 13 allow for more balanced parameters that distribute responsibility for penalizing various types of erroneous translationsto compare meteor 13 against previous versions of the metric on the task of evaluating mt system outputs we tune a version for each language on 2009 wmt data and evaluate on 2010 datathis replicates the 2010 wmt shared evaluation task allowing comparison to meteor 12table 5 lists correlation of each metric version with ranking judgments on tune and test datameteor 13 shows significantly higher correlation on both tune and test data for english french and spanish while czech and german demonstrate overfitting with higher correlation on tune data but lower on test datathis overfitting effect is likely due to the limited number of systems providing translations into these languages and the difficulty of these target languages leading to significantly noisier translations skewing the space of metric scoreswe believe that tuning to combined 2009 and 2010 data will counter these issues for the official ranking versionto evaluate the impact of new features on other evaluation tasks we follow denkowski and lavie tuning versions of meteor to maximize lengthweighted sentencelevel pearsons r correlation coefficient with adequacy and hter scores of translationsdata sets include 2008 and 2009 nist open machine translation evaluation adequacy data and gale p2 and p3 hter data for each type of judgment metric versions are tuned and tested on each year and scores are comparedwe compare meteor 13 results with those from version 12 with results shown in table 6for both adequacy data sets meteor 13 significantly outperforms version 12 on both tune and test datathe version tuned on mt09 data is selected as the official adequacy version of meteor 13hter versions either show no improvement or degradation due to overfittingexamination of the optimal hter parameter sets reveals a mismatch between evaluation metric and human judgment typeas hter evaluation is ultimately limited by the ter aligner there is no distinction between content and function words and words sharing stems are considered nonmatchesas such these features do not help meteor improve correlation but rather act as a source of additional possibility for overfittingthe 2011 wmt tunable metrics task consists of using zmert to tune a prebuilt urduenglish joshua system to a new evaluation metric on a tuning set with 4 reference translations and decoding a test set using the resulting parameter setas this task does not provide a devtest set we select a version of meteor by exploring the effectiveness of using multiple versions of the metric to tune phrasebased translation systems for the same language pairwe use the 2009 nist open machine translation evaluation urduenglish parallel data plus 900m words of monolingual data from the english gigaword corpus to build a standard moses system as followsparallel data is word aligned using the mgiza toolkit and alignments are symmetrized using the growdiagfinaland 
heuristicphrases are extracted using standard phrasebased heuristics and used to build a translation table and lexicalized reordering modela standard sri 5gram language model is estimated from monolingual datausing zmert we tune this system to baseline metrics as well as the versions of meteor discussed in previous sectionswe also tune to a balanced tuning version of meteor designed to minimize biasthis data set provides a single set of reference translations for mertto account for the variance of mert we run endtoend tuning 3 times for each metric and report the average results on two unseen test sets newswire and weblogtest set translations are evaluated using bleu ter and meteor 12the parameters for each meteor version are listed in table 7 while the results are listed in table 8the results are fairly consistent across both test sets the tuning version of meteor outperforms bleu across all metrics while versions of meteor that perform well on other tasks perform poorly in tuningthis illustrates the differences between evaluation and tuning tasksin evaluation tasks metrics are engineered to score 1best translations from systems most often tuned to bleuas listed in table 7 these parameters are often skewed to emphasize the differences between system outputsin the tuning scenario mert optimizes translation quality with respect to the tuning metricif a metric is biased it will guide the mert search toward pathological translations that receive lower scores across other metricsbalanced between precision and recall content and function words and word choice versus fragmentation the tuning version of meteor is significantly less susceptible to gamingchosen as the official submission for wmt 2011 we believe that this tuning version of meteor will further generalize to other tuning scenarioswe have presented ranking adequacy and tuning versions of meteor 13the ranking and adequacy versions are shown to have high correlation with human judgments except in cases of overfitting due to skewed tuning datawe believe that these overfitting issues are lessened when tuning to combined 2009 and 2010 data due to increased variety in translation characteristicsthe tuning version of meteor is shown to outperform bleu in minimum error rate training of a phrasebased system on small urduenglish data and we believe that it will generalize well to other tuning scenariosthe source code and all resources for meteor 13 and the version of zmert with meteor integration will be available for download from the meteor website
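As a concrete illustration of the scoring step described above, the sketch below combines weighted precision and recall into the parameterized harmonic mean and applies the fragmentation penalty. The formulas follow the standard Meteor formulation, F = P·R / (α·P + (1 − α)·R), penalty = γ·(ch/m)^β, score = (1 − penalty)·F; any simplification relative to the exact Meteor 1.3 implementation is mine.

```python
def meteor_sentence_score(p, r, chunks, matches, alpha, beta, gamma):
    """Final Meteor-style sentence score (sketch).

    p, r    : weighted precision and recall over content/function words
    chunks  : number of contiguous matched chunks (ch)
    matches : total number of matched words (m)
    alpha, beta, gamma : tuned parameters
    """
    if matches == 0 or p == 0.0 or r == 0.0:
        return 0.0
    # Parameterized harmonic mean of precision and recall.
    f_mean = (p * r) / (alpha * p + (1 - alpha) * r)
    # Fragmentation penalty accounts for gaps and word-order differences.
    penalty = gamma * (chunks / matches) ** beta
    return (1 - penalty) * f_mean
```

Higher γ and β make the metric punish scattered, reordered matches more heavily, which is why the tuned ranking parameters in Table 4 trade off fragmentation against the content/function word weight.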
W11-2107
Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems. This paper describes Meteor 1.3, our submission to the 2011 EMNLP Workshop on Statistical Machine Translation automatic evaluation metric tasks. New metric features include improved text normalization, higher-precision paraphrase matching, and discrimination between content and function words. We include ranking and adequacy versions of the metric, shown to have high correlation with human judgments of translation quality, as well as a more balanced tuning version, shown to outperform BLEU in minimum error rate training for a phrase-based Urdu-English system.
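The metric evaluation in the Meteor paper counts how many human pairwise rankings are preserved when translations are re-ranked by metric score and summarizes the result with Kendall's τ. A minimal sketch using the usual WMT-style formula τ = (concordant − discordant) / (concordant + discordant) follows; treating metric-score ties as discordant is an assumption here, not necessarily the paper's exact convention.

```python
def kendalls_tau(human_pairs, metric_score):
    """Kendall's tau between human preferences and metric scores.

    human_pairs : list of (better, worse) translation-id pairs where
                  humans expressed a strict preference
    metric_score: dict mapping translation id -> metric score
    """
    concordant = discordant = 0
    for better, worse in human_pairs:
        if metric_score[better] > metric_score[worse]:
            concordant += 1
        else:
            # Ties and reversals both counted against the metric
            # (an assumption for this sketch).
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)
```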
kenlm faster and smaller language model queries we present kenlm a library that implements two data structures for efficient language model queries reducing both time and costs the structure uses linear probing hash tables and is designed for speed compared with the widely srilm our is 24 times as fast while using 57 of the mem the structure is a trie with bitlevel packing sorted records interpolation search and optional quantization aimed lower memory consumption simultaneously uses less memory than the smallest lossless baseline and less cpu than the baseline our code is threadsafe and integrated into the moses cdec and joshua translation systems this paper describes the several performance techniques used and presents benchmarks against alternative implementations language models are widely applied in natural language processing and applications such as machine translation make very frequent queriesthis paper presents methods to query ngram language models minimizing time and space costsqueries take the form p where wn1 is an ngrambackoffsmoothed models estimate this probability based on the observed entry with longest matching history wnf returning where the probability p and backoff penalties b are given by an alreadyestimated modelthe problem is to store these two values for a large and sparse set of ngrams in a way that makes queries efficientmany packages perform language model queriesthroughout this paper we compare with several packages srilm 1512 is a popular toolkit based on tries used in several decodersirstlm 56002 is a sorted trie implementation designed for lower memory consumptionmitlm 04 is mostly designed for accurate model estimation but can also compute perplexityrandlm 02 stores largescale models in less memory using randomized data structuresberkeleylm revision 152 implements tries based on hash tables and sorted arrays in java with lossy quantizationsheffield guthrie and hepple explore several randomized compression techniques but did not release codetpt germann et al describe tries with better locality properties but did not release codethese packages are further described in section 3we substantially outperform all of them on query speed and offer lower memory consumption than lossless alternativesperformance improvements transfer to the moses cdec and joshua translation systems where our code has been integratedour opensource implementation is also available for download as a standalone package with minimal dependencieswe implement two data structures probing designed for speed and trie optimized for memorythe set of ngrams appearing in a model is sparse and we want to efficiently find their associated probabilities and backoff penaltiesan important subproblem of language model storage is therefore sparse mapping storing values for sparse keys using little memory then retrieving values given keys using little timewe use two common techniques hash tables and sorted arrays describing each before the model that uses the techniquehash tables are a common sparse mapping technique used by srilms default and berkeleylms hashed variantkeys to the table are hashed using for example austin applebys murmurhash2 to integers evenly distributed over a large rangethis range is collapsed to a number of buckets typically by taking the hash modulo the number of bucketsentries landing in the same bucket are said to collideseveral methods exist to handle collisions we use linear probing because it has less memory overhead when entries are smalllinear probing places at most one entry 
in each bucketwhen a collision occurs linear probing places the entry to be inserted in the next empty bucket wrapping around as necessarytherefore a populated probing hash table consists of an array of buckets that contain either one entry or are emptynonempty buckets contain an entry belonging to them or to a preceding bucket where a conflict occurredsearching a probing hash table consists of hashing the key indexing the corresponding bucket and scanning buckets until a matching key is found or an empty bucket is encountered in which case the key does not exist in the tablelinear probing hash tables must have more buckets than entries or else an empty bucket will never be foundthe ratio of buckets to entries is controlled by space multiplier m 1as the name implies space is o and linear in the number of entriesthe fraction of buckets that are empty is m1 m so average lookup time is o and crucially constant in the number of entrieswhen keys are longer than 64 bits we conserve space by replacing the keys with their 64bit hasheswith a good hash function collisions of the full 64bit hash are exceedingly rare one in 266 billion queries for our baseline model will falsely find a key not presentcollisions between two keys in the table can be identified at model building timefurther the special hash 0 suffices to flag empty bucketsthe probing data structure is a rather straightforward application of these hash tables to store ngram language modelsunigram lookup is dense so we use an array of probability and backoff valuesfor 2 k then you pivotinterpolation search is therefore a form of binary search with better estimates informed by the uniform key distributionif the key distributions range is also known then interpolation search can use this information instead of reading a0 and aa 1 to estimate pivots this optimization alone led to a 24 speed improvementthe improvement is due to the cost of bitlevel reads and avoiding reads that may fall in different virtual memory pagesvocabulary lookup is a sorted array of 64bit word hashesthe index in this array is the vocabulary identifierthis has the effect of randomly permuting vocabulary identifiers meeting the requirements of interpolation search when vocabulary identifiers are used as keyswhile sorted arrays could be used to implement the same data structure as probing effectively making m 1 we abandoned this implementation because it is slower and larger than a trie implementationthe trie data structure is commonly used for language modelingour trie implements the popular reverse trie in which the last word of an ngram is looked up first as do srilm irstlms inverted variant and berkeleylm except for the scrolling variantfigure 1 shows an examplenodes in the trie are based on arrays sorted by vocabulary identifierwe maintain a separate array for each length n containing all ngram entries sorted in suffix ordertherefore for ngram wn1 all leftward extensions wn0 are an adjacent block in the n 1gram arraythe record for wn1 stores the offset at which its extensions beginreading the following records offset indicates where the block endsthis technique was introduced by clarkson and rosenfeld and is also implemented by irstlm and berkeleylms compressed optionsrilm inefficiently stores 64bit pointersunigram records store probability backoff and an index in the bigram tableentries for 2 n n store a vocabulary identifier probability backoff and an index into the n 1gram tablethe highestorder ngram array omits backoff and the index since these are not 
applicablevalues in the trie are minimally sized at the bit level improving memory consumption over trie implementations in srilm irstlm and berkeleylmgiven ngram counts cnnn1 we use flog2 c1 bits per vocabulary identifier and flog2 cn per index into the table of ngramswhen srilm estimates a model it sometimes removes ngrams but not n 1grams that extend it to the leftin a model we built with default settings 12 of n 1grams were missing their ngram suffixthis causes a problem for reverse trie implementations including srilm itself because it leaves n1grams without an ngram node pointing to themwe resolve this problem by inserting an entry with probability set to an otherwiseinvalid value queries detect the invalid probability using the node only if it leads to a longer matchby contrast berkeleylms hash and compressed variants will return incorrect results based on an n 1gramfloating point values may be stored in the trie exactly using 31 bits for nonpositive log probability and 32 bits for backoff5to conserve memory at the expense of accuracy values may be quantized using q bits per probability and r bits per backoff6we allow any number of bits from 2 to 25 unlike irstlm and berkeleylm to quantize we use the binning method that sorts values divides into equally sized bins and averages within each binthe cost of storing these averages in bits is because there are comparatively few unigrams we elected to store them bytealigned and unquantized making every query fasterunigrams also have 64bit overhead for vocabulary lookupusing cn to denote the number of ngrams total memory consumption of trie in bits is plus quantization tables if usedthe size of trie is particularly sensitive to f1092 c11 so vocabulary filtering is quite effective at reducing model sizesrilm is widely used within academiait is generally considered to be fast with a default implementation based on hash tables within each trie nodeeach trie node is individually allocated and full 64bit pointers are used to find them wasting memorythe compact variant uses sorted arrays instead of hash tables within each node saving some memory but still stores full 64bit pointerswith some minor api changes namely returning the length of the ngram matched it could also be fasterthough this would be at the expense of an optimization we explain in section 41the probing model was designed to improve upon srilm by using linear probing hash tables allocating memory all at once and being easy to compileirstlm is an opensource toolkit for building and querying language modelsthe developers aimed to reduce memory consumption at the expense of timetheir default variant implements a forward trie in which words are looked up in their natural lefttoright orderhowever their inverted variant implements a reverse trie using less cpu and the same amount of memory7each trie node contains a sorted array of entries and they use binary searchcompared with srilm irstlm adds several features lower memory consumption a binary file format with memory mapping caching to increase speed and quantizationour trie implementation is designed to improve upon irstlm using a reverse trie with improved search bit level packing and stateful queriesirstlms quantized variant is the inspiration for our quantized variantunfortunately we were unable to correctly run the irstlm quantized variantthe developers suggested some changes such as building the model from scratch with irstlm but these did not resolve the problemour code has been publicly available and intergrated into moses 
since october 2010later berkeleylm described ideas similar to oursmost similar is scrolling queries wherein lefttoright queries that add one word at a time are optimizedboth implementations employ a state object opaque to the application that carries information from one query to the next we discuss both further in section 42state is implemented in their scrolling variant which is a trie annotated with forward and backward pointersthe hash variant is a reverse trie with hash tables a more memoryefficient version of srilms defaultwhile the paper mentioned a sorted variant code was never releasedthe compressed variant uses block compression and is rather slow as a resulta directmapped cache makes berkeleylm faster on repeated queries but their fastest cached version is still slower than uncached probing even on cachefriendly queriesfor all variants we found that berkeleylm always rounds the floatingpoint mantissa to 12 bits then stores indices to unique rounded floatsthe 1bit sign is almost always negative and the 8bit exponent is not fully used on the range of values so in practice this corresponds to quantization ranging from 17 to 20 total bitslossy compressed models randlm and sheffield offer better memory consumption at the expense of cpu and accuracythese enable much larger models in memory compensating for lost accuracytypical data structures are generalized bloom filters that guarantee a customizable probability of returning the correct answerminimal perfect hashing is used to find the index at which a quantized probability and possibly backoff are storedthese models generally outperform our memory consumption but are much slower even when cachedin addition to the optimizations specific to each datastructure described in section 2 we implement several general optimizations for language modelingapplications such as machine translation use language model probability as a feature to assist in choosing between hypothesesdynamic programming efficiently scores many hypotheses by exploiting the fact that an ngram language model conditions on at most n 1 preceding wordswe call these n 1 words statewhen two partial hypotheses have equal state they can be recombined and thereafter efficiently handled as a single packed hypothesisif there are too many distinct states the decoder prunes lowscoring partial hypotheses possibly leading to a search errortherefore we want state to encode the minimum amount of information necessary to properly compute language model scores so that the decoder will be faster and make fewer search errorswe offer a state function s wn where substring wn is guaranteed to extend in the same way that wn1 does for purposes of language modelingthe state function is integrated into the query process so that in lieu of the query p the application issues query p which also returns sthe returned state s may then be used in a followon query p that extends the previous query by one wordthese make lefttoright query patterns convenient as the application need only provide a state and the word to append then use the returned state to append another word etcwe have modified moses to keep our state with hypotheses to conserve memory phrases do not keep statesyntactic decoders such as cdec build state from null context then store it in the hypergraph node for later extensionlanguage models that contain wi must also contain prefixes wi for 1 g i g k therefore when the model is queried for p but the longest matching suffix is wnf it may return state s wnf since no longer context will be 
foundirstlm and berkeleylm use this state function but it is more strict than necessary so decoders using these packages will miss some recombination opportunitiesstate will ultimately be used as context in a subsequent queryif the context wnf will never extend to the right then no subsequent query will match the full contextif the log backoff of wnf is also zero then wf should be omitted from the statethis logic applies recursively if wnf1 similarly does not extend and has zero log backoff it too should be omitted terminating with a possibly empty contextwe indicate whether a context with zero log backoff will extend using the sign bit 00 for contexts that extend and 00 for contexts that do not extendrandlm and srilm also remove context that will not extend but srilm performs a second lookup in its trie whereas our approach has minimal additional costsection 41 explained that state s is stored by applications with partial hypotheses to determine when they can be recombinedin this section we extend state to optimize lefttoright queriesall language model queries issued by machine translation decoders follow a lefttoright pattern starting with either the begin of sentence token or null context for midsentence fragmentsstoring state therefore becomes a timespace tradeoff for example we store state with partial hypotheses in moses but not with each phraseto optimize lefttoright queries we extend state to store backoff information where m is the minimal context from section 41 and b is the backoff penaltybecause b is a function no additional hypothesis splitting happensas noted in section 1 our code finds the longest matching entry wnf for query p the probability p is stored with wnf and the backoffs are immediately accessible in the provided state sthese are written to the state s and returned so that they can be used for the following querysaving state allows our code to walk the data structure exactly once per queryother packages walk their respective data structures once to find wnf and again to find bf1 i1if necessaryin both cases srilm walks its trie an additional time to minimize context as mentioned in section 41berkeleylm uses states to optimistically search for longer ngram matches first and must perform twice as many random accesses to retrieve backoff informationfurther it needs extra pointers in the trie increasing model size by 40this makes memory usage comparable to our probing modelthe probing model can perform optimistic searches by jumping to any ngram without needing state and without any additional memoryhowever this optimistic search would not visit the entries necessary to store backoff information in the outgoing statethough we do not directly compare state implementations performance metrics in table 1 indicate our overall method is fasteronly irstlm does not support threadingin our case multithreading is trivial because our data structures are readonly and uncachedmemory mapping also allows the same model to be shared across processes on the same machinealong with irstlm and tpt our binary format is memory mapped meaning the file and inmemory representation are the samethis is especially effective at reducing load time since raw bytes are read directly to memoryor as happens with repeatedly used models are already in the disk cachelazy mapping reduces memory requirements by loading pages from disk only as necessaryhowever lazy mapping is generally slow because queries against uncached pages must wait for the diskthis is especially bad with probing because it is based on 
hashing and performs random lookups but it is not intended to be used in lowmemory scenariostrie uses less memory and has better localityhowever trie partitions storage by ngram length so walking the trie reads n disjoint pagestpt has theoretically better locality because it stores ngrams near their suffixes thereby placing reads for a single query in the same or adjacent pageswe do not experiment with models larger than physical memory in this paper because tpt is unreleased factors such as disk speed are hard to replicate and in such situations we recommend switching to a more compact representation such as randlmin all of our experiments the binary file is loaded into the disk cache in advance so that lazy mapping will never fault to diskthis is similar to using the linux map populate flag that is our default loading mechanismthis section measures performance on shared tasks in order of increasing complexity sparse lookups evaluating perplexity of a large file and translation with mosesour test machine has two intel xeon e5410 processors totaling eight cores 32 gb ram and four seagate barracuda disks in software raid 0 running linux 2618sparse lookup is a key subproblem of language model querieswe compare three hash tables our probing implementation gccs hash set and boosts8 unorderedfor sorted lookup we compare interpolation search standard c binary search and standard c set based on redblack treesthe data structure was populated with 64bit integers sampled uniformly without replacementfor queries we uniformly sampled 10 million hits and 10 million missesthe same numbers were used for each data structuretime includes all queries but excludes random number generation and data structure populationfigure 2 shows timing resultsfor the probing implementation hash table sizes are in the millions so the most relevant values are on the right size of the graph where linear probing winsit also uses less memory with 8 bytes of overhead per entry linked list implementations hash set and unordered require at least 8 bytes per entry for pointersfurther the probing hash table does only one random lookup per query explaining why it is faster on large datainterpolation search has a more expensive pivot but performs less pivoting and reads so it is slow on small data and faster on large datathis suggests a strategy run interpolation search until the range narrows to 4096 or fewer entries then switch to binary searchhowever reads in the trie data structure are more expensive due to bitlevel packing so we found that it is faster to use interpolation search the entire timememory usage is the same as with binary search and lower than with setfor the perplexity and translation tasks we used srilm to build a 5gram english language model on 834 million tokens from europarl v6 and the 2011 workshop on machine translation news crawl corpus with duplicate lines removedthe model was built with open vocabulary modified kneserney smoothing and default pruning settings that remove singletons of order 3 and higherunlike germann et al we chose a model size so that all benchmarks fit comfortably in main memorybenchmarks use the packages binary format our code is also the fastest at building a binary fileas noted in section 44 disk cache state is controlled by reading the entire binary file before each test beginsfor randlm we used the settings in the documentation 8 bits per value and false positive probability 1 256we evaluate the time and memory consumption of each data structure by computing perplexity on 4 billion 
tokens from the english gigaword corpus tokens were converted to vocabulary identifiers in advance and state was carried from each query to the nexttable 1 shows results of the benchmarkcompared to decoding this task is cacheunfriendly in that repeated queries happen only as they naturally occur in texttherefore performance is more closely tied to the underlying data structure than to the cachein fact we found that enabling irstlms cache made it slightly slower so results in table 1 use irstlm without cachingmoses sets the cache size parameter to 50 so we did as well the resulting cache size is 282 gbthe results in table 1 show probing is 81 faster than trie which is in turn 31 faster than the fastest baselinememory usage in probing is high though srilm is even larger so where memory is of concern we recommend using trie if it fits in memoryfor even larger models we recommend randlm the memory consumption of the cache is not expected to grow with model size and it has been reported to scale wellanother option is the closedsource data structures from sheffield though we are not able to calculate their memory usage on our model results reported in their paper suggest lower memory consumption than trie on largescale models at the expense of cpu timethis task measures how well each package performs in machine translationwe run the baseline moses system for the frenchenglish track of the 2011 workshop on machine translation9 translating the 3003sentence test setbased on revision 4041 we modified moses to print process statistics before terminatingprocess statistics are already collected by the kernel srilms compact variant has an incredibly expensive destructor dwarfing the time it takes to perform translation and so we also modified moses to avoiding the destructor by calling exit instead of returning normallysince our destructor is an efficient call to munmap bypassing the destructor favors only other packagesthe binary language model from section 52 and text phrase table were forced into disk cache before each runtime starts when moses is launched and therefore includes model loading timethese conauses lossy compression bthe 8bit quantized variant returned incorrect probabilities as explained in section 3it did 402 queriesms using 180 gb cmemory use increased during scoring due to batch processing or caching the first value reports use immediately after loading while the second reports the increase during scoring dberkeleylm is written in java which requires memory be specified in advancetiming is based on plentiful memorythen we ran binary search to determine the least amount of memory with which it would runthe first value reports resident size after loading the second is the gap between postloading resident memory and peak virtual memorythe developer explained that the loading process requires extra memory that it then frees ebased on the ratio to sris speed reported in guthrie and hepple under different conditionsmemory usage is likely much lower than ours fthe original paper provided only 2s of query timing and compared with sri when it exceeded available ramthe authors provided us with a ratio between tpt and sri under different conditions alossy compression with the same weights blossy compression with retuned weights ditions make the value appropriate for estimating repeated run times such as in parameter tuningtable 2 shows singlethreaded results mostly for comparison to irstlm and table 3 shows multithreaded resultspart of the gap between resident and virtual memory is due to the 
time at which data was collectedstatistics are printed before moses exits and after parts of the decoder have been destroyedmoses keeps language models and many other resources in static variables so these are still resident in memoryfurther we report current resident memory and peak virtual memory because these are the most applicable statistics provided by the kerneloverall language modeling significantly impacts decoder performancein line with perplexity results from table 1 the probing model is the fastest followed by trie and subsequently other packageswe incur some additional memory cost due to storing state in each hypothesis though this is minimal compared with the size of the model itselfthe trie model continues to use the least memory of ing with map populate the defaultirst is not threadsafetime for moses itself to load including loading the language model and phrase table is includedalong with locking and background kernel operations such as prefaulting this explains why wall time is not oneeighth that of the singlethreaded case alossy compression with the same weights blossy compression with retuned weights the nonlossy optionsfor randlm and irstlm the effect of caching can be seen on speed and memory usagethis is most severe with randlm in the multithreaded case where each thread keeps a separate cache exceeding the original model sizeas noted for the perplexity task we do not expect cache to grow substantially with model size so randlm remains a lowmemory optioncaching for irstlm is smaller at 009 gb resident memory though it supports only a single threadthe berkeleylm directmapped cache is in principle faster than caches implemented by randlm and by irstlm so we may write a c equivalent implementation as future workrandlms stupid backoff variant stores counts instead of probabilities and backoffsit also does not prune so comparing to our pruned model would be unfairusing randlm and the documented settings we built a stupid backoff model on the same data as in section 52we used this data to build an unpruned arpa file with irstlms improvedkneserney option and the default three piecestable 4 shows the resultswe elected run moses singlethreaded to minimize the impact of randlms cache on memory userandlm is the clear winner in ram utilization but is also slower and lower qualityhowever the point of randlm is to scale to even larger data compensating for this loss in qualitythere any many techniques for improving language model speed and reducing memory consumptionfor speed we plan to implement the directmapped cache from berkeleylmmuch could be done to further reduce memory consumptionraj and whittaker show that integers in a trie implementation can be compressed substantiallyquantization can be improved by jointly encoding probability and backofffor even larger models storing counts is a possibilitybeyond optimizing the memory size of trie there are alternative data structures such as those in guthrie and hepple finally other packages implement language model estimation while we are currently dependent on them to generate an arpa filewhile we have minimized forwardlooking state in section 41 machine translation systems could also benefit by minimizing backwardlooking statefor example syntactic decoders perform dynamic programming parametrized by both backward and forwardlooking stateif they knew that the first four words in a hypergraph node would never extend to the left and form a 5gram then three or even fewer words could be kept in the backward statethis information is 
readily available in trie where adjacent records with equal pointers indicate no further extension of context is possibleexposing this information to the decoder will lead to better hypothesis recombinationgeneralizing state minimization the model could also provide explicit bounds on probability for both backward and forward extensionthis would result in better rest cost estimation and better pruning10 in general tighter but well factored integration between the decoder and language model should produce a significant speed improvementwe have described two data structures for language modeling that achieve substantial reductions in time and memory costthe probing model is 24 times as fast as the fastest alternative srilm and uses less memory toothe trie model uses less memory than the smallest lossless alternative and is still faster than srilmthese performance gains transfer to improved system runtime performance though we focused on moses our code is the best lossless option with cdec and joshuawe attain these results using several optimizations hashing custom lookup tables bitlevel packing and state for lefttoright query patternsthe code is opensource has minimal dependencies and offers both c and java interfaces for integrationalon lavie advised on this workhieu hoang named the code kenlm and assisted with moses along with barry haddowadam pauls provided a prerelease comparison to berkeleylm and an initial java interfacenicola bertoldi and marcello federico assisted with irstlmchris dyer integrated the code into cdecjuri ganitkevitch answered questions about joshuathis material is based upon work supported by the national science foundation graduate research fellowship under grant no0750271 and by the darpa gale program
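To make the PROBING lookup procedure described earlier concrete, here is a small Python sketch of a linear-probing table that reserves hash value 0 to flag empty buckets and wraps around on collision. KenLM itself implements this in C++ over 64-bit key hashes with packed probability and backoff values, so the class below is illustrative only; the class and parameter names are mine.

```python
class ProbingTable:
    """Sketch of a linear-probing hash table with hash 0 = empty."""
    EMPTY = 0

    def __init__(self, n_entries, space_multiplier=1.5):
        # More buckets than entries (multiplier m > 1) keeps probes short.
        self.size = int(n_entries * space_multiplier)
        self.keys = [self.EMPTY] * self.size
        self.values = [None] * self.size

    def _probe(self, key_hash):
        # Yield bucket indices starting at hash % size, wrapping around.
        i = key_hash % self.size
        while True:
            yield i
            i = (i + 1) % self.size

    def insert(self, key_hash, value):
        assert key_hash != self.EMPTY, "hash 0 is reserved for empty buckets"
        for i in self._probe(key_hash):
            if self.keys[i] in (self.EMPTY, key_hash):
                self.keys[i], self.values[i] = key_hash, value
                return  # assumes the table is never completely full

    def lookup(self, key_hash):
        for i in self._probe(key_hash):
            if self.keys[i] == key_hash:
                return self.values[i]
            if self.keys[i] == self.EMPTY:
                return None  # empty bucket reached: key not present
```

Because lookups touch one expected-constant run of adjacent buckets, average query time stays flat as the model grows, which is the property the perplexity and decoding benchmarks above exploit.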
W11-2123
KenLM: Faster and Smaller Language Model Queries. We present KenLM, a library that implements two data structures for efficient language model queries, reducing both time and memory costs. The PROBING data structure uses linear probing hash tables and is designed for speed. Compared with the widely-used SRILM, our PROBING model is 2.4 times as fast while using 57% of the memory. The TRIE data structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization, aimed at lower memory consumption. TRIE simultaneously uses less memory than the smallest lossless baseline and less CPU than the fastest baseline. Our code is open-source, thread-safe, and integrated into the Moses, cdec, and Joshua translation systems. This paper describes the several performance techniques used and presents benchmarks against alternative implementations. We describe a language modeling library.
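The TRIE structure's sorted arrays are searched with interpolation search, which estimates the pivot from the key's offset within the known key range instead of always bisecting; with roughly uniformly distributed keys (such as hashed vocabulary identifiers) this reduces the number of reads. A minimal sketch over a plain Python list of integers follows; the real implementation operates on bit-packed records and may combine strategies, so this is illustrative only.

```python
def interpolation_search(arr, key):
    """Return the index of key in sorted list arr, or None if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[hi] == arr[lo]:
            pivot = lo
        else:
            # Estimate the position from the key's offset within the range,
            # rather than always picking the midpoint as binary search does.
            pivot = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pivot] == key:
            return pivot
        if arr[pivot] < key:
            lo = pivot + 1
        else:
            hi = pivot - 1
    return None
```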
findings of the 2012 workshop on statistical machine translation this paper presents the results of the wmt12 shared tasks which included a translation task a task for machine translation evaluation metrics and a task for runtime estimation of machine translation quality we conducted a largescale manual evaluation of 103 machine translation systems submitted by 34 teams we used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 12 evaluation metrics we introduced a new quality estimation task this year and evaluated submissions from 11 teams this paper presents the results of the shared tasks of the workshop on statistical machine translation which was held at naacl 2012this workshop builds on six previous wmt workshops in the past the workshops have featured a number of shared tasks a translation task between english and other languages a task for automatic evaluation metrics to predict human judgments of translation quality and a system combination task to get better translation quality by combining the outputs of multiple translation systemsthis year we discontinued the system combination task and introduced a new task in its place ficulty is not uniform across all input typesit would thus be useful to have some measure of confidence in the quality of the output which has potential usefulness in a range of settings such as deciding whether output needs human postediting or selecting the best translation from outputs from a number of systemsthis shared task focused on sentencelevel estimation and challenged participants to rate the quality of sentences produced by a standard moses translation system on an englishspanish news corpus in one of two tasks ranking and scoringpredictions were scored against a blind test set manually annotated with relevant quality judgmentsthe primary objectives of wmt are to evaluate the state of the art in machine translation to disseminate common test sets and public training data with published performance numbers and to refine evaluation methodologies for machine translationas with previous workshops all of the data translations and collected human judgments are publicly available1 we hope these datasets form a valuable resource for research into statistical machine translation system combination and automatic evaluation or automatic prediction of translation quality2 overview of the shared translation task the recurring task of the workshop examines translation between english and four other languages german spanish french and czechwe created a test set for each language pair by translating newspaper articleswe additionally provided training data and two baseline systemsthe test data for this years task was created by hiring people to translate news articles that were drawn from a variety of sources from november 15 2011a total of 99 articles were selected in roughly equal amounts from a variety of czech english french german and spanish news sites2 czech blesk ctk e15 denık idnescz ihnedcz ukacko zheny french canoe croix le devoir les echos equipe le figaro liberation spanish abces milenio noroeste nacion el pais el periodico prensa libre el universal english cnn fox news los angeles times new york times newsweek time washington post german berliner kurier faz giessener allgemeine morgenpost spiegel welt the translations were created by the professional translation agency ceet3 all of the translations were done directly and not via an intermediate languagealthough the 
translations were done professionally we observed a number of errorsthese errors ranged from minor typographical mistakes to more serious errors of incorrect verb choices and nonsensical constructionsan example of the latter is the french sentence il a gratte une planche de beton perdit des pieces du vehiculehere the french verb gratter is incorrect and the phrase planche de beton does not make any sensewe did not quantify errors but collected a number of examples during the course of the manual evaluationthese errors were present in the data available to all the systems and therefore did not bias the results but we suggest that next year a manual review of the professionallycollected translations be taken prior to releasing the data in order to correct mistakes and provide feedback to the translation agencyas in past years we provided parallel corpora to train translation models monolingual corpora to train language models and development sets to tune system parameterssome statistics about the training materials are given in figure 1we received submissions from 34 groups across 18 institutionsthe participants are listed in table 1we also included two commercial offtheshelf mt systems three online statistical mt systems and three online rulebased mt systemsnot all systems supported all language pairswe note that the eight companies that developed these systems did not submit entries themselves but were instead gathered by translating the test data via their interfaces 4 they are therefore anonymized in this paperthe data used to construct these systems is not subject to the same constraints as the shared task participantsit is possible that part of the reference translations that were taken from online news sites could have been included in the systems models for instancewe therefore categorize all commercial systems as unconstrained when evaluating the resultsas with past workshops we placed greater emphasis on the human evaluation than on the automatic evaluation metric scoresit is our contention that automatic measures are an imperfect substitute for human assessment of translation qualitytherefore we define the manual evaluation to be primary and distinct words is based on the provided tokenizer use the human judgments to validate automatic metricsmanual evaluation is time consuming and it requires a large effort to conduct on the scale of our workshopwe distributed the workload across a number of people beginning with sharedtask participants and interested volunteersthis year we also opened up the evaluation to nonexpert annotators hired on amazon mechanical turk to ensure that the turkers provided high quality annotations we used controls constructed from the machine translation ranking tasks from prior yearscontrol items were selected such that there was high agreement across the system developers who completed that itemin all there were 229 people who participated in the manual evaluation with 91 workers putting in more than an hours worth of effort and 21 putting in more than four hoursafter filtering turker rankings against the controls to discard turkers who fell below a threshold level of agreement on the control questions there was a collective total of 336 hours of usable laborthis is similar to the total of 361 hours of labor collected for wmt11we asked annotators to evaluate system outputs by ranking translated sentences relative to each otherthis was our official determinant of translation qualitythe total number of judgments collected for each of the language pairs is 
given in table 2ranking translations relative to each other is a reasonably intuitive taskwe therefore kept the instructions simple you are shown a source sentence followed by several candidate translationsyour task is to rank the translations from best to worst each screen for this task involved judging translations of three consecutive source segmentsfor each source segment the annotator was shown the outputs of five submissions and asked to rank themwe refer to each of these as ranking tasks or sometimes blocksevery language task had more than five participating systems up to a maximum of 16 for the germanenglish taskrather than attempting to get a complete ordering over the systems in each ranking task we instead relied on random selection and a reasonably large sample size to make the comparisons fairwe use the collected rank labels to assign each system a score that reflects how highly that system was usually ranked by the annotatorsthe score for some system a reflects how frequently it was judged to be better than other systemsspecifically each block in which a appears includes four implicit pairwise comparisons a is rewarded once for each of the four comparisons in which a wins and its score is the number of such winning pairwise comparisons divided by the total number of nontying pairwise comparisons involving athis scoring metric is different from that used in prior years in two waysfirst the score previously included ties between system rankingsin that case the score for a reflected how often a was rated as better than or equal to other systems and was normalized by all comparisons involving ahowever this approach unfairly rewards systems that are similar this is problematic since many of the systems use variations of the same underlying decoder a second difference is that this year we no longer include comparisons against reference translationsin the past reference translations were included among the systems to be ranked as controls and the pairwise comparisons were used in determining the best systemhowever workers have a very clear preference for reference translations so including them unduly penalized systems that through luck of the draw were pitted against the references more oftenthese changes are part of a broader discussion of the best way to produce the system ranking which we discuss at length in section 4the system scores are reported in section 33appendix a provides detailed tables that contain pairwise headtohead comparisons between pairs of systemseach year we calculate the inter and intraannotator agreement for the human evaluation since a reasonable degree of agreement must exist to support our process as a valid evaluation setupto ensure we had enough data to measure agreement we occasionally showed annotators items that were repeated from previously completed itemsthese repeated items were drawn from ones completed by the same annotator and from different annotatorswe measured pairwise agreement among annotators using cohens kappa coefficient which is defined as where p is the proportion of times that the annotators agree and p is the proportion of time that they would agree by chancenote that n is basically a normalized version of p one which takes into account how meaningful it is for annotators to agree with each other by incorporating pnote also that n has a value of at most 1 with higher rates of agreement resulting in higher n we calculate p by examining all pairs of systems which had been judged by two or more judges and calculating the proportion of 
time that they agreed that a b a b or a 1we started the datacreation process from a total of 2000 sentences for the training set and the final 1832 sentences we selected as training data were the ones that passed through this intermediate cleaning stepbesides score disagreement we noticed another trend on the human judgements of peeffortsome judges tend to give more moderate scores while others like to commit also to scores that are more in the extremes of the available rangesince the quality estimation task would be negatively influenced by having most of the scores in the middle of the range we have chosen to compute the final effort scores as an weighted average between the three peeffort scores with more weight given to the judges with higher standard deviation from their own mean scorewe have used weights 3 2 and 1 for the three peeffort judges according to this criterionthere is an additional advantage resulting from this weighted average score instead of obtaining average numbers only at values x0 x33 and x66 7 the weighted averages are spread more evenly in the range 1 5a few variations of the training data were provided including version with cases restored and a version detokenizedin addition engineinternal information from moses such as phrase and word alignments detailed model scores etc nbest lists and stack information from the search graph as a word graph as produced by the moses engine were providedthe rationale behind releasing this engineinternal data was to make it possible for this sharedtask to address quality estimation using a glassbox approach that is making use of information from the internal workings of the mt enginethe test data was a subset of the wmt12 englishspanish test set consisting of 442 sentencesthe test data consists of the following files the first two files were the input for the qualityestimation sharedtask participating systemssince the moses engine used to create the mt outputs was the same as the one used for generating the training data the engineinternal resources are the same as the ones we released as part of the training data packagethe effort scores were released after the participants submitted their sharedtask submission and were solely used to evaluate the submissions according to the established metricsthe guidelines used by the peeffort judges to assign 15 scores were the same as the ones used for creating the training datawe have used the same criteria to ensure the consistency of the human judgmentsthe initial set of candidates consisted of 604 sentences of which only 442 met this criteriathe final scores used as goldvalues have been obtained using the same weightedaverage scheme as for the training datain addition to the training and test materials we made several additional resources that were used for the baseline qe system andor the smt system that produced the training and test datasets for the ranking task we defined a novel metric that provides some advantages over a more traditional ranking metrics like spearman correlationour metric called deltaavg assumes that the reference test set has a number associated with each entry that represents its extrinsic valuefor instance using the effort scale we described in section 61 we associate a value between 1 and 5 with each sentence representing the quality of that sentencegiven these values our metric does not need an explicit reference ranking the way the spearman ranking correlation does9 the goal of the deltaavg metric is to measure how valuable a proposed ranking is according to 
the extrinsic values associated with the test entrieswe first define a parameterized version of this metric called deltaavgnthe following notations are used for a given entry sentence s v represents the function that associates an extrinsic value to that entry we extend this notation to a set s with v representing the average of all v s s intuitively v is a quantitative measure of the quality of the set s as induced by the extrinsic values associated with the entries in s for a set of ranked entries s and a parameter n we denote by s1 the first quantile of set s s2 the second quantile and so on for n quantiles of equal sizes10 we also use the notation si vki skusing these notations we define quantile s1 and the overall quality for n 3 deltaavg3 v 2v v 2 hence it measures an average difference across two cases between the quality of the top quantile and the overall quality and between the quality of the top two quantiles and the overall qualityin general deltaavgn measures an average difference in quality across n 1 cases with each case measuring the impact in quality of adding an additional quantile from top to bottomfinally we define where n s2as before we write deltaavg for deltaavgv when the valuation function v is clear from the contextthe deltaavg metric is an average across all deltaavgn values for those n values for which the resulting quantiles have at least 2 entries the deltaavg metric has some important properties that are desired for a ranking metric deltaavgv n en1 v it measures the quality of a hypothesis rankk1 v ing from an extrinsic perspective when the valuation function v is clear from the context we write deltaavgn for deltaavgv nthe parameter n represents the number of quantiles we want to split the set s intofor instance n 2 gives deltaavg2 v v hence it measures the difference between the quality of the top in the rest of this paper we present results for deltaavg using as valuation function v the postediting effort scores as defined in section 61we also report the results of the ranking task using the moretraditional spearman correlationfor the scoring task we use two metrics that have been traditionally used for measuring performance for regression tasks mean absolute error as a primary metric and root of mean squared error as a secondary metricfor a given test set 5 with entries si71 i 5 we denote by h the proposed score for entry si and by v the reference value for entry si we formally define our metrics as follows where n 5both these metrics are nonparametric automatic and deterministic and extrinsically interpretablefor instance a mae value of 05 means that on average the absolute difference between the hypothesized score and the reference score value is 05the interpretation of rmse is similar with the difference that rmse penalizes larger errors more eleven teams submitted one or more systems to the shared task with most teams submitting for both ranking and scoring subtaskseach team was allowed up to two submissions in the descriptions below participation in the ranking is denoted and scoring is denoted baseline system the baseline system used the feature extraction software it analyzed the source and translation files and the smt training corpus to extract the following 17 systemindependent features that were found to be relevant in previous work these features are used to train a support vector machine regression algorithm using a radial basis function kernel with the libsvm package they e and c parameters were optimized using a gridsearch and 5fold cross 
validation on the training setwe note that although the system is referred to as a baseline it is in fact a strong systemalthough it is simple it has proved to be robust across a range of language pairs mt systems and text domainsit is a simpler variant of the system used in the rationale behind having such a strong baseline was to push systems to exploit alternative sources of information and combination learning approachessdllw both systems use 3 sets of features the 17 baseline features 8 systemdependent features from the decoder logs of moses and 20 features developed internallysome of these features made use of additional data andor resources such as a secondary mt system that was used as pseudoreference for the hypothesis and pos taggers for both languagesfeatureselection algorithms were used to select subsets of features that directly optimize the metrics used in the tasksystem sdllw m5pbestavgdelta uses a resulting 15feature set optimized towards the avgdelta metricit employs an m5p model to learn a decisiontree with only two linear equationssystem sdllw svm uses a 20feature set and an svm epsilon regression model with radial basis function kernel with parameters c gamma and epsilon tuned on a development set the model was trained with 10fold cross validation and the tuning process was restarted several times using different starting points and step sizes to avoid overfittingthe final model was selected based on its performance on the development set and the number of support vectorsuu system uu best uses the 17 baseline features plus 82 features from hardmeier and constituency trees over input sentences generated by the stanford parser and dependency trees over both input and output sentences generated by the maltparsersystem uu bltk uses only the 17 baseline features plus constituency and dependency trees as abovethe machine learning component in both cases is svm regression for the ranking task the ranking induced by the regression output is usedthe system uses polynomial kernels of degree 2 and 3 as well as two different types of tree kernels for constituency and dependency trees respectivelythe svm marginerror tradeoff the mixture proportion between tree kernels and polynomial kernels and the degree of the polynomial kernels were optimised using grid search with 5fold crossvalidation over the training settcd tcd m5presourcesonly uses only the baseline features while tcd m5pall uses the baseline and additional featuresa number of metrics were proposed which work in the following way given a sentence to evaluate it is compared against some reference data using similarity measures the training data was used as reference along with the google ngrams datasetseveral learning methods were tested using weka on the training data the system submission uses the m5p algorithm which performed bestcontrary to what had been observed on the training data using crossvalidation tcd m5presourcesonly performs better than tcd m5pall on the test dataprhltupv the system addresses the task using a regression algorithm with 475 features including the 17 the baseline featuresmost of the features are defined as word scoresamong them the features obtained form a smoothed naive bayes classifier have shown to be particularly interestingdifferent methods to combine wordlevel scores into sentencelevel features were investigatedfor model building svm regression was usedgiven the large number of features the training data provided as part of the task was insufficient yielding unstable systems with not so good 
performancedifferent feature selection methods were implemented to determine a subset of relevant featuresthe final submission used these relevant features to train an svm system whose parameters were optimized with respect to the final evaluation metricsuedin the system uses the baseline features along with some additional features binary features for named entities in source using stanford ner tagger binary indicators for occurrence of quotes or parenthetical segments words in upper case and numbers geometric mean of target word probabilities and probability of worst scoring word under a discriminative word lexicon model sparse neural network directly mapping from source to target with source and target side either filtered to relevant words or hashed to reduce dimensionality number of times at least a 3gram is seen normalized by sentence length and levenshtein distance of either source or translation to closest entry of the smt training corpus on word or character levelan ensemble of neural networks optimized for rmse was used for prediction and rankingthe contribution of new features was tested by adding them to the baseline features using 5fold crossvalidationmost features did not result in any improvement over the baselinethe final submission was a combination of all feature sets that showed improvementsjtu the task is treated as a regression problem using the epsilonsvm methodall features are extracted from the official data involving no external nlp toolsresourcesmost of them come from the phrase table decoding data and smt training datathe focus is on special word relations and special phrase patterns thus several feature templates on this topic are extractedsince the training data is not large enough to assign weights to all features methods for estimating common strings or sequences of words are usedthe training data is divided in 34 for training and 14 for development to filter ineffective featuresbesides the baseline features the final submission contains 18 feature templates and about 4 million features in totalwlvshef the systems integrates novel linguistic features from the source and target texts in an attempt to overcome the limitations of existing shallow features for quality estimationthese linguisticallyinformed features include partofspeech information phrase constituency subjectverb agreement and target lexicon analysis which are extracted using parsers corpora and auxiliary resourcessystems are built using epsilonsvm regression with parameters optimised using 5fold crossvalidation on the training set and two different feature sets wlvshef bl uses the 17 baseline features plus 70 linguistically inspired features while wlvshef fs uses a larger set of 70 linguistic plus 77 shallow features although results indicate that the models fall slightly below the baseline further analysis shows that linguistic information is indeed informative and complementary to shallow indicatorsdfki dfki morphposibm1lm is a simple linear interpolation of pos 6gram language model scores morpheme 6gram language model scores ibm 1 scores for pos 4grams and for morphemesthe parallel news corpora from wmt10 is used as extra data to train the language model and the ibm 1 modeldfki cfsplsreg and dfki grcfsmars use a collection of 264 features generated containing the baseline features and additional resourcesnumerous methods of feature selection were tested using 10fold cross validation on the training data reducing these to 23 feature setsseveral regression and classification algorithms were 
employed to train prediction modelsthe bestperforming models included features derived from pcfg parsing language quality checking and lm scoring of both source and target besides features from the smt search graph and a few baseline featuresdfki cfsplsreg uses a best first correlationbased feature selection technique trained with partial least squares regression while dfki grcfsmars uses a greedy stepwise correlationbased feature selection technique trained with multivariate adaptive regression splinesdcusymc systems are based on a classification approach using a set of features that includes the baseline featuresthe manually assigned quality scores provided for each mt output in the training set were rounded in order to apply classification algorithms on a limited set of classes three classifiers were combined by averaging the predicted classes svm using sequential minimal optimization and rbf kernel naive bayes and random forestdcusymc constrained is based on a set of 70 features derived only from the data provided for the taskthese include a set of features which attempt to model translation adequacy using a bilingual topic model built using latent dirichlet allocationdcusymc unconstrained is based on 308 features including the constrained ones and others extracted using external tools grammaticality features extracted from the source segments using the treetagger partofspeech tagger an english precision grammar the xle parser and the brown reranking parser and features based on partofspeech tag counts extracted from the mt output using a spanish treetagger modelloria several numerical or boolean features are computed from the source and target sentences and used to train an svm regression algorithm with linear and radial basis function as kernelfor the radial basis function a grid search is performed to optimise the parameter ythe official submission use the baseline features and a number of features proposed in previous work amounting to 66 featuresa feature selection algorithm is used in order to remove noninformative featuresno additional data other than that provided for the shared task is consideredthe training data is split into a training part and a development part to learn the regression model and optimise the parameters of the regression and for feature selectionupc the systems use several features on top of the baseline featuresthese are mostly based on different language models estimated on reference and automatic spanish translations of the newsv7 corpusthe automatic translations are generated by the system used for the shared taskngram lms are estimated on word forms pos tags stop words interleaved by pos tags stopword patterns plus variants in which the pos tags are replaced with the stem or root of each target wordthe pos tags on the target side are obtained by projecting source side annotations via automatic alignmentsthe resulting features are the perplexity of each additional language model according to the two translations and the ratio between the two perplexitiesadditionally features that estimate the likelihood of the projection of dependency parses on the two translations are encodedfor learning linear svm regression is usedoptimization was done via 5fold crossvalidation on a development datafeatures are encoded by means of their zscores ie how many standard deviations the observed value is above or below the meana variant of the system upc2 uses an option of svmlight that removes inconsistent points from the training set and retrains the model until 
convergencehere we give the official results for the ranking and scoring subtasks followed by a discussion that highlights the main findings of the tasktable 12 gives the results for the ranking subtaskthe table is sorted from best to worse using the deltaavg metric scores as primary key and the spearman correlation scores as secondary keythe winning submissions for the ranking subtask are sdllws m5pbestdeltaavg and svm entries which have deltaavg scores of 063 and 061 respectivelythe difference with respect to all the other submissions is statistically significant at p 005 using pairwise bootstrap resampling the stateoftheart baseline system has a deltaavg score of 055 five other submissions have performances that are not different from the baseline at a statisticallysignificant level as shown by the gray area in the middle of table 12three submissions scored higher than the baseline system at p 005 which indicates that this sharedtask succeeded in pushing the stateoftheart performance to new levelsthe range of performance for the submissions in the ranking task varies from a deltaavg of 065 down to a deltaavg of 015 in addition to the performance of the official submission we report here results obtained by various oracle methodsthe oracle methods make use of various metrics that are associated in a oracle manner to the test input the goldlabel effort metric for oracle effort the hter metric computed against the postedited translations as reference for oracle hter and the bleu metric computed against the same postedited translations as reference for oracle bleu11 the oracle effort deltaavg score of 095 gives an upperbound in terms of deltaavg for the test set used in this evaluationit basically indicates that for this set 11we use the bleu notation to underscore the use of postedited translations as reference as opposed to using references that are not the product of a postediting process as for the traditional bleu metric the difference in pe effort between the topquality quantiles and the overall quality is 095 on averagewe would like to emphasize here that the deltaavg metric does not have any apriori range for its valuesthe upperbound for instance is testdependent and therefore an oracle effort score is useful for understanding the performance level of real systemsubmissionsthe oracle hter deltaavg score of 077 is a more realistic upperbound for the current setsince the hter metric is considered a good approximation for the effort required in postediting ranking the test set based on the hter scores provides a good oracle comparison pointthe oracle based on bleu gives a lower deltaavg score which can be interpreted to mean that the bleu metric provides a lower correlation to postediting effort compared to hterwe also note here that there is room for improvement between the highestscoring submission and the oracle hter deltaavg score of 077we are not sure if this difference can be bridged completely but having measured a quantitative difference between the current bestperformance and a realistic upperbound is an important achievement of this sharedtaskthe results for the scoring task are presented in table 13 sorted from best to worse by using the mae metric scores as primary key and the rmse metric scores as secondary keythe winning submission is sdllws m5pbestdeltaavg with an mae of 061 and an rmse of 075 the strong stateoftheart qualityestimation baseline system is measured to have an mae of 069 and rmse of 082 with six other submissions having performances that are not different 
from the baseline at a statisticallysignificant level as shown by the gray area in the middle of table 13five submissions scored higher than the baseline system at p 005 which indicates that this sharedtask also succeeded in pushing the stateoftheart performance to new levels in terms of absolute scoringthe range of performance for the submissions in the scoring task varies from an mae of 061 up to an mae of 087 we also calculate scoring oracles using the methods used for the ranking oraclesthe difference is that the hter and bleu oracles need a way of mapping their scores into the 1 5 rangefor the comparison here we did the mapping by excluding the 5 top and bottom outlier scores and then linearly mapping the remaining range into the 15 5 rangethe oracle effort scores are not very indicative in this casehowever the oracle hter mae score of 056 is a somewhat realistic lowerbound for the current set we argue that since the hter metric is considered a good approximation for the effort required in postediting effortlike scores derived from the hter score provide a good way to compute oracle scores in a deterministic mannernote that again the oracle based on bleu gives a worse mae score at 061 which support the interpretation that the bleu metric provides a lower correlation to postediting effort compared to teroverall we consider the mae values for these hter and bleubased oracles to indicate high error marginsmost notably the performance of the best system gets the same mae score as the bleu oracle at 061 maewe take this to mean that the scoring task is more difficult compared to the ranking task since even oraclebased solutions get high error scoreswhen looking back at the goals that we identified for this sharedtask most of them have been successfully accomplishedin addition we have achieved additional ones that were not explicitly stated from the beginningin this section we discuss the accomplishments of this sharedtask in more detail starting from the defined goals and beyondidentify new and effective quality indicators the vast majority of the participating systems use external resources in addition to those provided for the task such as parsers partofspeech taggers named entity recognizers etcthis has resulted in a wide variety of features being usedmany of the novel features have tried to exploit linguisticallyoriented featureswhile some systems did not achieve improvements over the baseline while exploiting such features others have another significant set of features that has been previously overlooked is the feature set of the mt decoderconsidering statistical engines these features are immediately available for quality prediction from the internal trace of the mt decoder and its contribution is significantthese features which reflect the confidence of the smt system on the translations it produces have been shown to be complementary to other systemindependent featuresfor example the sdllw submissions incorporate these features and their feature selection strategy consistently favored this feature setthe power of this set of features alone is enough to yield outputs that would have been placed 4th in the ranking task and 5th in the scoring task a remarkable achievementanother interesting feature used by the sdllw submissions rely on pseudoreferences ie translations produced by other mt systems for the same input sentenceidentify alternative machine learning techniques although svm regression was used to compute the baseline performance the baseline system provided for the task 
consisted solely of a software to extract features as opposed to a model built using the regression algorithmthe rationale behind this decision was to encourage participants to experiment with alternative methods for combining different quality indicatorsthis was achieved to a large extentthe bestperforming machine learning techniques were found to be the m5p regression trees and the svm regression modelsthe merit of the m5p regression trees is that it provides compact models that are less prone to overfittingin contrast the svr models can easily overfit given the small amount of training data available and the large numbers of features commonly usedindeed many of the submissions that fell below the baseline performance can blame overfitting for their suboptimal performancehowever svr models can achieve high performance through the use of tuning and feature selection techniques to avoid overfittingstructured learning techniques were successfully used by the uu submissions the second best performing team to represent parse treesthis seems an interesting direction to encode other sorts of linguistic information about source and translation textsother interesting learning techniques have been tried such as neural networks partial least squares regression or multivariate adaptive regression splines but their performance does not suggest they are strong candidates for learning highlyperforming qualityestimation modelstest the suitability of evaluation metrics for quality estimation deltaavg our proposed metric for measuring ranking performance proved suitable for scoring the ranking subtaskits high correlation with the spearman ranking metric coupled with its extrinsic interpretability makes it a preferred choice for future measurementsit is also versatile in the sense that the its valuation function v can change to reflect different extrinsic measures of qualityestablish the state of the art performance the results on both the ranking and the scoring subtasks established new state of the art levels on the test set used in this shared taskin addition to these levels the oracle performance numbers also help understand the current performance level and how much of a gap in performance there still existsadditional data points regarding quality estimation performance are needed to establish how stable this measure of the performance gap iscontrast the performance of regression and ranking techniques most of the submissions in the ranking task used the results provided by a regression solution to infer the rankingsalso optimizing for ranking performance via a regression solution seems to result in regression models that perform very well as in the case of the topranked submissionthere appear to be significant differences between considering the quality estimation task as a ranking problem versus a scoring problemthe rankingbased approach appears to be somewhat simpler and more easily amenable to automatic solutions and at the same time provides immediate benefits when integrated into larger applications the scoringbased approach is more difficult as the high error rate even of oraclebased solutions indicatesit is also wellknown from human evaluations of mt outputs that human judges also have a difficult time agreeing on absolutenumber judgements to translationsour experience in creating the current datasets confirms that even with highlytrained professionals it is difficult to arrive at consistent judgementswe plan to have future investigations on how to achieve more consistent ways of generating 
absolute-number scores that reflect the quality of automated translations. As in previous incarnations of this workshop, we carried out an extensive manual and automatic evaluation of machine translation performance, and we used the human judgements that we collected to validate automatic metrics of translation quality. This year was also the debut of a new quality estimation task, which tries to predict the effort involved in having post-editors correct MT output. The quality estimation task differs from the metrics task in that it does not involve reference translations. As in previous years, all data sets generated by this workshop, including the human judgments, system translations, and automatic scores, are publicly available for other researchers to analyze. This work was supported in part by the EuroMatrixPlus project funded by the European Commission, the GALE program of the US Defense Advanced Research Projects Agency (contract no. HR0011-06-C-0022), the US National Science Foundation under grant IIS-0713448, and the CoSyne project (FP7-ICT-4-248531) funded by the European Commission. The views and findings are the authors' alone. Thanks to Adam Lopez for discussions about alternative ways of ranking the overall system scores. The quality estimation shared task organizers thank Wilker Aziz for his help with the SMT models and resources, and Mariano Felice for his help with the system for the extraction of baseline features.
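The scoring of the manual evaluation described above (each five-system ranking block contributing implicit pairwise comparisons, with a system's score being its wins divided by its non-tying comparisons) and the pairwise kappa agreement statistic can be written down compactly. The sketch below is one reading of that description, not the organizers' released code; the function names, the toy ranking blocks, and the use of the standard formula kappa = (P(A) - P(E)) / (1 - P(E)) with a uniform 1/3 chance-agreement value are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def system_scores(blocks):
    """blocks: iterable of dicts {system_name: rank} from 5-way ranking tasks.

    score(A) = wins(A) / non-tying pairwise comparisons involving A,
    with ties skipped and no comparisons against reference translations.
    """
    wins, comparisons = Counter(), Counter()
    for ranks in blocks:
        for a, b in combinations(ranks, 2):
            if ranks[a] == ranks[b]:
                continue                                  # ties are ignored entirely
            comparisons[a] += 1
            comparisons[b] += 1
            wins[a if ranks[a] < ranks[b] else b] += 1    # lower rank number = better
    return {s: wins[s] / comparisons[s] for s in comparisons}

def cohens_kappa(p_agree, p_chance):
    """kappa = (P(A) - P(E)) / (1 - P(E))."""
    return (p_agree - p_chance) / (1.0 - p_chance)

# toy example: two ranking blocks over three systems each
blocks = [
    {"sysA": 1, "sysB": 2, "sysC": 2},   # B and C tie, so that pair is skipped
    {"sysA": 2, "sysB": 1, "sysC": 3},
]
print(system_scores(blocks))             # e.g. sysA: 0.75, sysB: 0.667, sysC: 0.0
# three pairwise outcomes (<, =, >) give 1/3 chance agreement under a uniform assumption
print(round(cohens_kappa(p_agree=0.40, p_chance=1 / 3), 3))
```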
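The DeltaAvg, MAE, and RMSE definitions given above translate almost directly into code. The sketch below follows the textual definitions (DeltaAvg_n averages, over n-1 cases, the gap between the quality of the top k quantiles and the overall quality; DeltaAvg then averages DeltaAvg_n over all n whose quantiles contain at least two entries). It is an illustrative re-implementation rather than the official evaluation script, and the handling of remainders when the test set does not divide evenly into quantiles is an assumption.

```python
def delta_avg_n(values, n):
    """values: extrinsic scores (e.g. 1-5 effort) listed in the proposed ranking,
    best-ranked entry first. Computes DeltaAvg_n for a given number of quantiles."""
    size = len(values) // n                      # equal-sized quantiles (remainder ignored)
    overall = sum(values) / len(values)          # V(S), the overall quality
    total = 0.0
    for k in range(1, n):                        # quality of the top k quantiles vs. overall
        head = values[: k * size]
        total += sum(head) / len(head) - overall
    return total / (n - 1)

def delta_avg(values):
    """Average of DeltaAvg_n over all n whose quantiles have at least 2 entries.

    Assumes len(values) >= 4 so that at least n = 2 is available."""
    max_n = len(values) // 2
    return sum(delta_avg_n(values, n) for n in range(2, max_n + 1)) / (max_n - 1)

def mae(hyp, ref):
    return sum(abs(h - r) for h, r in zip(hyp, ref)) / len(ref)

def rmse(hyp, ref):
    return (sum((h - r) ** 2 for h, r in zip(hyp, ref)) / len(ref)) ** 0.5

# toy example: 8 sentences in the order proposed by a hypothetical QE system,
# with their gold post-editing effort scores attached
gold_in_predicted_order = [5, 4, 5, 3, 4, 2, 2, 1]
print(round(delta_avg(gold_in_predicted_order), 3))       # > 0: better than a random ranking
print(round(mae([3.2, 4.1, 2.0], [3.0, 5.0, 2.5]), 3))    # 0.533
print(round(rmse([3.2, 4.1, 2.0], [3.0, 5.0, 2.5]), 3))   # ~0.606
```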
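For readers who want to build a system in the spirit of the baseline described above (17 black-box features fed to epsilon-SVR with an RBF kernel, with C, gamma, and epsilon tuned by grid search under 5-fold cross-validation), the following scikit-learn sketch shows the learning side of such a setup; scikit-learn's SVR wraps LIBSVM, the package named above. The random feature matrix stands in for the official feature-extraction software, and the hyper-parameter grid and scaling step are assumptions, so this is not the official baseline code.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1832, 17))     # placeholder for the 17 baseline features per sentence
# fake 1-5 effort scores loosely dependent on two of the features
y = np.clip(3 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1832), 1, 5)

pipeline = Pipeline([
    ("scale", StandardScaler()),    # z-score features, as several submissions report doing
    ("svr", SVR(kernel="rbf")),     # epsilon-SVR with a radial basis function kernel
])

param_grid = {                      # illustrative grid; the task used its own search ranges
    "svr__C": [1, 10, 100],
    "svr__gamma": ["scale", 0.01, 0.1],
    "svr__epsilon": [0.1, 0.2, 0.5],
}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="neg_mean_absolute_error")
search.fit(X, y)
print("best parameters:", search.best_params_)
print("cross-validated MAE:", -search.best_score_)
```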
W12-3102
Findings of the 2012 Workshop on Statistical Machine Translation. This paper presents the results of the WMT12 shared tasks, which included a translation task, a task for machine translation evaluation metrics, and a task for run-time estimation of machine translation quality. We conducted a large-scale manual evaluation of 103 machine translation systems submitted by 34 teams. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 12 evaluation metrics. We introduced a new quality estimation task this year and evaluated submissions from 11 teams. We report results for several automatic metrics on the whole WMT12 English-to-Czech dataset.
robust bilingual word alignment for machine aided translation we have developed a new program called aligning parallel text text such as the canadian hansards that are available in two or more languages the program takes the of 1993 a robust alternative to sentencebased alignment programs and applies wordlevel constraints usa version of brown model 2 modified and extended to deal robustness issues tested on a subset of canadian hansards supplied by simard the combination of word_align plus char_align reduces the variance by a factor of over more importantly because word_align and char_align were designed to work robustly on texts that are smaller and more noisy than the hansards it has been possible to successfully deploy the programs at att language line services a commercial translation service to help them with difficult terminology aligning parallel texts has recently received considerable attention these methods have been used in machine translation terminology research and translation aids bilingual lexicography collocation studies wordsense disambiguation and information retrieval in a multilingual environment the information retrieval application may be of particular relevance to this audienceit would be highly desirable for users to be able to express queries in whatever language they chose and retrieve documents that may or may not have been written in the same language as the querylandauer and littman used svd analysis on the canadian hansards parliamentary debates that are published in both english and french in order to estimate a kind of soft thesaurusthey then showed that these estimates could be used to retrieve documents appropriately in the bilingual condition where the query and the document were written in different languageswe have been most interested in the terminology applicationhow does microsoft or some other software vendor want quotdialog boxquot quottext boxquot and quotmenu boxquot to be translated in their manualsconsiderable time is spent on terminology questions many of which have already been solved by other translators working on similar textsit ought to be possible for a translator to point at an instance of quotdialog boxquot in the english version of the microsoft windows manual and see how it was translated in the french version of the same manualalternatively the translator can ask for a bilingual concordance as shown in figure 1a pcbased terminology reuse tool is being developed to do just exactly thisthe tool depends crucially on the results of an alignment program to determine which parts of the source text correspond with which parts of the target textin working with the translators at att language line services a commercial translation service we discovered that we needed to completely redesign our alignment programs in order to deal more effectively with texts supplied by language line customersall too often the texts are not available in electronic form and may need to be scanned in and processed by an ocr deviceeven if the texts are available in electronic form it may not be worth the effort to clean them up by handreal texts are not like the hansards real texts are much smaller and not nearly as clean as the ideal texts that have been used in previous studiesto deal with these robustness issues church developed a characterbased alignment method called char_alignthe method was intended as a replacement for sentencebased methods which are very sensitive to noisethis paper describes a new program called word_align that starts with an initial 
quotroughquot alignment and produces improved alignments by exploiting constraints at the wordlevelthe alignment algorithm consists of two steps estimate translation probabilities and use these probabilities to search for most probable alignment paththe two steps are described in the following sectionthe translation probabilities are estimated using a method based on brown et al model 2 which is summarized in the following subsection 211then in subsection 212 we describe modifications that achieve three goals enable word_align to accept input which may not be aligned by sentence reduce the number of parameters that need to be estimated and prepare the ground for the second step the search for the best alignment in the context of their statistical machine translation project brown et al estimate pr the probability that f a sentence in one language is the translation of e a sentence in the other language pr is computed using the concept of altgnment denoted by a which is a set of connections between each french word in f and the corresponding english word in e a connection which we will write f e as con specifies that position j in f is connected to position i in e if a french word in f does not correspond to any english word in e then it is connected to the special word null notice that this model is directional as each french position is connected to exactly one position in the english sentence and accordingly the number of connections in an alignment is equal to the length of the french sentencehowever an english word may be connected to several words in the french sentence or not connected at allusing alignments the translation probability for a pair of sentences is expressed as where a is the set of all combinatorially possible alignments for the sentences f and e in their paper brown et al present a series of 5 models of prthe first two of these 5 models are summarized heremodel 1 model 1 assumes that pr depends primarily on t the probability that an occurrence of the english word e is translated as the french word f that is where cf e an irrelevant constant accounts for certain dependencies on sentence lengths which are not important for our purposes hereexcept for cf e most of the notation is borrowed from brown et althe variable j is used to refer to a position in a french sentence and the variable i is used to refer to a position in an english sentencethe expression f is used to refer to the french word in position j of a french sentence and e is used to refer to the english word in position i of an english sentencean alignment a is a set of pairs each of which connects a position in a french sentence with a corresponding position in an english sentencethe expression ai is used to refer to the english position that is connected to the french position j and the expression is used to refer to the english word in position aithe variable m is used to denote the length of the french sentence and the variable i is used to denote the length of the english sentencethere are quite a number of constraints that could be used to estimate prmodel 1 depends primarily on the translation probabilities t and does not make use of constraints involving the positions within an alignmentthese constraints will be exploited in model 2brown et al estimate t on the basis of a training set a set of english and french sentences that have been aligned at the sentence levelthose values of t that maximize the probability of the training set are called the maximum likelihood estimatesbrown et a show that the 
maximum likelihood estimates satisfy it follows from the definition of model 1 that the probability of a connection satisfies recall that h refers to the french word in position j of the french sentence f of length m and that ei refers to the english word in position i of the english sentence e of length ialso remember that position 0 is reserved for the null wordequations 3 and 4 are used iteratively to estimate tthat is we start with an initial guess for we then evaluation the right hand side of equation 4 and compute the probability of the connections in the training setthen we evaluate equation 3 obtain new estimates for the translation probabilities and repeat the process until it convergesthis iterative process is known as the them algorithm and has been shown to converge to a stationary point moreover brown et a show that model 1 has a unique maximum and therefore in this special case the them algorithm is guaranteed to converge to the maximum likelihood solution and does not depend on the initial guessmodel 2 model 2 improves upon model 1 by making use of the positions within an alignmentfor instance it is much more likely that the first word of an english sentence will be connected to a word near the beginning of the corresponding french sentence than to some word near the end of the french sentencemodel 2 enhances model 1 with the assumption that the probability of a connection con depends also on j and i as well as on m and 1 this dependence is expressed through the term a which denotes the probability of connecting position j in a french sentence of length m with position i in an english sentence of length isince each french position is connected to exactly one english position the constraint elc where e cf is an irrelevant constant as in model 1 equation 3 holds for the maximum likelihood estimates of the translation probabilitiesthe corresponding equation for the maxwhere cotile and con denote sets of connections the set coarf contains all connections in the training data between f and e and the set con contains all connections between some french word and e the probability of a connection cone is the sum of the probabilities of all alignments that contain itnotice that equation 3 satisfies the constraint e t 1 for each english word e where conv denotes the set of connections in the training data between positions j and i in french and english sentences of lengths in and 1 respectivelysimilarly coa73 denotes the set of connections between position j and some english position in sentences of these lengthsinstead of equation 4 we obtain the following equation for the probability of a connection notice that model 1 is a special case of model 2 where a is held fixed at as before the them algorithm is used to compute maximum likelihood estimates for t and a however in this case model 2 does not have a unique maximum and therefore the results depend on the initial guessesbrown et al therefore use model 1 to obtain estimates for t which do not depend on the initial guessesthese values are then used as the initial guesses of te in model 2as mentioned in the introduction we are interested in aligning corpora that are smaller and noisier than the hansardsthis implies severe practical constraints on the word alignment algorithmas mentioned earlier we chose to start with the output of char_align because it is more robust than alternative sentencebased methodsthis choice of course requires certain modifications to the model of brown et al to accommodate as input an initial rough alignment 
instead of pairs of aligned sentencesit is also useful to reduce the number of parameters that we are trying to estimate because we have much less data and much more noisethe paragraphs below describe our modifications which are intended to meet these somewhat different requirementsthe two major modifications are replacing the sentencebysentence alignment with a single global alignment for the entire corpus and replacing the set of probabilities a with a small set of offset probabilitiesword_align starts with an initial rough alignment i which maps french positions to english positions our goal is to find a global alignmenta which is more accurate than ito achieve this goal we first use i to determine which connections will be considered for alet conii denote a connection between position j in the french corpus and position i in the english corpus we assume that conii is a possible connection only if i falls within a limited window which is centered around 1 such that where w is a predetermined parameter specifying the size of the window connections that fall outside this window are assumed to have a zero probabilitythis assumption replaces the assumption of brown et al that connections which cross boundaries of aligned sentences have a zero probabilityin this new framework equation 3 becomes where conl and cone are taken from the set of possible connections as defined by turning to model 2 the parameters of the form a are somewhat more problematicfirst since there are no sentence boundaries there are no direct equivalents for i j in and 1secondly there are too many parameters to be estimated given the limited size of our corpora fortunately these parameters are highly redundantfor example it is likely that a will be very close to a and ain order to deal with these concerns we replace probabilities of the form a with a small set of offset probabilitieswe use k to denote the offset between i an english position which corresponds to the french position j and the english position which the input alignment i connects to j k i ian offset probability o is the probability of having an offset k for some arbitrary connectionaccording to k ranges between w and w thus instead of equation 6 we have where con is the set of all connections and coark is the set of all connections with offset k instead of equation 7 we have the last three equations are used in the them algorithm in an iterative fashion as before to estimate the translation probabilities and the offset probabilitiestable 1 and figure 2 show some values that were estimated in this waythe input consisted of a pair of microsoft windows manuals in english and its equivalent in french table 1 shows four french words and the four most likely translations sorted by t1 note that the correct translation are usually near the front of the list though there is a tendency for the program to be confused by collocates such as quotinformation aboutquotfigure 2 shows the probability estimates for offsets from the initial alignment inote that smaller offsets are more likely than larger ones as we would expectmoreover the distribution is reasonably close to normal as indicated by the dotted line which was generated by a gaussian with a mean of 0 and standard deviation of 102we have found it useful to make use of three filters to deal with robustness issuesempirically we found that both high frequency and low frequency words caused difficulties and therefore connections involving these words are filtered outthe thresholds are set to exclude the most frequent 
function words and punctuations as well as words with less than 3 occurrencesin addition following a similar filter by brown et al small values of t are set to 0 after each iteration of the them algorithm because these small values often correspond to inappropriate translationsfinally connections to null are ignoredsuch connections model french words that are often omitted in the english translationhowever because of ocr errors and other sources of noise it was decided that this phenomenon was too difficult to modelsome words will not be aligned because of these heuristicsit may not be necessary however to align all words in order to meet the goal of helping translators with difficult terminologythe them algorithm produces two sets of maximum likelihood probability estimates translation probabilities 1 and offset probabilities obrown et al select their preferred alignment simply by choosing the most probable alignment according to the maximum likelihood probabilities relative to the given sentence alignmentin the terms of our in this example french is used as the source language and english as the target2the center of the estimated distribution seems more fiat than in a normal distributionthis might be explained by a higher tendency for local changes of word order within phrases than for order changes among phrasesthis is merely a hypothesis though which requires further testingunfortunately this method does not model the dependence between connections for french words that are near one anotherfor example the fact that the french position j was connected to the english position i will not increase the probability that j 1 will be connected to an english position near ithe absence of such dependence can easily confuse the program mainly in aligning adjacent occurrences of the same word which are common in technical textsbrown et al introduce such dependence in their model 4we have selected a simpler alternative defined in terms of offset probabilitiesthe first step in finding the most probable alignment is to determine the relevant connectzons for each french positionrelevant connections are required to be reasonably likely that is their translation probability should exceed some minimal thresholdmoreover they are required to fall within a window between 1 w and 1 w in the english corpus as in the previous step we call a french position relevant if it has at least one relevant connectioneach alignment a then consists of exactly one connection for each relevant french position to model the dependency between connections in an alignment we assume that the offset of a connection is determined relative to the preceding connection in a instead of relative to the initial alignment ifor this purpose we define a as a linear extrapolation from the preceding connection in a where previ is the last french position before j which is aligned by a and ne and nf are the lengths of the english and french corporaa thus predicts the connection of j knowing the connection of jprey and assuming that the two languages have the same word orderinstead of the most probable alignment maximizes french word zone fermer informations insertion english translations box area want in close when close selected information about for see insertion point edit to we approximate the offset probabilities o relative to a using the maximum likelihood estimates which were computed relative to i we use a dynamic programming algorithm to find the most probable alignmentthis enables us to know the value a 1 when dealing with position 
jto avoid connections with very low probability we require that i o exceeds a prespecified threshold t3if the threshold is not exceeded the connection is dropped from the alignment and i o for that connection is set to t when computing t can therefore be interpreted as a global setting of the probability that a random position will be connected to the null in fact the threshold on t which is used to determine the relevant connections is used just as an efficient early application of the threshold t this early application is possible when t o t where kmax is the value of k with maximal oenglish worda similar dynamic programming approach was used by gale and church for word alignment to handle dependency between connectionsword_olign was first evaluated on a representative sample of canadian hansards the sample was kindly provided by simard et al along with alignments of sentence boundaries as determined by their panel of 8 judges ten iterations of the them algorithm were computed to estimate the parameters of the modelthe window size was set to 20 words in each direction and the minimal threshold for t was set to 0005we considered connections whose source and target words had frequencies between 3 and 1700 in this experiment we used french as the source language and english as the target languagefigure 3 presents the alignment error rate of word_alignit is compared with the error rate of word_align input ie the initial rough alignment which is produced by char_alignthe errors are sampled at sentence boundaries and are measured as the relative distance between the output of the alignment program and the quottruequot alignment as defined by the human judges5the histograms present errors in the range of 2020 which covers about 95 of the data6it can be seen that word_align decreases the error rate significantly in 55 of the cases there is no error in word_align output in 73 the distance from the correct alignment is at most 1 and in 84 the distance is at most 3a second evaluation of word_align was performed on noisy technical documents of the type typically available for att language line serviceswe used the english and french versions of a manual of monitoring equipment both scanned by an ocr devicewe sampled the english vocabulary with frequency between three and 450 occurrences the same vocabulary that was used for alignmentwe sampled 100 types from the top fifth by frequency of the vocabulary 80 types from the second quintile 60 from the third 40 from the fourth and 20 from the bottom quintilewe used this stratified sampling because we wanted to make more accurate statements about our error rate by tokens than we would have obtained from random sampling or even from equal weighting of the quintilesafter choosing the 300 types from the vocabulary list one token for each type was chosen at random from the corpusby hand the best corresponding position in the french version was chosen to be compared with word_align outputtable 2 summarizes the results of the second experimentthe figures indicate the expected relative frequency of each offset from the correct alignmentthis relative frequency was computed according to the word frequencies in the stratified sampleas shown in the table for 605 of the tokens the alignment is accurate and in 84 the offset from the correct alingment is at most 3these figures demonstrate the usefulness of word_align for constructing bilingual lexicons and its impact on as explained earlier word_align produces a partial alignmentfor the purpose of the evaluation we used 
linear interpolation to get alignments for all the positions in the sample (recall that the window size we used is 20 words in each direction, which means that word_align cannot recover from larger errors in char_align). figure 3 plots the char_align and word_align error histograms and shows that word_align reduces the variance by a factor of 5 over char_align alone. the figures also demonstrate the impact of word_align on the quality of bilingual concordances: indeed, using bilingual concordances which are based on word_align output, the translators at att language line services are now producing bilingual terminology lexicons at a rate of 60-100 terms per hour. this is compared with the previous rate of about 30 terms per hour using char_align output, and a far lower rate before alignment tools were available. compared with other word alignment algorithms, word_align does not require sentence alignment as input and was shown to produce useful alignments for small and noisy corpora. its robustness was achieved by modifying brown et al model 2 to handle an initial "rough" alignment, reducing the number of parameters, and introducing a dependency between alignments of adjacent words. taking the output of char_align as input, word_align produces significantly better word-level alignments on the kind of corpora that are typically available to translators. this improvement increased the rate of constructing bilingual terminology lexicons at att language line services by a factor of 2-3. in addition, the alignments may also be helpful to developers of lexicons for machine translation systems. word_align thus provides an example of how a model such as brown et al model 2, originally designed for research in statistical machine translation, can be modified to achieve practical though less ambitious goals in the near term
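the evaluation above fills in the positions that word_align leaves unaligned by linear interpolation between the connections that survived the probability threshold. the following is a minimal sketch of that interpolation step, assuming a partial alignment represented as a dict from source positions to target positions; the function name and data layout are illustrative and are not taken from word_align itself.

```python
from bisect import bisect_left

def interpolate_alignment(partial, source_len):
    """Fill in target positions for source positions left unaligned,
    by linear interpolation between the nearest aligned source
    positions on either side.

    `partial` maps source positions to target positions (the
    connections that survived the probability threshold); 0-based ints.
    """
    anchors = sorted(partial)
    if not anchors:
        return {}
    full = {}
    for i in range(source_len):
        if i in partial:
            full[i] = partial[i]
            continue
        k = bisect_left(anchors, i)
        if k == 0:                      # before the first anchor: copy it
            full[i] = partial[anchors[0]]
        elif k == len(anchors):         # after the last anchor: copy it
            full[i] = partial[anchors[-1]]
        else:
            lo, hi = anchors[k - 1], anchors[k]
            frac = (i - lo) / (hi - lo)
            full[i] = round(partial[lo] + frac * (partial[hi] - partial[lo]))
    return full

# example: positions 0 and 4 were aligned confidently, 1-3 are interpolated
print(interpolate_alignment({0: 0, 4: 6}, 5))   # {0: 0, 1: 2, 2: 3, 3: 4, 4: 6}
```

positions that fall outside the aligned region are simply copied from the nearest anchor here, which is one reasonable choice rather than the exact behaviour reported in the paper.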
W93-0301
robust bilingual word alignment for machine aided translation. we have developed a new program called word_align for aligning parallel texts, such as the canadian hansards, that are available in two or more languages. the program takes the output of char_align, a robust alternative to sentence-based alignment programs, and applies word-level constraints using a version of brown et al model 2, modified and extended to deal with robustness issues. word_align was tested on a subset of the canadian hansards supplied by simard et al. the combination of word_align plus char_align reduces the variance by a factor of 5 over char_align alone. more importantly, because word_align and char_align were designed to work robustly on texts that are smaller and noisier than the hansards, it has been possible to successfully deploy the programs at att language line services, a commercial translation service, to help them with difficult terminology. we show that knowledge of target-text length is not crucial to the model performance
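the error figures quoted in the evaluations above are offsets between the program's output and the human-judged ("true") alignment, reported as the fraction of cases within 0, 1 or 3 positions. a minimal sketch of that measurement, assuming predicted and reference positions are given as parallel lists; the function name is illustrative.

```python
def offset_accuracy(predicted, reference, max_offsets=(0, 1, 3)):
    """Fraction of evaluation points whose predicted position is within
    each given offset of the reference position."""
    offsets = [abs(p - r) for p, r in zip(predicted, reference)]
    n = len(offsets)
    return {k: sum(o <= k for o in offsets) / n for k in max_offsets}

# toy usage: predicted vs. human-judged positions at sentence boundaries
print(offset_accuracy([10, 55, 102, 180], [10, 54, 99, 180]))
# {0: 0.5, 1: 0.75, 3: 1.0}
```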
has a consensus nl generation architecture appeared and is it psycholinguistically plausible i survey some recent applicationsoriented nl generation systems and claim that despite very different theoretical backgrounds these systems have a remarkably similar architecture in terms of the modules they divide the generation process into the computations these modules perform and the way the modules interact with each other i also compare this consensus architecture among applied nlg systems with psycholinguistic knowledge about how humans speak and argue that at least some aspects of the consensus architecture seem to be in agreement with what is known about human language production despite the fact that psycholinguistic plausibility was not in general a goal of the developers of the surveyed systems in this paper i survey some recentlydeveloped nl generation systems that cover the complete generation process and are designed to be used by application programs as well as making some theoretical pointi claim that despite their widely differing theoretical backgrounds the surveyed systems are similar in terms of the modules they divide the generation process into the way the modules interact with each other and the kinds of computations each individual module performsin other words despite different theoretical claims there is a remarkable level of similarity in how these systems really work that is a de facto consensus architecture seems to be emerging for how applied nlg systems should generate textthe existence of such agreement among the surveyed systems is especially surprising because in some cases the theoretical backgrounds of the systems examined argue against some aspects of the consensus architecturei also compare the consensus architecture to psycholinguistic knowledge about language generation in human speakerssuch a comparison is often difficult to make because of the many gaps in our current knowledge about how humans speaknevertheless i argue that as far as such a comparison can be made the specific design decisions embodied in the consensus architecture seem to often be more or less in accord with current knowledge of human language generationthis is again perhaps somewhat surprising since psycholinguistic plausibility was not in general a goal of the developers of the examined systemsperhaps this indicates that there is some connection between the engineering considerations that underlie the design decisions made in the consensus architecture and the maximizeperformanceintherealworld criteria that drove the evolutionary processes that created the human language processorif there is some truth to this hypothesis then studying the engineering issues involved in building applied systems may lead to insights about the way the human language system worksthe analysis presented here is based on a survey of generation systems that in short the idea was to survey recent systems that looked at the entire generation problem and that were motivated by applications and engineering considerations as well as linguistic theorythe systems examined were1 fuf elhadad 1992 developed at columbia university and used in several projects there including comet and advisor ii i will use the term fuf in this paper to refer to both fuf itself and the various related systems at columbiaseveral other universities have also recently begun to use fuf in their researchfuf is based on kay functional unification formalism kay 1979idas reiter 0 at 19921 developed at edinburgh university idas was a prototype 
online documentation system for users of complex machineryfrom a theoretical perspective idas main objective was to show that a single representation and reasoning system can be used for both domain and linguistic knowledge reiter and mellish 1992joyce rambow and korelsky 1992 developed at odyssey research associates joyce is taken as a representative of several nl generation systems produced by ora and cogentex including gossip fog and lfsthese systems are all aimed at commercial or government applications and are all based on melcuk meaningtext theory meleuk 19881penman penman natural language group 1989 under development at isi since the early 1980 penman has been used in several demonstration systemsas usual i will use penman to refer to both penman itself and the systems that were built around itpenman theoretical basis is systemic linguistics halliday 1985 and rhetoricalstructure theoryspokesman meteer 1989 spokesman was developed at bbn for various applications and has some of the same design goals as mcdonald the selection rules are of course not completely well defined which means there was inevitably some arbitrariness when i used them to select particular systems to indude in the surveyi encourage any reader who believes that i have unfairly omitted a system to contact me so that this system can be included in future versions of the surveymumble system mcdonald 19831 including in particular the desire to build a system that at least in some respects is psycholinguistically plausiblespokesman uses treeadjoining grammars josh 1987 for syntactic processingall of the examined systems produce english and they also are mostly aimed at producing technical texts it would be interesting to examine systems aimed at other languages or other types of applications and see if this caused any architectural differencesas can be seen the chosen systems have widely different theoretical basesit is therefore quite interesting that they all seem to have ended up with broadly similar architectures in that they break up the generation process into a similar set of modules and they all use a pipeline architecture to connect the modules ie the modules are linearly ordered and information flows from each module to its successor in the pipeline with no feedback from later modules to earlier modulesthe actual modules possessed by the systems are content determination this maps the initial input of the generation system onto a semantic form possibly annotated with rhetorical relationssentence planningmany names have been used for this process here i use one suggested by rambow and korelsky 1992the basic goal is to map conceptual structures onto linguistic ones this includes generating referring expressions choosing content words and grammatical relationships and grouping information into clauses and sentencessurface generation i use this term in a fairly narrow sense here to mean a module that takes as input an abstract specification of information to be communicated by syntax and function words and produces as output a surface form that communicates this information all of the examined systems had separate sentenceplanning and surfacegeneration modules and the various intermediate forms used to pass information between these modules conveyed similar kinds of informationmorphology most of the systems have a fairly simple morphological component presumably since english morphology is quite simpleformatting idas joyce and penman also contain mechanisms for formatting their output andor adding hypertext 
annotations to enable users to click on portions of the generated textthis section describes the consensus architecture in more detail with particular emphasis on some of the design decisions embodied in it that more theoretically motivated researchers have disagreed withit furthermore examines the plausibility of these decisions from a psycholinguistic perspective and argues that in many respects they agree with what is known about how humans generate textthe consensus architecture divides the generation process into multiple modules with information flowing in a pipeline fashion from one module to the nextby pipeline i mean that the modules are arranged in a linear order and each module receives information only from its predecessor and sends information only to its successorinformation does not flow backwards from a module to its predecessor and global blackboards that all modules can access and modify are not usedi do not mean by pipeline that generation must be incremental in the sense that say syntactic processing of the first sentence is done at the same time as semantic processing of the second i believe most of the systems examined could in fact do this but they have not bothered to do so 411 design decision avoid integrated architecture many nl generation researchers have argued against dividing the generation process into modules perhaps the bestknown are appelt 1985 and danlos 1984others such as rubinoff 1992 have accepted modules but have argued that the architecture must allow feedback between later modules and earlier modules which argues against the oneway information flow of the pipeline architecturethe argument against pipelines and modules is almost always some variant of there are linguistic phenomena that can only be properly handled by looking at constraints from different levels and this is difficult to do in a pipeline systemto take one fairly random example danlos and namer 1988 have pointed out that since the french masculine and feminine pronouns le and la are abbreviated to l before a word that starts with a vowel and since in some cases le and la may be unambiguous references while l is not the referring expression system must have some knowledge of surface word order and selected content and function words before it can decide whether a pronoun is acceptable this will not be possible if referring expressions are chosen before syntactic structures are built as happens in the consensus architecturethere is undoubtably some truth to these arguments but the applications builder also has to consider the engineering reality that the sorts of systems proposed by appelt danlos and namer are extremely difficult to build from an engineering perspectivethe engineering argument for modularization is particularly strong marr has put this very well in man 1976 page 485 any large computation should be split up and implemented as a collection of small subparts that are as nearly independent of one another as the overall task allowsif a process is not designed in this way a small change in one place will have consequences in many other placesthis means that the process as a whole becomes extremely difficult to debug or improve whether by a human designer or in the course of natural evolution because a small chance to improve one part has to be accompanied by many simultaneous compensatory changes elsewheremarr argues that a modularized structure makes sense both for human engineers and for the evolutionary process that produced the human brainthe evidence is indeed strong that 
the human brain is highly modularizedthis evidence comes from many sources but i think perhaps the most convincing evidence is from studies of humans with brain damagesuch people tend to lose specific abilities not suffer overall degradation that applies equally to all abilitiesellis and young 1988 provide an excellent summary of such work and list patients that for example ical structures but have a severely restricted vocabulary ie have impaired lexical choice dcalizing itthe main engineering argument for arranging modules into a pipeline instead of a more complex structure is again simplicity and ease of debuggingin a oneway pipeline of n modules there are only n1 interfaces between modules while a pipeline with twoway information flow has 2 interfaces and a system that fully connects each module with every other module will have n interfacesa system that has a twoway interface between every possible pair of modules will undoubtably be able to handle many linguistic phenomena in a more powerful elegant principled etc manner than a system that arranges modules in a simple oneway pipeline such a system will also however be much more difficult to build and debugit is easy to argue that a oneway pipeline is worse at handling some linguistic phenomena than a richlyconnected architecture but this is not the end of the story for the systembuilding engineer he or she has to balance the cost of the pipeline being inefficient andor inelegant at handling some phenomena against the benefit of the pipeline being a much easier structure to build and debugwe have insufficient engineering data at present to make any wellsubstantiated claims about whether the oneway pipeline has the optimal costbenefit tradeoff or not but the circumstantial evidence on this question is striking despite the fact that so many theoretical papers have argued against pipelines and very few have argued for pipelines every one of the applicationsoriented systems examined in this survey chose to use the oneway pipeline architecturein other words an applications systems builder can not look at particular linguistic phenomena in isolation he or she must weigh the benefits of properly handling these phenomena against the cost of implementing the proposed architecturein the french pronoun case described by danlos and namer for example the applications builder might argue that in the great majority of cases no harm will in fact be done if the referringexpression generator simply ignores the possibility that pronouns may be abbreviated to l especially given humans ability to use context to disambiguate references and if a situation does arise where it is absolutely essential that the human reader be able to correctly disambiguate a reference then perhaps pronouns should not be used in any casegiven this and the very high engineering cost of building an integrated architecture of the sort proposed by danlos and namer is implementing such an architecture truly the most effective way of using scarce engineering resourcespsycholinguistic research on selfmonitoring and selfrepair suggests that there is some feedback in the human language generation system so the human language processor is probably more complex than a simple oneway pipeline but it may not be much more complexto the best of my knowledge most of the observed selfrepair phenomena could be explained by an architecture that added a few feedback loops from later stages of the pipeline back to the initial planner this would only slightly add to the number of intermodule 
interfaces and hence would have a much lower engineering cost than implementing the fully connected every module communicates with every other module architecturewhether the human language engine is organized as a pipeline plus a few feedback loops or an every module talks to every other module architecture is unknown at this point hopefully new psycholinguistic experiments will she would more light on this issuei think it would be very interesting for example to test human french speakers on situations of the sort described by danlos and namer and see what they actually did in such contexts i do not believe that such an experiment has been performedcontent determination takes the initial input to the generation system which may be for example a query to be answered or an intention to be satisfied and produces from it a emantic form conceptual representation or list of propositions ie a specification of the meaning content of the output texti will in this paper use the term semantic representation for this meaning specificationroughly speaking the semantic representations used by all of the examined systems can be characterized as some kind of emantic net where the primitive elements in the net are conceptual instead of linguistic in some cases the semantic nets also include discourse and rhetorical relations between portions of the net subsequent portions of the generator use these to generate discourse connectives control formatting etcthe systems examined use quite different contentdetermination mechanisms schemas mckeown 1985 were the most popular approachcontent determination in the systems examined basically performs two functions deep content determination determine what information should be communicated to the hearerrhetorical planning organize this information in a rhetorically coherent mannerhovy 1988 has proposed an architecture where these tasks are performed separately among the systems examined however hovy is unique in taking this approach the builders of the other systems apparently believe that these two processes are so closely related that they should be performed simultaneouslyi am not aware of any psychological data that directly address this issuehowever hovy architecture requires the languageproducing agent to completely determine the content of a paragraph before hesheit can begin to utter it and intuitively it seems implausible to me that human speakers do this it also goes against incremental theories of human speech production levelt 1989 pages 2427the sentence planner converts the semantic representation which is specified in terms of domain entities into an abstract linguistic representation that specifies content words and grammatical relationshipsi will use mercuk term deep syntactic form for this representationall of the systems analyzed possess a deep syntactic representation none attempt to go from semantics to surface form in a single stepidas and penman use variants of the same deep syntactic language spl kasper 1989fuf and joyce use deep syntactic languages that are based on functional unification and meaningtext theory but these convey much the same information as splspokesman uses the realization specification language of mumble mcdonald 1983 as its deep syntactic representation i have found it difficult to compare this language to the others but mcdonald agrees that it conveys essentially the same information as splunfortunately while all of the systems possessed a module which converted semantic representations into deep syntactic ones each system 
used a different name for this modulein fuf it is the lexical chooser in idas it is the text planner in joyce it is the entence planner in spokesman it is the text structurer and in penman it does not seem to have a name at all eg hovy 1988 simply refers to pregeneration textplanning tasksi use the joyce term here because i think it is the least ambiguousthe specific tasks performed by the sentence planner include relatively little is said in the papers about clause grouping and referringexpression generation but more information is available on the first task mapping domain entities onto linguistic entitiesall the examined systems except perhaps penman use a variant of what i have elsewhere called the tructuremapping approach reiter 19912 i do not know what approach penman uses structuremapping is based on a dictionary that lists the semanticnet equivalents of linguistic resources meteer 1991 such as content words and grammatical relationshipsthis dictionary might for example indicate that the english word sisteris equivalent to the structure sibling with attribute sexfemale and that the domain relation partof can be expressed with the grammatical possessive eg the car enginegiven this dictionary the structuremapping algorithm iteratively replaces semantic structures by linguistic ones until the entire semantic net has been recoded into a linguistic structurethere may be several ways of recoding a semantic representation into a linguistic one which means structuremapping systems have a choice between using the first acceptable reduction they find or doing a search for a reduction that maximizes some optimality criterion the papers i read were not very clear on this issue but i believe that while most of the systems surveyed use the first acceptable reduction found fuf in some cases searches for an optimal reduction choice from surface realization the consensus architecture clearly separates lexical choice of content words from syntactic processing in other words it does not use an integrated lexicogrammar which systemic theorists in particular have argued for and which is implicit in some unificationbased approaches such as the semantic headdriven algorithm shieber et al 19901despite these theoretical arguments none of the systems examined used an integrated lexicogrammar including unificationbased fuf and systemicbased penman3 in contrast earlier unificationbased syseven though i have previously argued against structuremapping because it does not do a good job of handling lexical preferences reiter 1991 i nevertheless ended up using this technique when i moved from my phd research to the more applicationsoriented idas projectperhaps this is another example of engineering considerations overriding theoretical arguments terns such as the tactical component of mckeown text system mckeown 1985 did integrate lexical and syntactic processing in a single tactical generator also systemic systems that have been less driven by application needs than penman such as genesys fawcett and tucker 1990 have used integrated lexicogrammarsthere is psychological evidence that at least some lexical processing is separated from syntactic processing eg the patient mentioned in section 411 who was able to perform contentdetermination and syntactic generation but had a very restricted speaking vocabularyi think it is also very suggestive that humans have different learning patterns for content and function words the former are openclass and easily learned while the latter are closedclass and people tend to stick 
to the ones they learned as childrenthere is less evidence on the location of lexical choice in the psycholinguistic pipeline and on whether it is performed in one stage or distributed among several stagessurface generation has been used to mean many different things in the literaturei use it here to refer to the quotportion of the generation system that knows how grammatical relationships are actually expressed in english for example it is the surface generator that knows what function words and word order relationships are used in english for imperative interrogative and negated sentences it is the surface generator that knows which auxiliaries are required for the various english tenses and it is the surface generator that knows when pronominalization is syntactically required with no backtracking the grammars and grammar representations used by the systems examined are quite different but all systems process the grammars with a topdown algorithm that uses minimal if any backtrackingnone of the systems use the semantic headdriven generation algorithm shieber et at 1990 although this is probably the single bestknown algorithm for surface generation elhadad 1992 chapter 4 claims that such an algorithm is only necessary for systems that attempt to simultaneously perform both lexical choice and surface generation which none of the examined systems doperhaps more interestingly four of the five systems do not allow backtracking and the fifth fuf allows backtracking but does not seem to use it much during surface generation this is interesting since backtracking is usually regarded as an essential component of unificationbased generation approaches it is certainly used in the semanticheaddriven algorithm and in the text generator mckeown 1985from a psycholinguistic perspective many people have argued that human language production is incremental which means that of necessity it cannot include much backtrackingthe gardenpath phenomena shows that there are limits to how much syntactic backtracking people people perform during language understandingthis evidence is of course suggestive rather than definitive it seems likely that there are limitations on how much backtracking humans will perform during syntactic processing but there is no hard proof of this these modules will not be further examined here mainly because little information is given in the papers on the details of how morphology and formatting are implementedi would like to conclude with a perhaps controversial personal opinionthere have been many cases where nl generation researchers have claimed that a certain linguistic phenomena is best handled by a certain architectureeven if this is true however if it turns out that adopting this architecture will substantially complicate the design of the overall generation system and that the most common cases of the phenomena of interest can be adequately handled by adding a few heuristics to the appropriate stage of a simpler architecture then the engineeringoriented nl worker must ask him or herself if the benefits of the proposed architecture truly outweigh its costsfor instance one cannot simply argue that an integrated architecture is superior to a pipeline because it is better suited to handling certain kinds of pronominalization it is also necessary to evaluate the engineering cost of shifting to an integrated architecture and determine if for example better overall performance for the amount of engineering resources available could be obtained by keeping the general pipeline 
architecture and instead investing some of the engineering resources aved by this decision into building more sophisticated heuristics into the pronominalization modulein doing so i believe that the nl engineer is coming close to the reasoning of the evolutionary process that created the human language systemevolution does not care about elegant declarative formalisms or proper handling of special cases evolution goal is to maximize performance in realworld situations while maintaining an architecture that can be easily tinkered with by future evolutionary processesin short evolution is an engineer not a mathematician4 it is thus perhaps not surprising if nl generation systems designed to be used in realworld applications end up with an architecture that seem to bear some resemblance to the architecture of the human language processor5 and future attempts to build applicationsoriented generation systems may end up giving us real insights into how language processing works in humans even if this is not the main purpose of these systemssimilarly psycholinguistic knowledge of how the human language generator works may suggest useful algorithms for nl engineers one such case is described in reiter and dale 1992crossfertilization between psycholinguistics and nl engineering will only arise however if the results of engineering analyses are reported in the research literature especially when they suggest going against some theoretical principleunfortunately to date the results of such analyses have alltoooften been regarded more as embarrassments than as valuable observations and hence have not been publishedi would like to conclude this paper by encouraging generation researchers to regard the results of engineering analyses to be as interesting and as important to the understanding of language as conventional linguistic analysesafter all as woods 1975 has pointed out while descriptive analyses of language can at best tell us what the brain does engineering analyses can potentially offer insights on why the brain functions as it doesi would like to thank jean carletta robert dale michael elhadad david mcdonald richard kittredge tanya korelsky chris mellish owen rambow and graeme ritchie for their very helpful comments on earlier versions of this workit goes without saying of course that the views represented are my own and that any factual errors are entirely my faultthis re4gould various popular books on evolutionary biology such as gould 1983 give an excellent feel for evolution as an engineercumhackers see also the interesting discussion of language and evolution in pinker 199450f course the best way to do something on a machine is often not the best way to do it in nature eg birds and airplanes use different mechanisms to flyon the other hand there does seem to be a remarkable congruence between effective vision processing strategies in animals and computers marr 1982one could also argue that since language is purely a product of the human mind any effective language processor is probably going to have to share some of the mind processing strategies search was mostly done while the author was at the university of edinburgh where he was supported by serc grant grf36750
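the consensus architecture described in this survey is essentially a one-way pipeline of modules. below is a minimal sketch with toy stub implementations of each module; none of this code comes from fuf, idas, joyce, penman or spokesman, and it only illustrates the module ordering and the fact that n modules connected in a one-way pipeline need only n-1 interfaces.

```python
def content_determination(goal):
    # map the system's input (a query, an intention, ...) onto a semantic
    # form, possibly annotated with rhetorical relations
    return {"proposition": goal, "rhetorical": None}

def sentence_planning(semantic_form):
    # choose content words and grammatical relations, group information
    # into clauses and sentences -> a "deep syntactic" specification
    return {"head": semantic_form["proposition"], "relations": []}

def surface_generation(deep_syntax):
    # realise word order, function words, agreement, etc.
    return str(deep_syntax["head"])

def morphology(surface):
    return surface            # english morphology: essentially trivial here

def formatting(text):
    return text               # optional layout / hypertext mark-up

PIPELINE = [content_determination, sentence_planning,
            surface_generation, morphology, formatting]

def generate(goal):
    # one-way pipeline: each module feeds only its successor, no feedback
    data = goal
    for module in PIPELINE:
        data = module(data)
    return data

print(generate("the pump is running"))   # -> 'the pump is running'
```

adding feedback would mean letting a later module pass information back to an earlier one, which is precisely the extra interface cost that the engineering argument above weighs against.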
W94-0319
has a consensus nl generation architecture appeared, and is it psycholinguistically plausible? i survey some recent applications-oriented nl generation systems and claim that despite very different theoretical backgrounds these systems have a remarkably similar architecture in terms of the modules they divide the generation process into, the computations these modules perform, and the way the modules interact with each other. i also compare this consensus architecture among applied nlg systems with psycholinguistic knowledge about how humans speak, and argue that at least some aspects of the consensus architecture seem to be in agreement with what is known about human language production, despite the fact that psycholinguistic plausibility was not in general a goal of the developers of the surveyed systems. most generation systems pipeline pragmatic, semantic, lexical and syntactic decisions. we show that psycholinguistic and engineering approaches often result in systems which are similar in crucial respects
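one of the sentence-planning tasks surveyed above, lexical choice by structure-mapping, can also be sketched compactly. the sister / part-of pairings are the survey's own examples; the data structures and the single-pass, first-acceptable-reduction strategy are simplifying assumptions of mine, not code from any surveyed system.

```python
# toy dictionary: semantic-net fragments -> linguistic resources
LEXICAL_DICTIONARY = {
    ("sibling", ("sex", "female")): {"lexeme": "sister", "category": "noun"},
    ("part-of",):                   {"relation": "possessive"},
}

def structure_map(semantic_net):
    """Replace each semantic fragment with the linguistic resource the
    dictionary pairs it with, until the whole net has been recoded
    (here: one pass over a flat list of fragments)."""
    linguistic = []
    for fragment in semantic_net:
        resource = LEXICAL_DICTIONARY.get(fragment)
        if resource is None:
            raise ValueError(f"no linguistic resource for {fragment!r}")
        linguistic.append(resource)
    return linguistic

print(structure_map([("sibling", ("sex", "female")), ("part-of",)]))
# [{'lexeme': 'sister', 'category': 'noun'}, {'relation': 'possessive'}]
```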
unsupervised learning of disambiguation rules for part of speech tagging in this paper we describe an unsupervised learning algorithm for automatically training a rulebased part of speech tagger without using a manually tagged corpus we compare this algorithm to the baumwelch algorithm used for unsupervised training of stochastic taggers next we show a method for combining unsupervised and supervised rulebased training algorithms to create a highly accurate tagger using only a small amount of manually tagged text there has recently been a great deal of work exploring methods for automatically training part of speech taggers as an alternative to laboriously handcrafting rules for tagging as was done in the past klein and simmons 1963 harris 1962almost all of the work in the area of automatically trained taggers has explored markovmodel based part of speech tagging jelinek 1985 church 1988 derose 1988 demarcken 1990 cutting et al 1992 kupiec 1992 charniak et al 1993 weischedel et al 1993 schutze and singer 1994 lin et al 1994 elworthy 1994 merialdo 199512 for a markovmodel based tagger training consists of learning both lexical probabilities and contextual probabilities once trained a sentence can be tagged by searching for the tag sequence that maximizes the product of lexical and contextual probabilitiesthe most accurate stochastic taggers use estimates of lexical and contextual probabilities extracted from large manually annotated corpora it is possible to use unsupervised learning to train stochastic taggers without the need for a manually annotated corpus by using the baumwelch algorithm baum 1972 jelinek 1985 cutting et al 1992 kupiec 1992 elworthy 1994 merialdo 19951this algorithm works by iteratively adjusting the lexical and contextual probabilities to increase the overall probability of the training corpusif no prior knowledge is available probabilities are initially either assigned randomly or evenly distributedalthough less accurate than the taggers built using manually annotated corpora the fact that they can be trained using only a dictionary listing the allowable parts of speech for each word and not needing a manually tagged corpus is a huge advantage in many situationsalthough a number of manually tagged corpora are available training on a corpus of one type and then applying the tagger to a corpus of a different type usually results in a tagger with low accuracy weischedel et al 1993therefore if tagged text is needed in training this would require manually tagging text each time the tagger is to be applied to a new language and even when being applied to a new type of textin brill 1992 brill 1994 a rulebased part of speech tagger is described which achieves highly competitive performance compared to stochastic taggers and captures the learned knowledge in a set of simple deterministic rules instead of a large table of statisticsin addition the learned rules can be converted into a deterministic finite state transducertagging with this finite state transducer requires n steps to tag a sequence of length n independent of the number of rules and results in a part of speech tagger ten times faster than the fastest stochastic tagger roche and schabes 1995one weakness of this rulebased tagger is that no unsupervised training algorithm has been presented for learning rules automatically without a manually annotated corpusin this paper we present such an algorithmwe describe an algorithm for both unsupervised and weakly supervised training of a rulebased part of speech tagger and 
compare the performance of this algorithm to that of the baumwelch algorithmthe rulebased tagger is based on a learning algorithm called transformationbased errordriven learningtransformationbased errordriven learning has been applied to a number of natural language problems including part of speech tagging prepositional phrase attachment disambiguation speech generation and syntactic parsing brill 1992 brill 1994 ramshaw and marcus 1994 roche and schabes 1995 brill and resnik 1994 huang et al 1994 brill 1993a brill 1993bfigure 1 illustrates the learning processfirst unannotated text is passed through an initialstate annotatorthe initialstate annotator can range in complexity from assigning random structure to assigning the output of a sophisticated manually created annotatoronce text has been passed through the initialstate annotator it is then compared to the truth as specified in a manually annotated corpus and transformations are learned that can be applied to the output of the initial state annotator to make it better resemble the truthin all of the applications explored to date the following greedy search is applied at each iteration of learning the transformation is found whose application results in the highest score that transformation is then added to the ordered transformation list and the training corpus is updated by applying the learned transformationto define a specific application of transformationbased learning one must specify the following once an ordered list of transformations is learned new text can be annotated by first applying the initial state annotator to it and then applying each of the learned transformations in orderin transformationbased part of speech tagging3 all words are initially tagged with their most likely tag as indicated in the training corpusbelow are some of the transformation templates used by the learner4 change tag a to tag b when percentage of all possible transformations when searching for the best onean example of a learned transformation is change the tag of a word from verb to noun if the previous word is a determinerif the word race occurs more frequently as a verb than as a noun in the training corpus the initial state annotator will mistag this word as a verb in the sentence the race was very excitingthe above transformation will correct this tagging errorit was shown in brill 1994 that the transformationbased tagger achieves a high rate of tagging accuracythe transformationbased tagger captures its learned information in a set of simple rules compared to the many thousands of opaque probabilities learned by markovmodel based taggers6 supervised training is feasible when one has access to a large manually tagged training corpus from the same domain as that to which the trained tagger will be appliedwe next explore unsupervised and weakly supervised training as a practical alternative when the necessary resources are not available for supervised trainingin supervised training the corpus is used for scoring the outcome of applying transformations in order to find the best transformation in each iteration of learningin order to derive an unsupervised version of the learner an objective function must be found for training that does not need a manually tagged corpuswe begin our exploration providing the training algorithm with a minimal amount of initial knowledge namely knowing the allowable tags for each word and nothing else7 the relative likelihoods of tags for words is not known nor is any information about which tags are likely to 
appear in which contextsthis would correspond to the knowledge that could be extracted from an online dictionary or through morphological and distributional analysisthe unsupervised rule learning algorithm is based on the following simple ideagiven the sentence the can will be crushed with no information beyond the dictionary entry for the word can the best we can do is randomly guess between the possible tags for can in this contexthowever using an unannotated corpus and a dictionary it could be discovered that of the words that appear after the in the corpus that have only one possible tag listed in the dictionary nouns are much more common than verbs or modalsfrom this the following rule could be learned change the tag of a word from to noun if the previous word is theto fully define the learner we must specify the three components of the learner the initial state annotator the set of transformation templates and the scoring criterioninitial state annotator the unsupervised learner begins with an unannotated text corpus and a dictionary listing words and the allowable part of speech tags for each wordthe tags are not listed in any particular orderthe initial state annotator tags each word in the corpus with a list of all allowable tagsbelow is an example of the initialstate tagging of a sentence from the penn treebank marcus et al 1993 where an underscore is to be read as or8 rivaljj_nnp gangsnns havevb_vbp turnedvbd_vbn citiesnns intoin combatnn_vb zonesnns transformation templates the learner currently has four transformation templatesthey are change the tag of a word from x to y if transformations are used differently in the unsupervised learner than in the supervised learnerhere a transformation will reduce the uncertainty as to the correct tag of a word in a particular context instead of changing one tag to anotherso all learned transformations will have the form change the tag of a word from x to y in context c where x is a set of two or more part of speech tags and y is a single part of speech tag such that y e xbelow we list some transformations that were actually learned by the systemchange the tag from nn_vb_vbp to vbp if the previous tag is nns from nn_vb to vb if the previous tag is md from jj_nnp to jj if the following tag is nns scoring criterion when using supervised transformationbased learning to train a part of speech tagger the scoring function is just the tagging accuracy that results from applying a transformationwith unsupervised learning the learner does not have a gold standard training corpus with which accuracy can be measuredinstead we can try to use information from the distribution of unambiguous words to find reliable disambiguating contextsin each learning iteration the score of a transformation is computed based on the current tagging of the training setrecall that this is completely unsupervisedinitially each word in the training set is tagged with all tags allowed for that word as indicated in the dictionaryin later learning iterations the training set is transformed as a result of applying previously learned transformationsto score the transformation change the tag of a word from x to y in context c where y e x we do the followingfor each tag z e x where freq is the number of occurrences of words unambiguously tagged with tag y in the corpus freq is the number of occurrences of words unambiguously tagged with tag z in the corpus and incontext is the number of times a word unambiguously tagged with tag z occurs in context c in the training corpus9 then 
the score for the transformation change the tag of a word from x to y in context cis incontext freqi freq incontext a good transformation for removing the part of speech ambiguity of a word is one for which one of the possible tags appears much more frequently as measured by unambiguously tagged words than all others in the context after adjusting for the differences in relative frequency between the different tagsthe objective function for this transformation measures this by computing the difference between the number of unambiguous instances of tag y in context c and the number of unambiguous instances of the most likely tag r in context c where r e x r y adjusting for relative frequencyin each learning iteration the learner searches for the transformation which maximizes this functionlearning stops when no positive scoring transformations can be foundto test the effectiveness of the above unsupervised learning algorithm we ran a number of experiments using two different corpora and part of speech tag sets the penn treebank wall street journal corpus marcus et al 19931 and the original brown corpus francis and kucera 1982first a dictionary was created listing all possible tags for each word in the corpusthis means that the test set contains no unknown wordswe have set up the experiments in this way to facilitate comparisons with results given in other papers where the same was donean example of a context is the previous tag is a determinerin this experiment a training set of 120000 words and a separate test set of 200000 words were usedwe measure the accuracy of the tagger by comparing text tagged by the trained tagger to the gold standard manually annotated corpusin the case where the tag of a word is not fully disambiguated by the tagger a single tag is randomly chosen from the possible tags and this tag is then compared to the gold standardinitial state tagging accuracy on the training set is 907after learning 1151 transformations training set accuracy increases to 950initial state tagging accuracy on the test set is also 907accuracy increases to 951 after applying the learned transformationsfigure 2 shows test set tagging accuracy as a function of transformation numberin figure 3 we plot the difference between training and test set accuracies after the application of each transformation including a smoothed curve10 notice that there is no overtraining the difference in accuracies on training and test set remain within a very narrow range throughout with test set accuracy exceeding training set accuracy by a small marginovertraining did not occur when using the original brown corpus eitherwhen training a stochastic tagger using the baumwelch algorithm overtraining often does occur merialdo 1995 elworthy 1994 requiring an additional heldout training corpus for determining an appropriate number of training iterationsin this experiment we also used a training set of 120000 words and a separate test set of 200000 wordsinitial state tagging accuracy on the training set is 898after learning 1729 transformations and applying them to the training set accuracy increases to 956initial state tagging accuracy on the test set is 899 with accuracy increasing to 956 after applying the learned transformationsexpanding the training set to 350000 words and testing on the same test set accuracy increases to 960all unsupervised learning results are summarized in table 1in merialdo 1995 tagging experiments are described training a tagger using the baumwelch algorithm with a dictionary constructed as 
described above and an untagged corpusexperiments were run on associated press articles which were manually tagged at the university of lancasterwhen training on one million words of text test set accuracy peaks at 866in elworthy 1994 similar experiments were runthere a peak accuracy of 920 was attained using the lob corpusquot using the penn treebank corpus a peak accuracy of 836 resultedthese results are significantly lower than the results achieved using unsupervised transformationbased learningin kupiec 1992 a novel twist to the baumwelch algorithm is presented where instead of having contextual probabilities for a tag following one or more previous tags words are pooled into equivalence classes where all words in an equivalence class have the same set of allowable part of speech assignmentsusing these equivalence classes greatly reduces the number of parameters that need to be estimatedkupiec ran experiments using the original brown corpuswhen training on 440000 words test set accuracy was 957 excluding punctuationas shown above test set accuracy using the transformationbased algorithm described in this paper gives an accuracy of 960 when trained on 350000 wordsexcluding punctuation this accuracy is 956note that since the baumwelch algorithm frequently overtrains a tagged text would be necessary to figure out what training iteration gives peak performancewe have explored a method of training a transformationbased tagger when no information is known other than a list of possible tags for each wordnext we explore weakly supervised learning where a small amount of human intervention is permittedwith markovmodel based taggers there have been two different methods proposed for adding knowledge to a tagger trained using the baumwelch algorithmone method is to manually alter the tagging model based on human error analysisthis method is employed in kupiec 1992 cutting et al 1992another approach is to obtain the initial probabilities for the model directly from a manually tagged corpus instead of using random or evenly distributed initial probabilities and then adjust these probabilities using the baumwelch algorithm and an untagged corpusthis approach is described in merialdo 1995 elworthy 1994a tagged corpus can also be used to improve the accuracy of unsupervised transformationbased learninga transformationbased system is a processor and not a classifierbeing a processor it can be applied to the output of any initial state annotatoras mentioned above in the supervised transformationbased tagger described in brill 1994 each word is initially tagged with its most likely taghere we use the trained unsupervised part of speech tagger as the initial state annotator for a supervised learnertransformations will then be learned to fix errors made by the unsupervised learneras shown in figure 4 unannotated text is first passed through the unsupervised initialstate annotator where each word is assigned a list of all allowable tagsthe output of this tagger is then passed to the unsupervised learner which learns an ordered list of transformationsthe initialstate annotator and learned unsupervised transformations are then applied to unannotated text which is then input to the supervised learner along with the corresponding manually tagged corpusthe supervised learner learns a second ordered list of transformationsonce the system is trained fresh text is tagged by first passing it through the unsupervised initial state annotator then applying each of the unsupervised transformations in order and then 
applying each of the supervised transformations in orderthe advantage of combining unsupervised and supervised learning over using supervised 11elworthy 1994 quotes accuracy on ambiguous words which we have converted to overall accuracy learning alone is that the combined approach allows us to utilize both tagged and untagged text in trainingsince manually tagged text is costly and timeconsuming to generate it is often the case that when there is a corpus of manually tagged text available there will also be a much larger amount of untagged text available a resource not utilized by purely supervised training algorithmsone significant difference between this approach and that taken in using the baumwelch algorithm is that here the supervision influences the learner after unsupervised training whereas when using tagged text to bias the initial probabilities for baumwelch training supervision influences the learner prior to unsupervised trainingthe latter approach has the potential weakness of unsupervised training erasing what was learned from the manually annotated corpusfor example in merialdo 1995 extracting probability estimates from a 50000 word manually tagged corpus gave a test set accuracy of 954after applying ten iterations of the baumwelch algorithm accuracy dropped to 944using the transformations learned in the above unsupervised training experiment run on the penn treebank we apply these transformations to a separate training corpusnew supervised transformations are then learned by comparing the tagged corpus that results from applying these transformations with the correct tagging as indicated in the manually annotated training corpusin table 2 we show tagging accuracy on a separate test set using different sizes of manually annotated corporain each case a 120000 word untagged corpus was used for initial unsupervised trainingthis table also gives results from supervised training using the annotated corpus without any prior unsupervised training12 in all cases the combined training outperformed the purely supervised training at no added cost in terms of annotated training textconclusions in this paper we have presented a new algorithm for unsupervised training of a rulebased part of speech taggerthe rulebased tagger trained using this algorithm significantly outperforms the traditional method of applying the baumwelch algorithm for unsupervised training of a stochastic tagger and achieves comparable performance to a classbased baumwelch training algorithmin addition we have shown that by combining unsupervised and supervised learning we can obtain a tagger that significantly outperforms a tagger trained using purely supervised learningwe are encouraged by these results and expect an improvement in performance when the number of transformation templates provided to the unsupervised learner increases beyond the four currently usedwe have also demonstrated that overtraining a problem in baumwelch training is not a problem in transformationbased learning
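the unsupervised scoring criterion described above compares how often unambiguous words take the candidate tag in the given context against the strongest competing tag, after adjusting for the tags' overall relative frequencies. the sketch below reconstructs that formula from the prose (the text's own notation lost its subscripts in extraction); the toy counts and tag names are invented for illustration.

```python
def transformation_score(X, Y, incontext, freq):
    """Score for 'change the tag of a word from X to Y in context C'.

    X         -- set of candidate tags (Y must be in X)
    incontext -- incontext(Z, C): unambiguous occurrences of tag Z in C
    freq      -- freq(Z): unambiguous occurrences of tag Z overall
    """
    competitors = [z for z in X if z != Y]
    # R: the competing tag that, frequency-adjusted, is most common in C
    R = max(competitors,
            key=lambda z: freq[Y] / freq[z] * incontext.get(z, 0))
    return incontext.get(Y, 0) - freq[Y] / freq[R] * incontext.get(R, 0)

# toy counts for the context "previous word is 'the'"
freq = {"NN": 30000, "VB": 20000, "MD": 5000}
incontext = {"NN": 900, "VB": 60, "MD": 5}
print(transformation_score({"NN", "VB", "MD"}, "NN", incontext, freq))  # 810.0
```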
W95-0101
unsupervised learning of disambiguation rules for part of speech tagging. in this paper we describe an unsupervised learning algorithm for automatically training a rule-based part of speech tagger without using a manually tagged corpus. we compare this algorithm to the baum-welch algorithm used for unsupervised training of stochastic taggers. next we show a method for combining unsupervised and supervised rule-based training algorithms to create a highly accurate tagger using only a small amount of manually tagged text. we present a rule-based part-of-speech tagger trained by unsupervised learning on an untagged corpus. we propose a method to acquire context-dependent pos disambiguation rules, and create an accurate tagger even from a very small annotated text by combining supervised and unsupervised learning
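a minimal sketch of the tagging side of this approach: the dictionary-based initial-state annotator followed by an ordered list of learned context transformations. the rule format and matching condition here are simplified assumptions (the real learner uses four transformation templates and, in the combined setting, applies supervised transformations afterwards).

```python
def initial_state(words, dictionary):
    # tag every word with the full set of tags the dictionary allows
    return [(w, set(dictionary[w])) for w in words]

def apply_rules(tagged, rules):
    """Apply an ordered list of transformations (from_tags, to_tag, prev_tag):
    if a word's current tag set equals from_tags and the previous word is
    unambiguously prev_tag, reduce the word's tag set to {to_tag}."""
    for from_tags, to_tag, prev_tag in rules:
        for i in range(1, len(tagged)):
            word, tags = tagged[i]
            _, prev_tags = tagged[i - 1]
            if tags == from_tags and prev_tags == {prev_tag}:
                tagged[i] = (word, {to_tag})
    return tagged

dictionary = {"the": ["DT"], "can": ["NN", "VB", "MD"], "rusted": ["VBD"]}
rules = [({"NN", "VB", "MD"}, "NN", "DT")]   # like the learned 'after the' rule
print(apply_rules(initial_state(["the", "can", "rusted"], dictionary), rules))
# 'can' is disambiguated to NN: [('the', {'DT'}), ('can', {'NN'}), ('rusted', {'VBD'})]
```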
prepositional phrase attachment through a backedoff model recent work has considered corpusbased or statistical approaches to the problem of prepositional attachment ambiguity typically ambiguous verb phrases of the form v p np2 through a model which considers values of the four head words npattach vpattach work by ratnaparkhi reynar and roukos rrr94 and brill and resnik br94 has considered corpusbased approaches to this problem using a set of examples to train a model which is then used to make attachment decisions on test databoth papers describe methods which look at the four head words involved in the attachment the vp head the first np head the preposition and the second np head this paper proposes a new statistical method for ppattachment disambiguation based on the four head wordsthe training and test data were supplied by ibm being identical to that used in rrr94examples of verb phrases containing a sequence had been taken from the wall street journal treebank msm93for each such vp the head verb first head noun preposition and second head noun were extracted along with the attachment decision for example the verb phrase would give the quintuple 0 joined board as director the elements of this quintuple will from here on be referred to as the random variables a v ni p and n2in the above verb phrase a 0 v joined ni board p as and n2 directorthe data consisted of training and test files of 20801 and 3097 quintuples respectivelyin addition a development set of 4039 quintuples was also suppliedthis set was used during development of the attachment algorithm ensuring that there was no implicit training of the method on the test set itselfa ppattachment algorithm must take each quadruple in test data and decide whether the attachment variable a 0 or 1the accuracy of the algorithm is then the percentage of attachments it gets correct on test data using the a values taken from the treebank as the reference setthe probability of the attachment variable a being 1 or 0 is a probability p which is conditional on the values of the words in the quadruplein general a probabilistic algorithm will make an estimate 75 of this probability the decision can then be made using the test if this is true the attachment is made to the noun if not then it is made to the verbwhen evaluating an algorithm it is useful to have an idea of the lower and upper bounds on its performancesome key results are summarised in the table belowall results in this section are on the ibm training and test data with the exception of the two average human resultsalways noun attachment means attach to the noun regardless of most likely for each preposition means use the attachment seen most often in training data for the preposition seen in the test quadruplethe human performance results are taken from rrr94 and are the average performance of 3 treebanking experts on a set of 300 randomly selected test events from the wsj corpus first looking at the four head words alone then using the whole sentencea reasonable lower bound seems to be 722 as scored by the most likely for each preposition methodan approximate upper bound is 882 it seems unreasonable to expect an algorithm to perform much better than a humanwe will use the symbol f to denote the number of times a particular tuple is seen in training datafor example f is the number of times the quadruple is seen with a noun attachmentcounts of lower order tuples can also be made for example 1 is the number of times is seen with noun attachment in training data f is the number of 
times is seen with either attachment and any value of ni and p a maximum likelihood method would use the training data to give the following estimation for the conditional probability unfortunately sparse data problems make this estimate uselessa quadruple may appear in test data which has never been seen in training data ie f 0the above estimate is undefined in this situation which happens extremely frequently in a large vocabulary domain such as wsjeven if f 0 it may still be very low and this may make the above mle estimate inaccurateunsmoothed mle estimates based on low counts are notoriously bad in similar problems such as ngram language modeling gc90however later in this paper it is shown that estimates based on low counts are surprisingly useful in the ppattachment problemhindle and rooth hr93 describe one of the first statistical approaches to the prepositional phrase attachment problemover 200000 triples were extracted from 13 million words of ap news storiesthe attachment decisions for these triples were unknown so an unsupervised training method was used two human judges annotated the attachment decision for 880 test examples and the method performed at 80 accuracy on these casesnote that it is difficult to compare this result to results on wall street journal as the two corpora may be quite differentthe wall street journal treebank msm93 enabled both rrr94 and br94 to extract a large amount of supervised training material for the problemboth of these methods consider the second noun n2 as well as v n1 and p with the hope that this additional information will improve resultsbr94 use 12000 training and 500 test examplesa greedy search is used to learn a sequence of transformations which minimise the error rate on training dataa transformation is a rule which makes an attachment decision depending on up to 3 elements of the quadruplea further experiment incorporated wordclass information from wordnet into the model by allowing the transformations to look at classes as well as the wordsthe method gave 808 accuracy with words only 818 with words and semantic classes and they also report an accuracy of 758 for the metric of hr93 on this datatransformations score 8191 on the ibm data used in this paperpersonal communication from brillrrr94 use the data described in section 21 of this paper 20801 training and 3097 test examples from wall street journalthey use a maximum entropy model which also considers subsets of the quadrupleeach subtuple predicts noun or verb attachment with a weight indicating its strength of prediction the weights are trained to maximise the likelihood of training datafor example might have a strong weight for noun attachment while would have a strong weight for verb attachmentrrr94 also allow the model to look at class information this time the classes were learned automatically froin a corpusresults of 777 and 816 are reportedcrucially they ignore lowcount events in training data by imposing a frequency cutoff somewhere between 3 and 5katz87 describes backedoff ngram word models for speech recognitionthere the task is to estimate the probability of the next word in a text given the preceding wordsthe nile estimate of this probability would be but again the denominator f will frequently be zero especially for large n the backedoff estimate is a method of combating the sparse data problemit is defined recursively as follows else backingoff continues in the same waythe idea here is to use mle estimates based on lower order ngrams if counts are not high enough to 
make an accurate estimate at the current levelthe cut off frequencies are thresholds determining whether to backoff or not at each level counts lower than ci at stage i are deemed to be too low to give an accurate estimate so in this case backingoff continues are normalisation constants which ensure that conditional probabilities sum to onenote that the estimation of is analogous to the estimation of p and the above method can therefore also be applied to the ppattachment problemfor example a simple method for estimation of nl p n2 would go from mle estimates of ally n1 p n2 to nl p to j3 to polo to pmhowever a crucial difference between the two problems is that in the ngram task the words w1 to tun are sequential giving a natural order in which backing off takes place from awnlwi w2tun_i to p to p and so onthere is no such sequence in the ppattachment problem and because of this there are four possible triples when backing off from quadruples and and six possible pairs when backing off from triples and a key observation in choosing between these tuples is that the preposition is particularly important to the attachment decisionfor this reason only tuples which contained the preposition were used in backed off estimates this reduces the problem to a choice between 3 triples and 3 pairs at each respective stagesection 62 describes experiments which show that tuples containing the preposition are much better indicators of attachmentthe following method of combining the counts was found to work best in practice note that this method effectively gives more weight to tuples with high overall countsanother obvious method of combination a simple average2 gives equal weight to the three tuples regardless of their total counts and does not perform as wellthe cutoff frequencies must then be chosena surprising difference from language modeling is that a cutoff frequency of 0 is found to be optimum at all stagesthis effectively means however low a count is still use it rather than backing off a level2ega simple average for triples would be defined asthe figure below shows the results for the method on the 3097 test sentences also giving the total count and accuracy at each of the backedoff stagesin an effort to reduce sparse data problems the following processing was run over both test and training data these modifications are similar to those performed on the corpus used by br94the result using this modified corpus was 845 an improvement of 04 on the previous resultresults from rrr94 br94 and the backedoff method are shown in the table below4all results are for the ibm datathese figures should be taken in the context of the lower and upper bounds of 722882 proposed in section 23on the surface the method described in 11r93 looks very similar to the backedoff estimatefor this reason the two methods deserve closer comparisonhindle and rooth used a partial parser to extract head nouns from a corpus together with a preceding verb and a following preposition giving a table of triplesan iterative unsupervised method was then used to decide between noun and verb attachment for each triplethe decision was made as follows if we ignore n2 then the ibm data is equivalent to hindle and rooth triples with the advantage of the attachment decision being known allowing a supervised algorithmthe test used in hr93 can then be stated as follows in our notation this is effectively a comparison of the maximum likelihood estimates of ill and a different measure from the backedoff estimate which gives 73the backedoff method 
based on just the f and anil p counts would be if p 05 then choose noun attachment else choose verb attachment where f fi f f an experiment was implemented to investigate the difference in performance between these two methodsthe test set was restricted to those cases where f 0 f 0 and hindle and rooth method gave a definite decisionthis gave 1924 test caseshindle and rooth method scored 821 accuracy on this set whereas the backedoff measure scored 865 a possible criticism of the backedoff estimate is that it uses low count events without any smoothing which has been shown to be a mistake in similar problems such as ngram language modelsin particular quadruples and triples seen in test data will frequently be seen only once or twice in training dataan experiment was made with all counts less than 5 being put to zero6 effectively making the algorithm ignore low count eventsin rrr94 a cutoff between 3 and 5 is used for all eventsthe training and test data were both the unprocessed original data setsthe results were as follows we have excluded tuples which do not contain a preposition from the modelthis section gives results which justify thisthe table below gives accuracies for the subtuples at each stage of backingoffthe accuracy figure for a particular tuple is obtained by modifying the algorithm in section 41 to use only information from that tuple at the appropriate stagefor example for stage 2 would be modified to read if f 0 all other stages in the algorithm would be unchangedthe accuracy figure is then the percentage accuracy on the test cases where the counts were usedthe development set with no morphological processing was used for these testsat each stage there is a sharp difference in accuracy between tuples with and without a prepositionmoreover if the 14 tuples in the above table were ranked by accuracy the top 7 tuples would be the 7 tuples which contain a prepositionthe backedoff estimate scores appreciably better than other methods which have been tested on the wall street journal corpusthe accuracy of 845 is close to the human performance figure of 88 using the 4 head words alonea particularly surprising result is the significance of low count events in training datathe algorithm has the additional advantages of being conceptually simple and computationally inexpensive to implementthere are a few possible improvements which may raise performance furtherfirstly while we have shown the importance of lowcount events some kind of smoothing may improve performance further this needs to be investigatedwordclasses of semantically similar words may be used to help the sparse data problem both rrr94 and br94 report significant improvements through the use of wordclassesfinally more training data is almost certain to improve results
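To make the procedure above concrete, the following is a minimal sketch of training and attachment with the backed-off estimate, reconstructed from the description in this section rather than taken from the authors' implementation. The default of noun attachment when even the preposition is unseen and the tie-break at 0.5 are assumptions, and no morphological preprocessing is included.

```python
from collections import defaultdict

noun = defaultdict(float)   # sub-tuple -> count of noun attachments (a = 1)
total = defaultdict(float)  # sub-tuple -> count with either attachment

def subtuples(v, n1, p, n2):
    """Back-off levels, keeping only the tuples that contain the preposition."""
    return [
        [("vn1pn2", v, n1, p, n2)],
        [("vn1p", v, n1, p), ("vpn2", v, p, n2), ("n1pn2", n1, p, n2)],
        [("vp", v, p), ("n1p", n1, p), ("pn2", p, n2)],
        [("p", p)],
    ]

def train(examples):
    """examples: iterable of (v, n1, p, n2, a), with a = 1 for noun attachment."""
    for v, n1, p, n2, a in examples:
        for level in subtuples(v, n1, p, n2):
            for t in level:
                total[t] += 1
                noun[t] += a

def attach(v, n1, p, n2):
    """Return 1 (noun attachment) or 0 (verb attachment)."""
    for level in subtuples(v, n1, p, n2):
        denom = sum(total[t] for t in level)
        if denom > 0:  # cut-off frequency of 0: use any level with a non-zero count
            return 1 if sum(noun[t] for t in level) / denom >= 0.5 else 0
    return 1  # assumed default when even the preposition is unseen

train([("joined", "board", "as", "director", 0),
       ("ate", "pizza", "with", "anchovies", 1)])
print(attach("joined", "committee", "as", "member"))  # 0: backs off to pairs containing "as"
```

Note that at each back-off level the counts of the sub-tuples are summed before dividing, which is the combination reported above to work better than a simple average because it gives more weight to tuples with high overall counts.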
W95-0103
prepositional phrase attachment through a backed-off model
recent work has considered corpus-based or statistical approaches to the problem of prepositional phrase attachment ambiguity. typically, ambiguous verb phrases of the form v np1 p np2 are resolved through a model which considers values of the four head words. this paper shows that the problem is analogous to n-gram language models in speech recognition, and that one of the most common methods for language modeling, the backed-off estimate, is applicable. results of 84.5% accuracy on wall street journal data are obtained using this method. we use a back-off model which enables us to take low-frequency effects into account on the ratnaparkhi dataset. we introduce modifications to the ratnaparkhi et al. dataset meant to combat data sparsity and use the modified version to train the backed-off model
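As a complement to the backed-off sketch above, the evaluation metric and the "most likely for each preposition" lower bound used in this paper can be written in a few lines. Function names are illustrative, and the fallback to noun attachment for prepositions unseen in training is an assumption.

```python
from collections import Counter, defaultdict

def train_baseline(examples):
    """Learn, for each preposition, the attachment seen most often in training data."""
    counts = defaultdict(Counter)              # preposition -> Counter({1: .., 0: ..})
    for _v, _n1, p, _n2, a in examples:
        counts[p][a] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

def accuracy(predict, test):
    """Fraction of test quadruples whose predicted attachment matches the treebank value."""
    return sum(predict(v, n1, p, n2) == a for v, n1, p, n2, a in test) / len(test)

baseline = train_baseline([("joined", "board", "as", "director", 0),
                           ("ate", "pizza", "with", "anchovies", 1)])
# Unseen prepositions default to noun attachment here (an assumption, not from the paper).
print(accuracy(lambda v, n1, p, n2: baseline.get(p, 1),
               [("eats", "fish", "with", "chopsticks", 1),
                ("joined", "firm", "as", "partner", 0)]))  # 1.0
```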
a bayesian hybrid method for contextsensitive spelling correction two classes of methods have been shown to be useful for resolving lexical ambiguity the first relies on the presence of particular words within some distance of the ambiguous target word the second uses the pattern of words and partofspeech tags around the target word these methods have complementary coverage the former captures the lexical quotatmospherequot while the latter captures local syntax yarowsky has exploited this complementarity by combining the two methods using decision lists the idea is to pool the evidence provided by the component methods and to then solve a target problem by applying the single strongest piece of evidence whatever type it happens to be this paper takes yarowsky work as a starting point applying decision lists to the problem of contextsensitive spelling correction decision lists are found by and large to outperform either component method however it is found that further improvements can be obtained taking into account not just the single strongest piece of evidence but available evidence a new hybrid method based on bayesian classifiers is presented for doing this and its performance improvements are demonstrated two classes of methods have been shown useful for resolving lexical ambiguitythe first tests for the presence of particular context words within a certain distance of the ambiguous target wordthe second tests for collocations patterns of words and partofspeech tags around the target wordthe contextword and collocation methods have complementary coverage the former captures the lexical quotatmospherequot while the latter captures local syntaxyarowsky 1994 has exploited this complementarity by combining the two methods using decision liststhe idea is to pool the evidence provided by the component methods and to then solve a target problem by applying the single strongest piece of evidence whatever type it happens to beyarowsky applied his method to the task of restoring missing accents in spanish and french and found that it outperformed both the method based on context words and one based on local syntaxthis paper takes yarowsky method as a starting point and hypothesizes that further improvements can be obtained by taking into account not only the single strongest piece of evidence but all the available evidencea method is presented for doing this based on bayesian classifiersthe work reported here was applied not to accent restoration but to a related lexical disambiguation task contextsensitive spelling correctionthe task is to fix spelling errors that happen to result in valid words in the lexicon for example i would like the chocolate cake for desert where dessert was misspelled as desertthis goes beyond the capabilities of conventional spell checkers which can only detect errors that result in nonwordswe start by applying a very simple method to the task to serve as a baseline for comparison with the other methodswe then apply each of the two component methods mentioned above context words and collocationswe try two ways of combining these components decision lists and bayesian classifierswe evaluate the above methods by comparing them with an alternative approach to spelling correction based on partofspeech trigramsthe sections below discuss the task of contextsensitive spelling correction the five methods we tried for the task and the evaluationthe final section draws some conclusionscontextsensitive spelling correction is the problem of correcting spelling errors that 
result in valid words in the lexiconsuch errors can arise for a variety of reasons including typos homonym confusions and usage errors these errors are not detected by conventional spell checkers as they only notice errors resulting in nonwordswe treat contextsensitive spelling correction as a task of word disambiguationthe ambiguity among words is modelled by confusion setsa confusion set c w1 wn means that each word wi in the set is ambiguous with each other word in the setthus if c desert dessert then when the spellingcorrection program sees an occurrence of either desert or dessert in the target document it takes it to be ambiguous between desert and dessert and tries to infer from the context which of the two it should bethis treatment requires a collection of confusion sets to start withthere are several ways to obtain such a collectionone is based on finding words in the dictionary that are one typo away from each other mays et at 199111 another finds words that have the same or similar pronunciationssince this was not the focus of the work reported here we simply took our confusion sets from the list of quotwords commonly confusedquot in the back of the random house unabridged dictionary flexner 19831a final point concerns the two types of errors a spellingcorrection program can make false negatives and false positives we will make the simplifying assumption that both kinds of errors are equally badin practice however false negatives are much worse as users get irritated by programs that badger them with bogus complaintshowever given the probabilistic nature of the methods that will be presented below it would not be hard to modify them to take this into accountwe would merely set a confidence threshold and report a suggested correction only if the probability of the suggested word exceeds the probability of the user original spelling by at least the threshold amountthe reason this was not done in the work reported here is that setting this confidence threshold involves a certain subjective factor our simplifying assumption allows us to measure performance objectively by the single parameter of prediction accuracythis section presents a progression of five methods for contextsensitive spelling correction baseline an indicator of quotminimal competencyquot for comparison with the other methods context words tests for particular words within k words of the ambiguous target word collocations tests for syntactic patterns around the ambiguous target word decision lists combines context words and collocations via decision lists bayesian classifiers combines context words and collocations via bayesian classifierseach method will be described in terms of its operation on a single confusion set c wi wn that is we will say how the method disambiguates occurrences of words w1 through wn from the contextthe methods handle multiple confusion sets by applying the same technique to each confusion set independentlyeach method involves a training phase and a test phasethe performance figures given below are based on training each method on the 1millionword brown corpus kutquotera and francis 1967 and testing it on a 34millionword corpus of wall street journal text marcus et al 1993the baseline method disambiguates words w1 through wn by simply ignoring the context and always guessing that the word should be whichever wi occurred most often in the training corpusfor instance if c desert dessert and desert occurred more often than dessert in the training corpus then the method will predict that every 
occurrence of desert or dessert in the test corpus should be changed to deserttable 1 shows the performance of the baseline method for 18 confusion setsthis collection of confusion sets will be used for evaluating the methods throughout the papereach line of the table gives the results for one confusion set the words in the confusion set the number of instances of any word in the confusion set in the training corpus and in the test corpus the word in the confusion set that occurred most often in the training corpus and the prediction accuracy of the baseline method for the test corpusprediction accuracy is the number of times the correct word was predicted divided by the total number of test casesfor example the members of the confusion set i me occurred 840 times in the test corpus the breakdown being 744 and 96 methe baseline method predicted every time and thus was right 744 times for a score of 744840 0886essentially the baseline method measures how accurately one can predict words using just their prior probabilitiesthis provides a lower bound on the performance we would expect from the other methods which use more than just the priorsone clue about the identity of an ambiguous target word comes from the words around itfor instance if the target word is ambiguous between desert and dessert and we see words like arid sand and sun nearby this suggests that the target word should be deserton the other hand words such as chocolate and delicious in the context imply dessertthis observation is the basis for the method of context wordsthe idea is that each word wi in the confusion set will have a characteristic distribution of words that occur in its context thus to classify an ambiguous target word we look at the set of words around it and see which wi distribution they most closely followconfusion set no of no of most baseline training test frequent cases cases word whether weather 331 245 whether 0922 i me 6125 840 i 0886 its it is 1951 3575 its 0863 past passed 385 397 past 0861 than then 2949 1659 than 0807 being begin 727 449 being 0780 effect affect 228 162 effect 0741 your you are 1047 212 your 0726 number amount 588 429 number 0627 council counsel 82 83 council 0614 rise raise 139 301 rise 0575 between among 1003 730 between 0538 led lead 226 219 led 0530 except accept 232 95 except 0442 peace piece 310 61 peace 0393 there their they are 5026 2187 there 0306 principle principal 184 69 principle 0290 sight site cite 149 44 sight 0114 column gives the word in the confusion set that occurred most frequently in the training corpusthe quotbaselinequot column gives the prediction accuracy of the baseline system on the test corpusfollowing previous work gale et al 1994 we formulate the method in a bayesian frameworkthe task is to pick the word wi that is most probable given the context words ci observed within a kword window of the target wordthe probability for each wi is calculated using bayes rule as it stands the likelihood term p is difficult to estimate from training data we would have to count situations in which the entire context was previously observed around word wi which raises a severe sparsedata probleminstead therefore we assume that the presence of one word in the context is independent of the presence of any other wordthis let us us decompose the likelihood into a product gale et al 1994 provide evidence that this is in fact a reasonable approximationwe still have the problem however of estimating the individual p probabilities from our training corpusthe straightforward 
way would be to use a maximum likelihood estimate we would count m the total number of occurrences of wi in the training corpus and m the number of such occurrences for which ci occurred within k words and we would then take the ratio mimi2 unfortunately we may not have enough training data to get an accurate estimate this waygale et al 1994 address this problem by interpolating between two maximumlikelihood estimates one of p and one of pthe former measures the desired quantity but is subject to inaccuracy due to sparse data the latter provides a robust estimate but of a potentially irrelevant quantitygale et al interpolate between the two so as to minimize the overall inaccuracywe have pursued an alternative approach to the problem of estimating the likelihood termswe start with the observation that there is no need to use every word in the kword window to discriminate among the words in the confusion setif we do not have enough training data for a given word c to accurately estimate p for all w then we simply disregard c and base our discrimination on other more reliable evidencewe implement this by introducing a quotminimum occurrencesquot threshold trninit is currently set to 10we then ignore a context word c if where mi and mi are defined as abovein other words c is ignored if it practically never occurs within the context of any wi or if it practically always occurs within the context of every w in the former case we have insufficient data to measure its presence in the latter its absencebesides the reason of insufficient data a second reason to ignore a context word is if it does not help discriminate among the words in the confusion setfor instance if we are trying to decide between and me then the presence of the in the context probably does not helpby ignoring such words we eliminate a source of noise in our discrimination procedure as well as reducing storage requirements and run timeto determine whether a context word c is a useful discriminator we run a chisquare test fleiss 1981 to check for an association between the presence of c and the choice of word in the confusion setif the observed association is not judged to be significant3 then c is discardedthe significance level is currently set to 005figure 1 pulls together the points of the preceding discussion into an outline of the method of context wordsin the training phase it identifies a list of context words that are useful for discriminating among the words in the confusion setat run time it estimates the probability of each word in the confusion setit starts with the prior probabilities and multiplies them by the likelihood of each context word from its list that appears in the kword window of the target wordfinally it selects the word in the confusion set with the greatest probabilitythe main parameter to tune for the method of context words is k the halfwidth of the context windowprevious work yarowsky 1994 shows that smaller values of k work well for resolving local syntactic ambiguities while larger values are suitable for resolving semantic ambiguitieswe tried the values 3 6 12 and 24 on some practice confusion sets and found that k 3 generally did best indicating that most of the action for our task and confusion sets comes from local syntaxin the rest of this paper this value of k will be used2we are interpreting the condition quotc occurs within a kword window of tvquot as a binary feature either it happens or it does notthis allows us to handle context words in the same bayesian framework as will be used 
later for other binary features a more conventional interpretation is to take into account the number of occurrences of each c within the kword window and to estimate p accordinglyhowever either interpretation is valid as long as it is applied consistently that is both when estimating the likelihoods from training data and when classifying test casestraining phase table 2 shows the effect of varying k for our usual collection of confusion setsit can be seen that performance generally degrades as k increasesthe reason is that the method starts picking up spurious correlations in the training corpustable 4 gives some examples of the context words learned for the confusion set peace piece with k 24the context words corps united nations etc all imply peace and appear to be plausible on the other hand consider the context word how which allegedly also implies peaceif we look back at the training corpus for the supporting data for this word we find excerpts such as but oh how i do sometimes need just a moment of rest and peace no matter how earnest is our quest for guaranteed peace how best to destroy your peace there does not seem to be a necessary connection here between how and peace the correlation is probably spuriousalthough we are using a chisquare test expressly to filter out such spurious correlations we can only expect the test to catch 95 of them as mentioned above most of the legitimate context words show up for small k thus as k gets large the limited number of legitimate context words gets overwhelmed by the 5 of the spurious correlations that make it through our filterthe method of context words is good at capturing generalities that depend on the presence of nearby words but not their orderwhen order matters other more syntaxbased methods such as collocations and trigrams are appropriatein the work reported here the method of collocations was used to capture order dependenciesa collocation expresses a pattern of syntactic elements around the target wordwe allow two types of syntactic elements words and partofspeech tagsgoing back to the desert dessert example a collocation that would imply desert might be prep the this collocation would match the sentences travelers entering from the desert were confounded along with some guerrilla fighting in the desert two ladies who lay pinkly nude beside him in the desert matching partofspeech tags against the sentence is done by first tagging each word in the sentence with its set of possible partofspeech tags obtained from a dictionaryfor instance walk has the tag set ns v corresponding to its use as a singular noun and as a verb4 for a tag to match a word the tag must be a member of the word tag setthe reason we use tag sets instead of running a tagger on the sentence to produce unique tags is that taggers need to look at all words in the sentence which is impossible when the target word is taken to be ambiguous the method of collocations was implemented in much the same way as the method of context wordsthe idea is to discriminate among the words wi in the confusion set by identifying the collocations that tend to occur around each w an ambiguous target word is then classified by finding all collocations that match its contexteach collocation provides some degree of evidence for each word in the confusion setthis evidence is combined using hayes rulein the end the wi with the highest probability given the evidence is selecteda new complication arises for collocations however in that collocations unlike context words cannot be assumed 
independentconsider for example the following collocations for desert prep the in the the these collocations are highly interdependent we will say they conflictto deal with this problem we invoke our earlier observation that there is no need to use all the evidenceif two pieces of evidence conflict we simply eliminate one of them and base our decision on the rest of the evidencewe identify conflicts by the heuristic that two collocations conflict if they overlapthe overlapping portion is the factor they have in common and thus represents their lack of independencethis is only a heuristic because we could imagine collocations that do not overlap but still conflictnote incidentally that there can be at most two nonconflicting collocations for any decision one matching on the lefthand side of the target word and one on the righthaving said that we resolve conflicts between two collocations by eliminating one of them we still need to specify which oneour approach is to assign each one a strength just as yarowsky 1994 does in his hybrid method and to eliminate the one with the lower strengththis preserves the strongest nonconflicting evidence as the basis for our answerthe strength of a collocation reflects its reliability for decisionmaking a further discussion of strength is deferred to section 34figure 2 ties together the preceding discussion into an outline of the method of collocationsthe method is described in terms of quotfeaturesquot rather than quotcollocationsquot to reflect its full generality the features could be context words as well as collocationsin fact the method subsumes the method of context words it does everything that method does and resolves conflicts among its features as wellto facilitate the conflict resolution it sorts the features by decreasing strengthlike the method of context words the method of collocations has one main parameter to tune t the maximum number of syntactic elements in a collocationsince the number of collocations grows exponentially with e it was only practical to vary t from 1 to 3we tried this on some practice confusion sets and found that all values of t gave roughly comparable performancewe selected t 2 to use from here on as a compromise between reducing the expressive power of collocations and incurring a high computational cost table 3 shows the results of varying e for the usual confusion setsthere is no clear winner each value of did best for certain confusion setstable 5 gives examples of the collocations learned for peace piece with 2a good deal of redundancy can be seen among the collocationsthere is also some redundancy between the collocations and the context words of the previous section many of the collocations at the end of the list appear to be overgeneral and irrelevantyarowsky 1994 pointed out the complementarity between context words and collocations context words pick up those generalities that are best expressed in an orderindependent way while collocations capture orderdependent generalitiesyarowsky proposed decision lists as a way to get the best of both methodsthe idea is to make one big list of all features in this case context words and collocationsthe features are sorted in order of decreasing strength where the strength of a feature reflects its reliability for decisionmakingan ambiguous target word is then classified by running down the list and matching each feature against the target contextthe first feature that training phase matches is used to classify the target wordyarowsky 1994 describes further refinements 
such as detecting and pruning features that make a zero or negative contribution to overall performancethe method of decision lists as just described is almost the same as the method for collocations in figure 2 where we take quotfeaturesquot in that figure to include both context words and collocationsthe main difference is that during evidence gathering at run time decision lists terminate after matching the first featurethis obviates the need for resolving conflicts between featuresgiven that decision lists base their answer for a problem on the single strongest feature their performance rests heavily on how the strength of a feature is definedyarowsky 1994 used the following metric to calculate the strength of a feature f this is for the case of a confusion set of two words w1 and w2it can be shown that this metric produces the identical ranking of features as the following somewhat simpler metric provided p 0 for all i5 as an example of using the metric suppose f is the context word arid and suppose that arid cooccurs 10 times with desert and 1 time with dessert in the training corpusthen reliability max 1011 0909this value measures the extent to which the presence of the feature is unambiguously correlated with one particular w it can be thought of as the feature reliability at picking out that wi from the others in the confusion set collocationthe bottom line of the table shows the number of collocations learned averaged over all confusion sets also as a function of e one peculiar property of the reliability metric is that it ignores the prior probabilities of the words in the confusion setfor instance in the arid example it would award the same high score even if the total number of occurrences of desert and dessert in the training corpus were 50 and 5 respectively in which case arid performance of 1011 would be exactly what one would expect by chance and therefore hardly impressivebesides the reliability metric therefore we also considered an alternative metric the uncertainty coefficient of x denoted you press et al 1988 p501you measures how much additional information we get about the presence of the feature by knowing the choice of word in the confusion set6 you is calculated as follows the probabilities are calculated for the population consisting of all occurrences in the training corpus of any w for instance p is the probability of feature f being present within this words learned for peace piece with k 24each line gives a context word and the number of peace and piece occurrences for which that context word occurred within k words the last line of the table gives the total number of occurrences of peace and piece in the training corpustable 5 excerpts from the sorted list of 98 collocations learned for peace piece with f 2each line gives a collocation and the number of peace and piece occurrences it matchedthe last line of the table gives the total number of occurrences of peace and piece in the training corpus populationapplying the you metric to the arid example the value returned now depends on the number of occurrences of desert and dessert in the training corpusif these numbers are 50 and 5 then you 00 reflecting the uninformativeness of the arid feature in this situationif instead the numbers are 50 and 500 then you over dessert to compare the two strength metrics we tried both on some practice confusion setssometimes one metric did substantially better sometimes the otherin the balance the reliability metric seemed to give higher performancethis metric is therefore 
the one that will be used from here onit was also used for all experiments involving the method of collocationstable 6 shows the performance of decision lists with each metric for the usual confusion setsas with the practice confusion sets we see sometimes dramatic performance differences between the two metrics and no clear winnerfor instance for i me the reliability metric did better than you whereas for between among it did worse further research is needed to understand the circumstances under which each metric performs bestfocusing for now on the reliability metric table 6 shows that the method of decision lists does by and large accomplish what it set out to do namely outperform either component method alonethere are however a few cases where it falls short for instance for between among decision lists score only 0659 compared with 0759 for context words and 0730 for collocations7 we believe that the problem lies in the strength metric because decision lists make their judgements based on a single piece of evidence their performance is very sensitive to the metric used to select that piece of evidencebut as the reliability and you metrics indicate it is not completely clear how the metric should be definedthis problem is addressed in the next sectionthe previous section confirmed that decision lists are effective at combining two complementary methods context words and collocationsin doing the combination however decision lists look only at the single strongest piece of evidence for a given problemwe hypothesize that even better performance can be obtained by taking into account all available evidencethis section presents a method of doing this based on bayesian classifierslike decision lists the bayesian method starts with a list of all features sorted by decreasing strengthit classifies an ambiguous target word by matching each feature in the list in turn against the target contextinstead of stopping at the first matching feature however it traverses the entire list combining evidence from all matching features and resolving conflicts where necessarythis method is essentially the same as the one for collocations except that it uses context words as well as collocations for the featuresthe only new wrinkle is in checking for conflicts between features at run time as there are now two kinds of features to considerif both features are context words we say the features never conflict if both features are collocations we say they conflict if they overlap the new case is if one feature is a context word and the other is a collocationconsider for example the context word walk and the following collocations to some extent all of these collocations conflict with walkcollocation is the most blatant case if it matches the target context this logically implies that the context word walk will matchif collocation matches this guarantees that one of the possible tags of walk will be present nearby the target word thereby elevating the probability that walk will match within k wordsif collocation matches this guarantees that there are two positions nearby the target word that are incompatible with walk thereby reducing the probability that walk will matchif we were to treat all of these cases as conflicts we would end up losing a great deal of evidenceinstead we adopt the more relaxed policy of only flagging the most egregious conflicts here the one between collocation and walkin general we will say that a collocation and a context word conflict if the collocation contains an explicit test for the 
context wordtable 7 compares all methods covered so far baseline two component methods and two hybrid methodsthe table shows that the bayesian hybrid method does at least as well as the previous four methods for almost every confusion setoccasionally it scores slightly less than collocations this appears to be due to some averaging effect where noisy context words are dragging it downoccasionally too it scores less than decision lists but never by much on the whole it yields a modest but consistent improvement and ill the case of between among a sizable improvementwe believe the improvement is due to considering all of the evidence rather than just the single strongest piece which makes the method more robust to inaccurate judgements about which piece of evidence is quotstrongestquotconfusion baseline cwords col locs dlist bayes trigrams set 3 2 rely rely whether 0922 0902 0931 0935 0935 0873 i 0886 0914 0981 0980 0985 0985 its 0863 0862 0945 0931 0942 0965 past 0861 0861 0909 0932 0924 0955 than 0807 0931 0965 0967 0973 0780 being 0780 0791 0853 0842 0869 0978 effect 0741 0747 0821 0821 0827 0975 your 0726 0816 0887 0868 0901 0958 number 0627 0646 0646 0629 0662 0636 council 0614 0639 0639 0627 0639 0651 rise 0575 0575 0807 0804 0807 0574 between 0538 0759 0730 0659 0786 0538 led 0530 0530 0840 0840 0840 0909 except 0442 0695 0789 0789 0811 0695 peace 0393 0754 0869 0852 0852 0393 there 0306 0726 0932 0914 0916 0961 principle 0290 0290 0812 0812 0812 0609 sight 0114 0455 0318 0432 0455 0250while the previous section demonstrated that the bayesian hybrid method does better than its components we would still like to know how it compares with alternative methodswe looked at a method based on partofspeech trigrams developed and implemented by schabes 1995schabes method can be viewed as performing an abductive inference given a sentence containing an ambiguous word it asks which choice wi for that word would best explain the observed sequence of words in the sentenceit answers this question by substituting each wi in turn into the sentencethe wi that produces the highestprobability sentence is selectedsentence probabilities are calculated using a partofspeech trigram modelwe tried schabes method on the usual confusion sets the results are in the last column of table 7it can be seen that trigrams and the bayesian hybrid method each have their better momentstrigrams are at their worst when the words in the confusion set have the same part of speechin this case trigrams can distinguish between the words only by their prior probabilities this follows from the way the method calculates sentence probabilitiesthus for between among for example where both words are prepositions trigrams score the same as the baseline methodin such cases the bayesian hybrid method is clearly betteron the other hand when the words in the confusion set have different parts of speech as in for example there their they are trigrams are often better than the bayesian methodwe believe this is because trigrams look not just at a few words on either side of the target word but at the partofspeech sequence of the whole sentencethis analysis indicates a complementarity between trigrams and bayes and suggests a combination in which trigrams would be applied first but if trigrams determine that the words in the confusion set have the same part of speech for the sentence at issue then the sentence would be passed to the bayesian methodthis is a research direction we plan to pursuethe work reported here builds on yarowsky use of 
decision lists to combine two component methods context words and collocationsdecision lists pool the evidence from the two methods and solve a target problem by applying the single strongest piece of evidence whichever type that happens to bethis paper investigated the hypothesis that even better performance can be obtained by basing decisions on not just the single strongest piece of evidence but on all available evidencea method for doing this based on bayesian classifiers was presentedit was applied to the task of contextsensitive spelling correction and was found to outperform the component methods as well as decision listsa comparison of the bayesian hybrid method with schabes trigrambased method suggested a further combination in which trigrams would be used when the words in the confusion set had different parts of speech and the bayesian method would be used otherwisethis is a direction we plan to pursue in future researchwe would like to thank bill freeman yves schabes emmanuel roche and jacki golding for helpful and enjoyable discussions on the work reported here
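The Bayesian hybrid combination described above can be sketched as follows, restricted to context-word features for brevity; collocation features would contribute likelihood terms in the same way once the overlap-based conflict resolution has discarded the weaker of any two conflicting features. The add-one smoothing and the omission of the minimum-occurrences and chi-square pruning steps are simplifications, not the paper's exact choices.

```python
import math
from collections import Counter, defaultdict


class ConfusionSetClassifier:
    """Naive-Bayes combination of context-word features for one confusion set."""

    def __init__(self, confusion_set):
        self.words = list(confusion_set)
        self.prior = Counter()            # word -> number of training occurrences
        self.cooc = defaultdict(Counter)  # context word -> counts per confusion-set word

    def train(self, instances):
        """instances: iterable of (correct_word, tokens within the +-3-word window)."""
        for w, context in instances:
            self.prior[w] += 1
            for c in set(context):        # presence of c is treated as a binary feature
                self.cooc[c][w] += 1

    def predict(self, context):
        total = sum(self.prior.values())
        best_word, best_score = None, float("-inf")
        for w in self.words:
            # Smoothed prior and likelihoods; add-one smoothing is an assumption,
            # not the interpolation or pruning actually used in the paper.
            score = math.log((self.prior[w] + 1) / (total + len(self.words)))
            for c in set(context):
                if c not in self.cooc:    # skip words never seen as features
                    continue
                score += math.log((self.cooc[c][w] + 1) / (self.prior[w] + 2))
            if score > best_score:
                best_word, best_score = w, score
        return best_word


clf = ConfusionSetClassifier({"desert", "dessert"})
clf.train([("desert", ["arid", "sand", "sun"]),
           ("dessert", ["chocolate", "cake", "delicious"])])
print(clf.predict(["the", "chocolate", "was", "delicious"]))  # 'dessert'
```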
W95-0104
a bayesian hybrid method for context-sensitive spelling correction
two classes of methods have been shown to be useful for resolving lexical ambiguity. the first relies on the presence of particular words within some distance of the ambiguous target word; the second uses the pattern of words and part-of-speech tags around the target word. these methods have complementary coverage: the former captures the lexical atmosphere, while the latter captures local syntax. yarowsky has exploited this complementarity by combining the two methods using decision lists. the idea is to pool the evidence provided by the component methods and to then solve a target problem by applying the single strongest piece of evidence, whatever type it happens to be. this paper takes yarowsky's work as a starting point, applying decision lists to the problem of context-sensitive spelling correction. decision lists are found, by and large, to outperform either component method. however, it is found that further improvements can be obtained by taking into account not just the single strongest piece of evidence but all the available evidence. a new hybrid method based on bayesian classifiers is presented for doing this, and its performance improvements are demonstrated. we build a classifier based on a rich set of context features
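For comparison with the hybrid method, the decision-list component can be sketched as follows: each feature is scored with the reliability metric max_i p(w_i | f), the list is sorted by decreasing strength, and a test case is decided by the first matching feature alone. The feature names and counts below are invented for illustration.

```python
def reliability(counts):
    """Strength of a feature: the largest p(w_i | feature) over the confusion set."""
    total = sum(counts.values())
    return max(c / total for c in counts.values()) if total else 0.0


def build_decision_list(feature_counts):
    """feature_counts: {feature: {word: count}} -> [(feature, predicted_word)] by strength."""
    scored = []
    for feature, counts in feature_counts.items():
        predicted = max(counts, key=counts.get)
        scored.append((reliability(counts), feature, predicted))
    return [(f, w) for _strength, f, w in sorted(scored, reverse=True)]


def classify(decision_list, matching_features, default):
    """Return the word predicted by the single strongest matching feature."""
    for feature, word in decision_list:
        if feature in matching_features:
            return word
    return default


dlist = build_decision_list({
    "arid within +-3 words":      {"desert": 10, "dessert": 1},
    "chocolate within +-3 words": {"desert": 0,  "dessert": 8},
})
print(classify(dlist, {"chocolate within +-3 words"}, default="desert"))  # 'dessert'
```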
disambiguating noun groupings with respect to wordnet senses word groupings useful for language processing tasks are increasingly available as thesauri appear online and as distributional word clustering techniques improve however for many tasks one is in relationships among word words this paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns the kind of data one finds in online thesauri or as the output of distributional clustering algorithms disambiguation is performed with respect to wordnet senses which are fairly finegrained however the method also permits the assignment of higherlevel wordnet categories rather than sense labels the method is illustrated primarily by example though results of a more rigorous evaluation are also presented word groupings useful for language processing tasks are increasingly available as thesauri appear online and as distributional techniques become increasingly widespread however for many tasks one is interested in relationships among word senses not wordsconsider for example the cluster containing attorney counsel trial court and judge used by brown et al to illustrate a quotsemantically stickyquot group of wordsas is often the case where sense ambiguity is involved we as readers impose the most coherent interpretation on the words within the group without being aware that we are doing soyet a computational system has no choice but to consider other more awkward possibilities for example this cluster might be capturing a distributional relationship between advice and royalty this would be a mistake for many applications such as query expansion in information retrieval where a surfeit of false connections can outweigh the benefits obtained by using lexical knowledgeone obvious solution to this problem would be to extend distributional grouping methods to word sensesfor example one could construct vector representations of senses on the basis of their cooccurrence with words or with other sensesunfortunately there are few corpora annotated with word sense information and computing reliable statistics on word senses rather than words will require more data rather than less1 furthermore one widely available example of a large manually sensetagged corpus the wordnet group annotated subset of the brown corpus2 vividly illustrates the difficulty in obtaining suitable datalactually this depends on the finegrainedness of sense distinctions clearly one could annotate corpora with very high level semantic distinctions for example basili et al take such a coarsegrained approach utilizing on the order of 10 to 15 semantic tags for a given domaini assume throughout this paper that finergrained distinctions than that are necessaryit is quite small by current corpus standards the direct annotation methodology used to create it is labor intensive found that direct annotation takes twice as long as automatic tagging plus correction for partofspeech annotation and the output quality reflects the difficulty of the task there have been some attempts to capture the behavior of semantic categories in a distributional setting despite the unavailability of senseannotated corporafor example hearst and schtitze take steps toward a distributional treatment of wordnetbased classes using schtitze approach to constructing vector representations from a large cooccurrence matrixyarowsky algorithm for sense disambiguation can be thought of as a way of determining how roget thesaurus categories behave with respect to contextual 
featuresand my own treatment of selectional constraints provides a way to describe the plausibility of cooccurrence in terms of wordnet semantic categories using cooccurrence relationships mediated by syntactic structurein each case one begins with known semantic categories and nonsenseannotated text and proceeds to a distributional characterization of semantic category behavior using cooccurrence relationshipsthis paper begins from a rather different starting pointas in the abovecited work there is no presupposition that senseannotated text is availablehere however i make the assumption that word groupings have been obtained through some black box procedure eg from analysis of unannotated text and the goal is to annotate the words within the groupings post hoc using a knowledgebased catalogue of sensesif successful such an approach has obvious benefits one can use whatever sources of good word groupings are available primarily unsupervised word clustering methods but also online thesauri and the like without folding in the complexity of dealing with word senses at the same time3 the resulting sense groupings should be useful for a variety of purposes although ultimately this work is motivated by the goal of sense disambiguation for unrestricted text using unsupervised methodslet us state the problem as followswe are given a set of words w w1 d with each word wz having an associated set si sii of possible senseswe assume that there exists some set w c you si representing the set of word senses that an ideal human judge would conclude belong to the group of senses corresponding to the word grouping w the goal is then to define a membership function co that takes sij wi and w as its arguments and computes a value in 0 1 representing the confidence with which one can state that sense si3 belongs in sense grouping w4 note that in principle nothing precludes the possibility that multiple senses of a word are included in wiexampleconsider the following word group5 burglars thief rob mugging stray robbing lookout chase crate thieves restricting our attention to noun senses in wordnet only lookout and crate are polysemoustreating this word group as w one would expect cp to assign a value of 1 to the unique senses of the monosemous words and to assign a high value to lookout sense as lookout lookout man sentinel sentry watch scout a person employed to watch for something to happenlow values of co would be expected for the senses of lookout that correspond to an observation tower or to the activity of watchingcrate two wordnet senses correspond to the physical object and the quantity my own intuition is that the first of these would more properly be included in hp than the second and should therefore receive a higher value of cc though of course neither i nor any other individual really constitutes an quotideal human judgequot the core of the disambiguation algorithm is a computation of semantic similarity using the wordnet taxonomy a topic recently investigated by a number of people in this paper i restrict my attention to wordnet isa taxonomy for nouns and take an approach in which semantic similarity is evaluated on the basis of the information content shared by the items being comparedthe intuition behind the approach is simple the more similar two words are the more informative will be the most specific concept that subsumes them boththe traditional method of evaluating similarity in a semantic network by measuring the path length between two nodes also captures this albeit indirectly when the 
semantic network is just an isa hierarchy if the minimal path of isa links between two nodes is long that means it is necessary to go high in the taxonomy to more abstract concepts in order to find their least upper boundhowever there are problems with the simple pathlength definition of semantic similarity and experiments using wordnet show that other measures of semantic similarity such as the one employed here provide a better match to human similarity judgments than simple path length does given two words w1 and w2 their semantic similarity is calculated as where subsumers is the set of wordnet synsets that subsume both w1 and w2 in any sense of either wordthe concept c that maximizes the expression in will be referred to as the most informative subsumer of wi and w2although there are many ways to associate probabilities with taxonomic classes it is reasonable to require that concept probability be nondecreasing as one moves higher in the taxonomy ie that ci c2 implies pr prthis guarantees that quotmore abstractquot does indeed mean quotless informativequot defining informativeness in the traditional way in terms of log likelihoodprobability estimates are derived from a corpus by computing where words is the set of nouns having a sense subsumed by concept c probabilities are then computed simply as relative frequency where n is the total number of noun instances observedsingular and plural forms are counted as the same noun and nouns not covered by wordnet are ignoredalthough the wordnet noun taxonomy has multiple root nodes a single quotvirtualquot root node is assumed to exist with the original root nodes as its childrennote that by equations through if two senses have the virtual root node as their only upper bound then their similarity value is 0examplethe following table shows the semantic similarity computed for several word pairs in each case shown with the most informative subsumer6 probabilities were estimated using the penn treebank version of the brown corpusthe pairs come from an example given by church and hanks illustrating the words that human subjects most frequently judged as being associated with the word doctorword 1 word 2 similarity most informative subsumer doctor nurse 94823 doctor lawyer 72240 doctor man 29683 doctor medicine 10105 doctor hospital 10105 doctor health 00 virtual root doctor sickness 00 virtual root doctors are minimally similar to medicine and hospitals since these things are all instances of quotsomething having concrete existence living or nonlivingquot but they are much more similar to lawyers since both are kinds of professional people and even more similar to nurses since both are professional people working specifically within the health professionsnotice that similarity is a more specialized notion than association or relatedness doctors and sickness may be highly associated but one would not judge them to be particularly similarthe disambiguation algorithm for noun groups is inspired by the observation that when two polysemous words are similar their most informative subsumer provides information about which sense of each word is the relevant onein the above table for example both doctor and nurse are polysemous wordnet records doctor not only as a kind of health professional but also as someone who holds a phd and nurse can mean not only a health professional but also a nannywhen the two words are considered together however the shared element of meaning for the two relevant senses emerges in the form of the most informative subsumerit 
may be that other pairings of possible senses also share elements of meaning however in cases like those illustrated above the more specific or informative the shared ancestor is the more strongly it suggests which senses come to mind when the words are considered togetherthe working hypothesis in this paper is that this holds true in generalturning that observation into an algorithm requires two things a way to assign credit to word senses based on similarity with cooccurring words and a tractable way to generalize to the case where more than two polysemous words are involvedthe algorithm given in figure 1 does both quite straightforwardlyalgorithmgiven w w1 wnl a set of nouns for i and j lion with i j vi j sim ci j the most informative subsumer for wi and wj fork 1 to num_senses if ci j is an ancestor of sensei increment supporti k by vi j fork 1 to num_senses if ci j is an ancestor of sensei k increment supportj k by v ei increment normalizationi by vi j increment normalizationj by vii j this algorithm considers the words in w pairwise avoiding the tractability problems in considering all possible combinations of senses for the group if each word had m sensesfor each pair considered the most informative subsumer is identified and this pair is only considered as supporting evidence for those senses that are descendants of that conceptnotice that by equation support i k is a sum of log probabilities and therefore preferring senses with high support is equivalent to optimizing a product of probabilitiesthus considering words pairwise in the algorithm reflects a probabilistic independence assumptionexamplethe most informative subsumer for doctor and nurse is and therefore that pairing contributes support to the sense of doctor as an md but not a phdsimilarly it contributes support to the sense of nurse as a health professional but not a nannythe amount of support contributed by a pairwise comparison is proportional to how informative the most informative subsumer istherefore the evidence for the senses of a word will be influenced more by more similar words and less by less similar wordsby the time this process is completed over all pairs each sense of each word in the group has had the potential of receiving supporting evidence from a pairing with every other word in the groupthe value assigned to that sense is then the proportion of support it did receive out of the support possiblediscussionthe intuition behind this algorithm is essentially the same intuition exploited by lesk sussna and others the most plausible assignment of senses to multiple cooccurring words is the one that maximizes relatedness of meaning among the senses chosenhere i make an explicit comparison with sussna approach since it is the most similar of previous worksussna gives as an example of the problem he is solving the following paragraph from the corpus of 1963 time magazine articles used in information retrieval research the allies after nassau in december 1960 the yous first proposed to help nato develop its own nuclear strike force but europe made no attempt to devise a plan last week as they studied the nassau accord between president kennedy and prime minister macmillan europeans saw emerging the first outlines of the nuclear nato that the yous wants and will support it all sprang from the angloyous crisis over cancellation of the bugridden skybolt missile and the yous offer to supply britain and france with the proved polaris from this sussna extracts the following noun grouping to disambiguate allies strike 
force attempt plan week accord president prime minister outlines support crisis cancellation bug missile france polaris time these are the nonstopword nouns in the paragraph that appear in wordnet the description of sussna algorithm for disambiguating noun groupings like this one is similar to the one proposed here in a number of ways relatedness is characterized in terms of a semantic network the focus is on nouns only and evaluations of semantic similarity are the basis for sense selectionhowever there are some important differences as wellfirst unlike sussna proposal this algorithm aims to disambiguate groupings of nouns already established to be related as opposed to groupings of nouns that happen to appear near each other in running text this provides some justification for restricting attention to similarity as opposed to the more general notion of associationsecond this difference is reflected algorithmically by the fact that sussna uses not only isa links but also other wordnet links such as partofthird unlike sussna algorithm the semantic similaritydistance computation here is not based on path length but on information content a choice that i have argued for elsewhere fourth the combinatorics are handled differently sussna explores analyzing all sense combinations as well as the alternative of sequentially quotfreezingquot a single sense for each of wi w2_1 and using those choices assumed to be correct as the basis for disambiguating withe algorithm presented here falls between those two alternativesa final important difference between this algorithm and previous algorithms for sense disambiguation is that it offers the possibility of assigning higherlevel wordnet categories rather than lowestlevel sense labelsit is a simple modification to the algorithm to assign values of co not only to synsets directly containing words in w but to any ancestors of those synsets one need only let the list of synsets associated with each word wi also include any synset that is an ancestor of any synset containing word wiassuming that num senses and s ens e k are reinterpreted accordingly the algorithm will compute co not only for the synsets directly including words in w but also for any higherlevel abstractions of themexampleconsider the word group doctor nurse lawyerif one were to include all subsuming concepts for each word rather than just the synsets of which they are directly members the concepts with nonzero values of co would be as follows given assignments of co at all levels of abstraction one obvious method of semantic annotation is to assign the highestlevel concept for which co is at least as large as the sensespecific value of cpfor instance in the previous example one would assign the annotation to both doctor and nurse and the annotation to lawyerin this section i present a number of examples for evaluation by inspectionin each case i give the source of the noun grouping the grouping itself and for each word a description of word senses together with their values of ydistributional cluster head body hands eye voice arm seat hair mouth as noted in section 21 this group represents a set of words similar to burglar according to schtitze method for deriving vector representation from corpus behaviorin this case words rob and robbing were excluded because they were not nouns in wordnetthe word stray probably should be excluded also since it most likely appears on this list as an adjective machinegenerated thesaurus entry method test mean procedure technique i chose this grouping at 
random from a thesaurus created automatically by grefenstette syntacticodistributional methods using the med corpus of medical abstracts as its sourcethe group comes from from the thesaurus entry for the word methodnote that mean probably should be meansthere is a tradition in sense disambiguation of taking particularly ambiguous words and evaluating a system performance on those wordshere i look at one such case the word line the goal is to see what sense the algorithm chooses when considering the word in the contexts of each of the roget thesaurus classes in which it appears where a quotclassquot includes all the nouns in one of the numbered categories7 the following list provides brief descriptions of the 25 senses of line in wordnet since line appears in 13 of the numbered categories in roget thesaurus a full description of the values of co would be too large for the present paperindeed showing all the nouns in the numbered categories would take up too much space they average about 70 nouns apieceinstead i identify the numbered category and give the three wordnet senses of line for which co was greatestqualitatively the algorithm does a good job in most of the categoriesthe reader might find it an interesting exercise to try to decide which of the 25 senses he or she would choose especially in the cases where the algorithm did less well the previous section provided illustrative examples demonstrating the performance of the algorithm on some interesting casesin this section i present experimental results using a more rigorous evaluation methodologyinput for this evaluation came from the numbered categories of rogettest instances consisted of a noun group together with a single word in that group to be disambiguatedto use an example from the previous section category 590 contains the following writing chirography penmanship quill driving typewriting writing manuscript ms these presents stroke of the pen dash of the pen coupe de plume line headline pen and ink letter uncial writing cuneiform character arrowhead ogham runes hieroglyphic contraction devanagari nagari script shorthand stenography secret writing writing in cipher cryptography stenography copy transcript rescript rough copy fair copy handwriting signature sign manual autograph monograph holograph hand fist calligraphy good hand running hand flowing hand cursive hand legible hand bold hand bad hand crampedhand crabbed hand illegible hand scribble illformed letters pothooks and hangers stationery pen quill goose quill pencil style paper foolscap parchment vellum papyrus tablet slate marble pillar table blackboard ink bottle ink horn ink pot ink stand ink well typewriter transcription inscription superscription graphology composition authorship writer scribe amanuensis scrivener secretary clerk penman copyist transcriber quill driver stenographer typewriter typist writer for the press any word or phrase in that group that appears in the noun taxonomy for wordnet would be a candidate as a test instance for example line or secret writingthe test set chosen at random contained 125 test casestwo human judges were independently given the test cases to disambiguatefor each case they were given the full set of nouns in the numbered category together with descriptions of the wordnet senses for the word to be disambiguated it was a forcedchoice task that is the judge was required to choose exactly one sensein addition for each judgment the judge was required to provide a confidence value for this decision ranging from 0 to 4 results are 
presented here individually by judgefor purposes of evaluation test instances for which the judge had low confidence were excludedfor judge 1 there were 99 test instances with sufficiently high confidence to be consideredas a baseline ten runs were done selecting senses by random choice with the average percent correct being 348 standard deviation 358as an upper bound judge 2 was correct on 657 of those test instancesthe disambiguation algorithm shows considerable progress toward this upper bound with 586 correctfor judge 2 there were 86 test instances with sufficiently high confidence to be consideredas a baseline ten runs were done selecting senses by random choice with the average percent correct being 333 standard deviation 383as an upper bound judge 1 was correct on 686 of those test instancesagain the disambiguation algorithm performs well with 605 correctthe results of the evaluation are extremely encouraging especially considering that disambiguating word senses to the level of finegrainedness found in wordnet is quite a bit more difficult than disambiguation to the level of homographs a note worth adding it is not clear that the quotexact matchquot criterion that is evaluating algorithms by the percentage of exact matches of sense selection against a humanjudged baseline is the right taskin particular in many tasks it is at least as important to avoid inappropriate senses than to select exactly the right onethis would be the case in query expansion for information retrieval for example where indiscriminately adding inappropriate words to a query can degrade performance the examples presented in section 3 are encouraging in this regard in addition to performing well at the task of assigning a high score to the best sense it does a good job of assigning low scores to senses that are clearly inappropriateregardless of the criterion for success the algorithm does need further evaluationimmediate plans include a larger scale version of the experiment presented here involving thesaurus classes as well as a similarly designed evaluation of how the algorithm fares when presented with noun groups produced by distributional clusteringin addition i plan to explore alternative measures of semantic similarity for example an improved variant on simple path length that has been proposed by leacock and chodorow ultimately this algorithm is intended to be part of a suite of techniques used for disambiguating words in running text with respect to wordnet sensesi would argue that success at that task will require combining knowledge of the kind that wordnet provides primarily about relatedness of meaning with knowledge of the kind best provided by corpora primarily about usage in contextthe difficulty with the latter kind of knowledge is that until now the widespread success in characterizing lexical behavior in terms of distributional relationships has applied at the level of words indeed word forms as opposed to sensesthis paper represents a step toward getting as much leverage as possible out of work within that paradigm and then using it to help determine relationships among word senses which is really where the action is
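To make the pairwise scoring procedure described above concrete, the following is a minimal self-contained sketch in Python. The miniature is-a taxonomy, the probabilities used for information content, and the sense names (doctor#md, nurse#health, and so on) are invented for illustration only; the actual algorithm operates over the WordNet noun taxonomy.

```python
import math
from itertools import combinations

# Toy is-a taxonomy: child -> parent (None marks the root), with made-up
# probabilities used to compute information content -log p(c).
PARENT = {
    "entity": None, "person": "entity", "professional": "person",
    "health_professional": "professional",
    "doctor#md": "health_professional", "nurse#health": "health_professional",
    "doctor#phd": "person", "nurse#nanny": "person",
    "lawyer#attorney": "professional",
}
PROB = {"entity": 1.0, "person": 0.5, "professional": 0.1,
        "health_professional": 0.01, "doctor#md": 0.003, "nurse#health": 0.003,
        "doctor#phd": 0.002, "nurse#nanny": 0.002, "lawyer#attorney": 0.004}
SENSES = {"doctor": ["doctor#md", "doctor#phd"],
          "nurse": ["nurse#health", "nurse#nanny"],
          "lawyer": ["lawyer#attorney"]}

def ancestors(c):
    """Return the set containing c and all of its ancestors."""
    out = set()
    while c is not None:
        out.add(c)
        c = PARENT[c]
    return out

def most_informative_subsumer(s1, s2):
    """Shared ancestor with maximal information content -log p(c)."""
    shared = ancestors(s1) & ancestors(s2)
    return max(shared, key=lambda c: -math.log(PROB[c]))

def disambiguate(words):
    """Pairwise credit assignment: each sense's value is the fraction of
    possible support it actually received from pairings with other words."""
    support = {w: {s: 0.0 for s in SENSES[w]} for w in words}
    norm = {w: 0.0 for w in words}
    for wi, wj in combinations(words, 2):
        # Most informative subsumer for the word pair, over all sense pairs.
        best = max((most_informative_subsumer(si, sj)
                    for si in SENSES[wi] for sj in SENSES[wj]),
                   key=lambda c: -math.log(PROB[c]))
        v = -math.log(PROB[best])
        for w in (wi, wj):
            for s in SENSES[w]:
                if best in ancestors(s):
                    support[w][s] += v
            norm[w] += v
    return {w: {s: (support[w][s] / norm[w] if norm[w] else 0.0)
                for s in SENSES[w]} for w in words}

print(disambiguate(["doctor", "nurse", "lawyer"]))
```

On this toy input the medical sense of doctor and the health-professional sense of nurse receive essentially all of the support, mirroring the doctor/nurse example discussed in the text.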
W95-0105
disambiguating noun groupings with respect to wordnet senses word groupings useful for language processing tasks are increasingly available as thesauri appear online and as distributional word clustering techniques improve however for many tasks one is interested in relationships among word senses not words this paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns the kind of data one finds in online thesauri or as the output of distributional clustering algorithms disambiguation is performed with respect to wordnet senses which are fairly finegrained however the method also permits the assignment of higherlevel wordnet categories rather than sense labels the method is illustrated primarily by example though results of a more rigorous evaluation are also presented in this work the assessment of semantic similarity using a dictionary database as knowledge source is recognized as providing significant cues for word clustering we define the semantic similarity between two words as the entropy value of the most informative concept subsuming the two words in a hierarchically structured thesaurus we attempt to combine paradigmatic and syntagmatic similarity strategies
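The summary above defines word-to-word similarity as the information content of the most informative concept subsuming the two words. A hedged sketch of that measure using NLTK's WordNet interface and its precomputed Brown-corpus information-content counts follows; it assumes NLTK and those data files are installed and is not the paper's original implementation.

```python
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

# Precomputed information-content counts derived from the Brown corpus.
brown_ic = wordnet_ic.ic('ic-brown.dat')

def word_similarity(w1, w2):
    """Max over noun-sense pairs of Resnik similarity, i.e. the information
    content of the most informative subsumer of the two senses."""
    pairs = [(s1, s2) for s1 in wn.synsets(w1, pos=wn.NOUN)
                      for s2 in wn.synsets(w2, pos=wn.NOUN)]
    if not pairs:
        return 0.0
    return max(s1.res_similarity(s2, brown_ic) for s1, s2 in pairs)

# Typically the first pair scores higher than the second, reflecting the
# closer shared ancestor of the medical senses.
print(word_similarity('doctor', 'nurse'))
print(word_similarity('doctor', 'lawyer'))
```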
text chunking using transformationbased learning eric brill introduced transformationbased learning and showed that it can do partofspeech tagging with fairly high accuracy the same method can be applied at a higher level of textual interpretation for locating chunks in the tagged text including nonrecursive quotbasenpquot chunks for this purpose it is convenient to view chunking as a tagging problem by encoding the chunk structure in new tags attached to each word in automatic tests using treebankderived data this technique achieved recall and precision rates of roughly 92 for basenp chunks and 88 for somewhat more complex chunks that partition the sentence some interesting adaptations to the transformationbased learning approach are also suggested by this application text chunking involves dividing sentences into nonoverlapping segments on the basis of fairly superficial analysisabney has proposed this as a useful and relatively tractable precursor to full parsing since it provides a foundation for further levels of analysis including verbargument identification while still allowing more complex attachment decisions to be postponed to a later phasesince chunking includes identifying the nonrecursive portions of noun phrases it can also be useful for other purposes including index term generationmost efforts at superficially extracting segments from sentences have focused on identifying lowlevel noun groups either using handbuilt grammars and finite state techniques or using statistical models like hmms trained from corporain this paper we target a somewhat higher level of chunk structure using brill transformationbased learning mechanism in which a sequence of transformational rules is learned from a corpus this sequence iteratively improves upon a baseline model for some interpretive feature of the textthis technique has previously been used not only for partofspeech tagging but also for prepositional phrase attachment disambiguation and assigning unlabeled binarybranching tree structure to sentences because transformationbased learning uses patternaction rules based on selected features of the local context it is helpful for the values being predicted to also be encoded locallyin the textchunking application encoding the predicted chunk structure in tags attached to the words rather than as brackets between words avoids many of the difficulties with unbalanced bracketings that would result if such local rules were allowed to insert or alter interword brackets directlyin this study training and test sets marked with two different types of chunk structure were derived algorithmically from the parsed data in the penn treebank corpus of wall street journal text the source texts were then run through brill partofspeech tagger and as a baseline heuristic chunk structure tags were assigned to each word based on its partofspeech tagrules were then automatically learned that updated these chunk structure tags based on neighboring words and their partofspeech and chunk tagsapplying transformationbased learning to text chunking turns out to be different in interesting ways from its use for partofspeech taggingthe much smaller tagset calls for a different organization of the computation and the fact that partofspeech assignments as well as word identities are fixed suggests different optimizationsabney has proposed text chunking as a useful preliminary step to parsinghis chunks are inspired in part by psychological studies of gee and grosjean that link pause durations in reading and naive sentence 
diagraming to text groupings that they called 0phrases which very roughly correspond to breaking the string after each syntactic head that is a content wordabney other motivation for chunking is procedural based on the hypothesis that the identification of chunks can be done fairly dependably by finite state methods postponing the decisions that require higherlevel analysis to a parsing phase that chooses how to combine the chunksexisting efforts at identifying chunks in text have been focused primarily on lowlevel noun group identification frequently as a step in deriving index terms motivated in part by the limited coverage of present broadscale parsers when dealing with unrestricted textsome researchers have applied grammarbased methods combining lexical data with finite state or other grammar constraints while others have worked on inducing statistical models either directly from the words or from automatically assigned partofspeech classeson the grammarbased side bourigault describes a system for extracting quotterminological noun phrasesquot from french textthis system first uses heuristics to find quotmaximal length noun phrasesquot and then uses a grammar to extract quotterminological unitsquot for example from the maximal np le disque dur de la station de travail it extracts the two terminological phrases disque dur and station de travailbourigault claims that the grammar can parse quotaround 95 of the maximal length noun phrasesquot in a test corpus into possible terminological phrases which then require manual validationhowever because its goal is terminological phrases it appears that this system ignores np chunkinitial determiners and other initial prenominal modifiers somewhat simplifying the parsing taskvoutilainen in his impressive nptool system uses an approach that is in some ways similar to the one used here in that he adds to his partofspeech tags a new kind of tag that shows chunk structure the chunk tag quotnquot for example is used for determiners and premodifiers both of which group with the following noun headhe uses a lexicon that lists all the possible chunk tags for each word combined with handbuilt constraint grammar patternsthese patterns eliminate impossible readings to identify a somewhat idiosyncratic kind of target noun group that does not include initial determiners but does include postmodifying prepositional phrases voutilainen claims recall rates of 985 or better with precision of 95 or betterhowever the sample nptool analysis given in the appendix of appears to be less accurate than claimed in general with 5 apparent mistakes out of the 32 np chunks in that sample as listed in table 1these putative errors combined with the claimed high performance suggest that nptool definition of np chunk is also tuned for extracting terminological phrases and thus excludes many kinds of np premodifiers again simplifying the chunking tasknptool parse apparent correct parse less time less time the other hand the other hand many advantages many advantages binary addressing binary addressing and and instruction formats instruction formats a purely binary computer a purely binary computer kupiec also briefly mentions the use of finite state np recognizers for both english and french to prepare the input for a program that identified the correspondences between nps in bilingual corpora but he does not directly discuss their performanceusing statistical methods church parts program in addition to identifying parts of speech also inserted brackets identifying core npsthese 
brackets were placed using a statistical model trained on brown corpus material in which np brackets had been inserted semiautomaticallyin the small test sample shown this system achieved 98 recall for correct bracketsat about the same time ejerhed working with church performed comparisons between finite state methods and church stochastic models for identifying both nonrecursive clauses and nonrecursive nps in english textin those comparisons the stochastic methods outperformed the hand built finitestate models with claimed accuracies of 935 and 986 for the statistical models compared to to 87 and 978 for the finitestate methodsrunning church program on test material however reveals that the definition of np embodied in church program is quite simplified in that it does not include for example structures or words conjoined within np by either explicit conjunctions like quotandquot and quotorquot or implicitly by commaschurch chunker thus assigns the following np chunk structures a skokie ill subsidiary newer bigselling prescriptions drugs the inefficiency waste and lack of coordination kidder peabody sz co it is difficult to compare performance figures between studies the definitions of the target chunks and the evaluation methodologies differ widely and are frequently incompletely specifiedall of the cited performance figures above also appear to derive from manual checks by the investigators of the system predicted output and it is hard to estimate the impact of the system suggested chunking on the judge determinationwe believe that the work reported here is the first study which has attempted to find np chunks subject only to the limitation that the structures recognized do not include recursively embedded nps and which has measured performance by automatic comparison with a preparsed corpuswe performed experiments using two different chunk structure targets one that tried to bracket nonrecursive quotbasenpsquot and one that partitioned sentences into nonoverlapping ntype and vtype chunks loosely following abney modeltraining and test materials with chunk tags encoding each of these kinds of structure were derived automatically from the parsed wall street journal text in the penn treebank while this automatic derivation process introduced a small percentage of errors of its own it was the only practical way both to provide the amount of training data required and to allow for fullyautomatic testingthe goal of the quotbasenpquot chunks was to identify essentially the initial portions of nonrecursive noun phrases up to the head including determiners but not including postmodifying prepositional phrases or clausesthese chunks were extracted from the treebank parses basically by selecting nps that contained no nested nps1the handling of conjunction followed that of the treebank annotators as to whether to show separate basenps or a single basenp spanning the conjunction2possessives were treated as a special case viewing the possessive marker as the first word of a new basenp thus flattening the recursive structure in a useful waythe following sentences give examples of this basenp chunk structure during n the third quarter n n compaq n purchased n a former wang laboratories manufacturing facility n in n sterling n n scotland n which will be used for n international service and repair operations n n the government nj has n other agencies and instruments n for pursuing n these other objectives n even n mao tsetung iv n china ad began in n 1949 n with n a partnership n between n the 
communists n and n a number n of n smaller noncommunist parties n the chunks in the partitioning chunk experiments were somewhat closer to abney model where the prepositions in prepositional phrases are included with the object np up to the head in a single ntype chunkthis created substantial additional ambiguity for the system which had to distinguish prepositions from particlesthe handling of conjunction again follows the treebank parse with nominal conjuncts parsed in the treebank as a single np forming a single n chunk while those parsed as conjoined nps become separate chunks with any coordinating conjunctions attached like prepositions to the following n chunkthe portions of the text not involved in ntype chunks were grouped as chunks termed vtype though these quotvquot chunks included many elements that were not verbal including adjective phrasesthe internal structure of these vtype chunks loosely followed the treebank parse though v chunks often group together elements that were sisters in the underlying parse treeagain the possessive marker was viewed as initiating a new ntype chunkthe following sentences are annotated with these partitioning n and v chunks n some bankers ni v are reporting v n more inquiries than usual n n about cds n n since friday nj this heuristic fails in some casesfor example treebank uses the label nac for some nps functioning as premodifiers like quotbank of englandquot in quotrobin leighpemberton bank of england governor concededquot in such cases quotgovernorquot is not included in any basenp chunknonconstituent np conjunction which treebank labels nx is another example that still causes problemsn eastern airlines n n creditors n v have begun exploring v n alternative approaches n n to a chapter 11 reorganization n v because v n they nv are unhappy v n with the carrier n n latest proposal n n indexing ni n for the most part n v has involved simply buying v v and then holding v n stocks ni n in the correct mix n v to mirror id n a stock market barometer n these two kinds of chunk structure derived from the treebank data were encoded as chunk tags attached to each word and provided the targets for the transformationbased learningas shown in fig1 transformationbased learning starts with a supervised training corpus that specifies the correct values for some linguistic feature of interest a baseline heuristic for predicting initial values for that feature and a set of rule templates that determine a space of possible transformational rulesthe patterns of the learned rules match to particular combinations of features in the neighborhood surrounding a word and their action is to change the system current guess as to the feature for that wordto learn a model one first applies the baseline heuristic to produce initial hypotheses for each site in the training corpusat each site where this baseline prediction is not correct the templates are then used to form instantiated candidate rules with patterns that test selected features in the neighborhood of the word and actions that correct the currently incorrect tag assignmentthis process eventually identifies all the rule candidates generated by that template set that would have a positive effect on the current tag assignments anywhere in the corpusthose candidate rules are then tested against the rest of corpus to identify at how many locations they would cause negative changesone of those rules whose net score is maximal is then selected applied to the corpus and also written out as the first rule in the learned 
sequencethis entire learning process is then repeated on the transformed corpus deriving candidate rules scoring them and selecting one with the maximal positive effectthis process is iterated leading to an ordered sequence of rules with rules discovered first ordered before those discovered laterthe predictions of the model on new text are determined by beginning with the baseline heuristic prediction and then applying each rule in the learned rule sequence in turnthis section discusses how text chunking can be encoded as a tagging problem that can be conveniently addressed using transformational learningwe also note some related adaptations in the procedure for learning rules that improve its performance taking advantage of ways in which this task differs from the learning of partofspeech tagsapplying transformational learning to text chunking requires that the system current hypotheses about chunk structure be represented in a way that can be matched against the pattern parts of rulesone way to do this would be to have patterns match tree fragments and actions modify tree geometries as in brill transformational parser in this work we have found it convenient to do so by encoding the chunking using an additional set of tags so that each word carries both a partofspeech tag and also a quotchunk tagquot from which the chunk structure can be derivedin the basenp experiments aimed at nonrecursive np structures we use the chunk tag set i 0 b where words marked i are inside some basenp those marked 0 are outside and the b tag is used to mark the left most item of a basenp which immediately follows another basenpin these tests punctuation marks were tagged in the same way as wordsin the experiments that partitioned text into n and v chunks we use the chunk tag set bn n by v p where bn marks the first word and n the succeeding words in an ntype group while bv and v play the same role for vtype groupspunctuation marks which are ignored in abney chunk grammar but which the treebank data treats as normal lexical items with their own partofspeech tags are unambiguously assigned the chunk tag p items tagged p are allowed to appear within n or v chunks they are irrelevant as far as chunk boundaries are concerned but they are still available to be matched against as elements of the left hand sides of rulesencoding chunk structure with tags attached to words rather than nonrecursive bracket markers inserted between words has the advantage that it limits the dependence between different elements of the encoded representationwhile brackets must be correctly paired in order to derive a chunk structure it is easy to define a mapping that can produce a valid chunk structure from any sequence of chunk tags the few hard cases that arise can be handled completely locallyfor example in the basenp tag set whenever a b tag immediately follows an 0 it must be treated as an i and in the partitioning chunk tag set wherever a v tag immediately follows an n tag without any intervening by it must be treated as a bytransformational learning begins with some initial quotbaselinequot prediction which here means a baseline assignment of chunk tags to wordsreasonable suggestions for baseline heuristics after a text has been tagged for partofspeech might include assigning to each word the chunk tag that it carried most frequently in the training set or assigning each partofspeech tag the chunk tag that was most frequently associated with that partofspeech tag in the trainingwe tested both approaches and the baseline heuristic 
using partofspeech tags turned out to do better so it was the one used in our experimentsthe partofspeech tags used by this baseline heuristic and then later also matched against by transformational rule patterns were derived by running the raw texts in a prepass through brill transformational partofspeech tagger in transformational learning the space of candidate rules to be searched is defined by a set of rule templates that each specify a small number of particular feature sets as the relevant factors that a rule lefthandside pattern should examine for example the partofspeech tag of the word two to the left combined with the actual word one to the leftin the preliminary scan of the corpus for each learning pass it is these templates that are applied to each location whose current tag is not correct generating a candidate rule that would apply at least at that one location matching those factors and correcting the chunk tag assignmentwhen this approach is applied to partofspeech tagging the possible sources of evidence for templates involve the identities of words within a neighborhood of some appropriate size and their current partofspeech tag assignmentsin the text chunking application the tags being assigned are chunk structure tags while the partofspeech tags are a fixed part of the environment like the lexical identities of the words themselvesthis additional class of available information causes a significant increase in the number of reasonable templates if templates for a wide range of the possible combinations of evidence are desiredthe distributed version of brill tagger makes use of 26 templates involving various mixes of word and partofspeech tests on neighboring wordsour tests were performed using 100 templates these included almost all of brill combinations and extended them to include references to chunk tags as well as to words and partofspeech tagsthe set of 100 rule templates used here was built from repetitions of 10 basic patterns shown on the left side of table 2 as they apply to wordsthe same 10 patterns can also be used to match against partofspeech tags encoded as po p_1 etcthese 20 word and partofspeech patterns were then combined with each of the 5 different chunk tag patterns shown on the right side of the tablethe cross product of the 20 word and partofspeech patterns with the 5 chunk tag patterns determined the full set of 100 templates usedthe large increase in the number of rule templates in the text chunking application when compared to partofspeech tagging pushed the training process against the available limits in terms of both space and time particularly when combined with the desire to work with the largest possible training setsvarious optimizations proved to be crucial to make the tests described feasibleone change in the algorithm is related to the smaller size of the tag setin brill tagger an initial calculation in each pass computes the confusion matrix for the current tag assignments and sorts the entries of that oldtag x newtag matrix so that candidate rules can then be processed in decreasing order of the maximum possible benefit for any rule changing say old tag i to new tag jthe search for the bestscoring rule can then be halted when a cell of the confusion matrix is reached whose maximum possible benefit is less than the net benefit of some rule already encounteredthe power of that approach is dependent on the fact that the confusion matrix for partofspeech tagging partitions the space of candidate rules into a relatively large number of 
classes so that one is likely to be able to exclude a reasonably large portion of the search spacein a chunk tagging application with only 3 or 4 tags in the effective tagset this approach based on the confusion matrix offers much less benefithowever even though the confusion matrix does not usefully subdivide the space of possible rules when the tag set is this small it is still possible to apply a similar optimization by sorting the entire list of candidate rules on the basis of their positive scores and then processing the candidate rules in order of decreasing positive scoresby keeping track of the rule with maximum benefit seen so far one can be certain of having found one of the globally best rules when one reaches candidate rules in the sorted list whose positive score is not greater than the net score of the best rule so farin earlier work on transformational partofspeech tagging we noted that it is possible to greatly speed up the learning process by constructing a full bidirectional index linking each candidate rule to those locations in the corpus at which it applies and each location in the corpus to those candidate rules that apply theresuch an index allows the process of applying rules to be performed without having to search through the corpusunfortunately such complete indexing proved to be too costly in terms of physical memory to be feasible in this applicationhowever it is possible to construct a limited index that lists for each candidate rule those locations in the corpus at which the static portions of its lefthandside pattern matchbecause this index involves only the stable word identity and partofspeech tag values it does not require updating thus it can be stored more compactly and it is also not necessary to maintain back pointers from corpus locations to the applicable rulesthis kind of partial static index proved to be a significant advantage in the portion of the program where candidate rules with relatively high positive scores are being tested to determine their negative scores since it avoids the necessity of testing such rules against every location in the corpuswe also investigated a new heuristic to speed up the computation after each pass we disable all rules whose positive score is significantly lower than the net score of the best rule for the current passa disabled rule is then reenabled whenever enough other changes have been made to the corpus that it seems possible that the score of that rule might have changed enough to bring it back into contention for the top placethis is done by adding some fraction of the changes made in each pass to the positive scores of the disabled rules and reenabling rules whose adjusted positive scores came within a threshold of the net score of the successful rule on some passnote that this heuristic technique introduces some risk of missing the actual best rule in a pass due to its being incorrectly disabled at the timehowever empirical comparisons between runs with and without rule disabling suggest that conservative use of this technique can produce an order of magnitude speedup while imposing only a very slight cost in terms of suboptimality of the resulting learned rule sequencethe automatic derivation of training and testing data from the treebank analyses allowed for fully automatic scoring though the scores are naturally subject to any remaining systematic errors in the data derivation process as well as to bona fide parsing errors in the treebank sourcetable 3 shows the results for the basenp tests and table 4 
shows the results for the partitioning chunks tasksince training set size has a significant effect on the results values are shown for three different training set sizesthe first line in each table gives the performance of the baseline system which assigned a basenp or chunk tag to each word on the basis of the pos tag assigned in the prepassperformance is stated in terms of recall and precision where both ends of a chunk had to match exactly for it to be countedthe raw percentage of correct chunk tags is also given for each run and for each performance measure the relative error reduction compared to the baseline is listedthe partitioning chunks do appear to be somewhat harder to predict than basenp chunksthe higher error reduction for the former is partly due to the fact that the partofspeech baseline for that task is much lowerto give a sense of the kinds of rules being learned the first 10 rules from the 200k basenp run are shown in table 5it is worth glossing the rules since one of the advantages of transformationbased learning is exactly that the resulting model is easily interpretablein the first of the basenp rules adjectives that are currently tagged i but that are followed by words tagged 0 have their tags changed to 0in rule 2 determiners that are preceded by two words both tagged i have their own tag changed to b marking the beginning of a basenp that happens to directly follow anotherrule 3 takes words which immediately follow determiners tagged i that in turn follow something tagged 0 and changes their tag to also be irules 46 are similar to rule 2 marking the initial words of basenps that directly follow another basenprule 7 marks conjunctions as i if they follow an i and precede a noun since such conjunctions are more likely to be embedded in a single basenp than to separate two basenps and rules 8 and 9 do the samefinally rule 10 picks up cases like quotincluding about four million sharesquot where quotaboutquot is used as a quantifier rather than prepositiona similar list of the first ten rules for the chunk task can be seen in table 6to gloss a few of these in the first rule here determiners which usually begin n chunks and thus are assigned the baseline tag bn have their chunk tags changed to n if they follow a word whose tag is also bnin rule 2 sites currently tagged n but which fall at the beginning of a sentence have their tags switched to bnrule 3 changes n to bn after a comma and in rule 4 locations tagged bn are switched to bv if the following location is tagged v and has the partofspeech tag vbthe fact that this system includes lexical rule templates that refer to actual words sets it apart from approaches that rely only on partofspeech tags to predict chunk structureto explore how much difference in performance those lexical rule templates make we repeated the above test runs omitting templates that refer to specific wordsthe results for these runs in tables 7 and 8 suggest that the lexical rules improve performance on the basenp chunk task by about 1 and on the partitioning chunk task by about 5 thus lexical rules appear to be making a limited contribution in determining basenp chunks but a more significant one for the partitioning chunksa rough hand categorization of a sample of the errors from a basenp run indicates that many fall into classes that are understandably difficult for any process using only local word and partofspeech patterns to resolvethe most frequent single confusion involved words tagged vbg and vbn whose baseline prediction given their 
partofspeech tag was 0 but which also occur frequently inside basenpsthe system did discover some rules that allowed it to fix certain classes of vbg and vbn mistaggings for example rules that retagged vbns as i when they preceded an nn or nns tagged ihowever many also remained unresolved and many of those appear to be cases that would require more than local word and partofspeech patterns to resolvethe second most common class of errors involved conjunctions which combined with the former class make up half of all the errors in the samplethe treebank tags the words quotandquot and frequently quotquot with the partofspeech tag cc which the baseline system again predicted would fall most often outside of a basenp3however the treebank parses do also frequently classify conjunctions of ns or nps as a single basenp and again there appear to be insufficient clues in the word and tag contexts for the current system to make the distinctionfrequently in fact the actual choice of structure assigned by the treebank annotators seemed largely dependent on semantic indications unavailable to the transformational learnerwe are planning to explore several different paths that might increase the system power to distinguish the linguistic contexts in which particular changes would be usefulone such direction is to expand the template set by adding templates that are sensitive to the chunk structurefor example instead of referring to the word two to the left a rule pattern could refer to the first word in the current chunk or the last word of the previous chunkanother direction would be to enrich the vocabulary of chunk tags so that they could be used during the learning process to encode contextual features for use by later rules in the sequencewe would also like to explore applying these same kinds of techniques to building larger scale structures in which larger units are assembled or predicateargument structures derived by combining chunksone interesting direction here would be to explore the use of chunk structure tags that encode a form of dependency grammar where the tag quotn2quot might mean that the current word is to be taken as partof the unit headed by the n two words to the rightby representing text chunking as a kind of tagging problem it becomes possible to easily apply transformationbased learningwe have shown that this approach is able to automatically induce a chunking model from supervised training that achieves recall and precision of 92 for basenp chunks and 88 for partitioning n and v chunkssuch chunking models provide a useful and feasible next step in textual interpretation that goes beyond partofspeech tagging and that serve as a foundation both for largerscale grouping and for direct extraction of subunits like index termsin addition some variations in the transformationbased learning algorithm are suggested by this application that may also be useful in other settingswe would like to thank eric brill for making his system widely available and ted briscoe and david yarowsky for helpful comments including the suggestion to test the system performance without lexical rule templatesnote that this is one of the cases where church chunker allows separate np fragments to count as chunks
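The chunk-tag encoding described earlier guarantees that any sequence of I/O/B tags can be mapped locally to a valid chunk structure, with a B that immediately follows an O treated as an I. The following is a minimal sketch of that decoding step, not the authors' code.

```python
def iob_to_chunks(tags):
    """Map a sequence of I/O/B chunk tags to baseNP spans, returned as
    (start, end) index pairs with end exclusive."""
    chunks, start, prev = [], None, "O"
    for i, tag in enumerate(tags):
        if tag == "B" and prev == "O":
            tag = "I"  # local repair: B after O is treated as I
        if tag == "O":
            if start is not None:
                chunks.append((start, i))
                start = None
        elif tag == "B" or (tag == "I" and start is None):
            if start is not None:      # B closes the previous chunk
                chunks.append((start, i))
            start = i
        prev = tag
    if start is not None:
        chunks.append((start, len(tags)))
    return chunks

print(iob_to_chunks(["I", "I", "O", "B", "I", "O", "I", "B", "I"]))
# -> [(0, 2), (3, 5), (6, 7), (7, 9)]
```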
W95-0107
text chunking using transformationbased learning eric brill introduced transformationbased learning and showed that it can do partofspeech tagging with fairly high accuracy the same method can be applied at a higher level of textual interpretation for locating chunks in the tagged text including nonrecursive basenp chunks for this purpose it is convenient to view chunking as a tagging problem by encoding the chunk structure in new tags attached to each word in automatic tests using treebankderived data this technique achieved recall and precision rates of roughly 92 for basenp chunks and 88 for somewhat more complex chunks that partition the sentence some interesting adaptations to the transformationbased learning approach are also suggested by this application we formalize chunking as a classification task in which each word is classified as the beginning inside or outside of a chunk we pioneer machine learning techniques for the chunking problem
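As a companion to the summary above, here is a toy sketch of the greedy transformation-based learning loop: start from a baseline tagging, instantiate candidate rules from simple templates at the sites that are currently wrong, and repeatedly adopt the rule with the best net score. The data, the single POS-at-offset rule template, and the scale are illustrative only and do not reproduce the original system or its indexing and pruning optimizations.

```python
# Candidate rule: (offset, pos_value, from_tag, to_tag), read as "change
# from_tag to to_tag when the POS tag at the given offset equals pos_value".
def apply_rule(rule, pos, tags):
    offset, pos_value, frm, to = rule
    out = list(tags)
    for i, t in enumerate(tags):
        j = i + offset
        if t == frm and 0 <= j < len(pos) and pos[j] == pos_value:
            out[i] = to
    return out

def correct(tags, gold):
    return sum(t == g for t, g in zip(tags, gold))

def learn_rules(pos, gold, baseline, max_rules=10):
    tags, rules = list(baseline), []
    for _ in range(max_rules):
        # instantiate candidate rules only at sites whose current tag is wrong
        candidates = {(off, pos[i + off], t, g)
                      for i, (t, g) in enumerate(zip(tags, gold)) if t != g
                      for off in (-1, 0, 1) if 0 <= i + off < len(pos)}
        if not candidates:
            break
        # score each candidate by its net effect (fixes minus breakages)
        scored = [(correct(apply_rule(r, pos, tags), gold), r)
                  for r in candidates]
        best_score, best_rule = max(scored)
        if best_score <= correct(tags, gold):
            break
        tags = apply_rule(best_rule, pos, tags)
        rules.append(best_rule)
    return rules, tags

# toy usage: the baseline wrongly tags the adjective position as outside
pos = ["DT", "NN", "VBD", "DT", "JJ", "NN"]
gold = ["I", "I", "O", "I", "I", "I"]
baseline = ["I", "I", "O", "I", "O", "I"]
print(learn_rules(pos, gold, baseline))
```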
automatic evaluation and uniform filter cascades for inducing nbest translation lexicons this paper shows how to induce an nbest translation lexicon from a bilingual text corpus using statistical properties of the corpus together with four external knowledge sources the knowledge sources are cast as filters so that any subset of them can be cascaded in a uniform framework a new objective evaluation measure is used to compare the quality of lexicons induced with different filter cascades the best filter cascades improve lexicon quality by up to 137 over the plain vanilla statistical method and approach human performance drastically reducing the size of the training corpus has a much smaller impact on lexicon quality when these knowledge sources are used this makes it practical to train on small handbuilt corpora for language pairs where large bilingual corpora are unavailable moreover three of the four filters prove useful even when used with large training corpora a machine translation system must be able to choose among possible translations based on contextto do this it usually relies on a translation lexicon that contains a number of possible translations for each wordn best translation lexicons contain up to n candidate translations for each word ordered from most probable to least probable sometimes specifying a priori probabilities or likelihood scoresexisting automatic methods for constructing nbest translation lexicons rely on the availability of large training corpora of parallel texts in the source and target languagesfor some methods the corpora must also be aligned by sentence bro93 ga1914 unfortunately such training corpora are available for only a handful of language pairs and the cost to create enough training data manually for new language pairs is very highthis paper presents the evaluation method uses a simple objective criterion rather than relying on subjective human judgesit allows many experiments to be run without concern about the cost availability and reliability of human evaluatorsthe filterbased approach is designed to identify likely pairs using a statistical decision procedurecandidate word pairs are drawn from a corpus of aligned sentences is a candidate if t appears in the translation of a sentence containing s in the simplest case the decision procedure considers all candidates for inclusion in the lexicon but the new framework allows a cascade of nonstatistical filters to remove inappropriate pairs from considerationeach filter is based on a particular knowledge source and can be placed into the cascade independently of the othersthe knowledge sources investigated here are bro94 investigated the statistical use of mrbds though not as filtersthe other three knowledge sources have not previously been used for the task of inducing translation lexiconsthe filterbased framework together with the fully automatic evaluation method allows easy investigation of the relative efficacy of cascades of each of the subsets of these four filtersas will be shown below some filter cascades sift candidate word pairs so well that training corpora small enough to be handbuilt can be used to induce more accurate translation lexicons than those induced from a much larger training corpus without such filtersin one evaluation a training corpus of 500 sentence pairs processed with these knowledge sources achieved a precision of 054 while a training corpus of 100000 training pairs alone achieved a precision of only 045such improvements could not be previously obtained because table 1 
provides a qualitative demonstration of how a lexicon entry gradually improves as more filters are appliedthe table contains actual entries for the french source word quotpremierquot from 7best lexicons that were induced from 5000 pairs of training sentences using different filter cascadesthe baseline lexicon induced with no filters contains correct translations only in the first and sixth positionsthe cognate filter disallows all candidate translations of french quotpremierquot whenever the english cognate quotpremierquot appears in the target english sentencethis causes english quotpremierquot to move up to second positionthe partofspeech filter realizes that quotpremierquot can only be an adjective in french whereas in the english hansards it is mostly used as a nounso it throws out that pairing along with several other english noun candidates allowing quotfirstquot to move up to third positionthe pos and cognate filters reduce noise better together than separatelymore of the incorrect translations are filtered out in the quotpos sz cogquot column making room for quotforemostquot finally the mrbd filter narrows the list down to just the three translations of french quotpremierquot that are appropriate in the hansard sublanguagepunctuation numbers etc also count as wordsall translation lexicons discussed in this paper were created and evaluated using the procedure in figure 1first candidate translations were generated for each pair of aligned training sentences by taking a simple crossproduct of the wordsnext the candidate translations from each pair of training sentences were passed through a cascade of filtersthe remaining candidate translations from all training sentence pairs were pooled together and fed into a fixed decision procedurethe output of the decision procedure was a model of word correspondences between the two halves of the training corpus a translation lexiconeach filter combination resulted in a different modelall the models were compared in terms of how well they represented a heldout test setthe evaluation was performed objectively and automatically using bitextbased lexicon evaluation bible assigned a score for each model and these scores were used to compare the effectiveness of various filter cascadesas shown in figure 1 the only independent variable in the framework is the cascade of filters used on the translation candidates generated by each sentence pair while the only dependent variable is a numerical scoresince the filters only serve to remove certain translation candidates any number of filters can be used in sequencethis arrangement allows for fair comparison of different filter combinationstranslation lexicon quality has traditionally been measured on two axes precision and recallrecall is the fraction of the source language vocabulary that appears in the lexiconprecision is the fraction of lexicon entries that are correctwhile the true size of the source vocabulary is usually unknown recall can be estimated using a representative text sample by computing the fraction of words in the text that also appear in the lexiconmeasuring precision is much more difficult because it is unclear what a quotcorrectquot lexicon entry is different translations are appropriate for different contexts and in most cases more than one translation is correctthis is why evaluation of translation has eluded automation efforts until nowthe large number of quantitative lexicon evaluations required for the present study made it infeasible to rely on evaluation by human judgesthe 
only existing automatic lexicon evaluation method that i am aware of is the perplexity comparisons used by brown et al in the framework of their model 1 bro93lexicon perplexity indicates how quotsurequot a translation lexicon is about its contentsit does not however directly measure the quality of those contentsbible is a family of algorithms based on the observation that translation pairs2 tend to appear in corresponding sentences in an aligned bilingual text corpus given a test set of aligned sentences a better translation lexicon will contain a higher fraction of the pairs in those sentencesthis fraction can be computed either by token or by type depending on the applicationif only the words in the lexicon are considered bible gives an estimate of precisionif all the words in the text are considered then bible measures percent correctthe greater the overlap between the vocabulary of the test bitext and the vocabulary of the lexicon being evaluated the more confidence can be placed in the bible scorethe bible approach is suitable for many different evaluation tasksbesides comparing different lexicons on different scales bible can be used to compare different parts of one lexicon that has been partitioned using some characteristic of its entriesfor example the quality of a lexicon noun entries can be compared to the quality of its adjective entries the quality of its entries for frequent words can be compared to the quality of its entries for rare wordslikewise separate evaluations can be performed for each k 1 2the pos filtei only degrades precision for large training corpora distributed 95 confidence intervals were estimated for each score using ten mutually exclusive training sets of each sizeall the confidence intervals were narrower than one percentage point at 500 pairs of training sentences and narrower than half of one percentage point at 2000 pairstherefore bible score differences displayed in figures 5 and 6 are quite reliablethe upper bound on performance for this task is plotted at 057 the better filter cascade produce lexicons whose precision comes close to this markthe best cascades are up to 137 more precise than the baseline modelthe large mrbd resulted in the most useful filter for this pair of languagesfuture research will look into why the mrbd contribution to lexicon precision decreases with more training datafigure 7 shows the relative performance of selected filters when the entire training set of one hundred thousand sentences is usedall the presented filters except the pos filter improve performance even when a large training corpus is availableevidently some information that is useful for inducing translation lexicons cannot be inferred from any amount of training data using only simple statistical methodsthe best precision for the single best translation is achieved by a cascade of the mrbd cognate and word alignment filtersto maximize precision for the best of three or more translations only the cognate filter should be useda machine translation system should not only translate with high precision but it should also have good coverage of the source languageso the product of recall and precision percent correct is a good indication of a lexicon suitability for use with such a systemthis statistic actually represents the percentage of words in the target test corpus that would be correctly translated from the source if the lexicon were used as a simple map therefore if the lexicon is to be used as part of a machineassisted translation system then the percent 
correct score will be inversely proportional to the required postediting timea simple strategy was adopted to demonstrate the practical utility of filters presented in this paperfirst the most precise filter cascade was selected by looking at figure 5translations were found for all words in the test source text that had entries in the lexicon induced using that cascadethen the second most precise filter cascade was selectedwords that the most precise lexicon quotdid not know aboutquot which were found in the second most precise lexicon were translated nextall the other available lexicons were cascaded this way in the order of their apparent precision down to the baseline lexiconthis quotcascaded backoffquot strategy maintained the recall of the baseline lexicon while taking advantage of the higher precision produced by various filter cascadesalthough more sophisticated translation strategies are certainly possible bible percent correct scores for cascaded lexicons suffice to test the utility of data filters for machine translationthe results in figure 8 indicate that the filters described in this paper can be used to improve the performance of lexical transfer models by more than 35the research presented here makes several contributions to research in machine translation and related fields the effectiveness of different data filters for inducing translation lexicons crucially depends on the particular pair of languages under considerationcognates are more common and therefore more useful in languages which are more closely relatedfor example one would expect to find more cognates between russian and ukrainian than between french and englishthe implementation of a part of speech filter for a given pair of languages depends on the availability of part of speech taggers for both languages where the two taggers have a small common tag setthe effectiveness of oracle filters based on mrbds will depend on the extent to which the vocabulary of the mrbd intersects with the vocabulary of the training textthis in turn depends partly on the size of the mrbdfilters based on word alignment patterns will only be as good as the model of typical word alignments between the pair of languages in questionfor languages with very similar syntax a linear model will sufficehigher order models will be required for a pair of languages like english and japanesefor the case of french and english each of the presented filters makes a significant improvement over the baseline modeltaken together the filters produce models which approach human performancethese conclusions could not have been drawn without a uniform framework for filter comparison or without a technique for automatic evaluationan automatic evaluation technique such as bible should be used to gauge the effectiveness of any mt system which has a lexical transfer componentbible objective criterion is quite simple with the drawback that it gives no indication of what kinds of errors exist in the lexicon being evaluatedeven so given a test corpus of a reasonable size it can detect very small differences in quality between two nbest translation lexiconsfor example bible evaluations were used to find the precise optimum value for the lcsr cutoff in the cognate filterbible also helped to select the optimum tag set for the pos filterthis kind of automatic quality control is indispensable for an engineering approach to better machine translationi am deeply grateful to george foster for postagging the french half of my text corpus to matthew stone for providing a 
second translation of some hansard text, and to the following people for valuable advice and discussions: ken church, michael collins, jason eisner, george foster, mark liberman, mitch marcus, adwait ratnaparkhi, jeff reynar, henry thompson, david yarowsky and four anonymous reviewers. this research was partially supported by aro contract daal038900031 and by arpa contract n6600194c6043.
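the bible scoring procedure described above reduces to a short function: given an nbest lexicon and a held-out set of aligned sentence pairs, count how often one of a source word's top k candidate translations actually occurs in the aligned target sentence. the sketch below is a minimal by-token variant; the dict-of-ranked-lists lexicon format, the function name and the k parameter are assumptions of this illustration, not details of the original implementation.

```python
def bible_scores(lexicon, test_bitext, k=1):
    """Score an n-best translation lexicon against aligned test sentences (by token).

    lexicon     : dict mapping each source word to a ranked list of candidate
                  translations (only the top k candidates are consulted).
    test_bitext : iterable of (source_tokens, target_tokens) sentence pairs.

    A source token counts as correctly translated if any of its top-k candidates
    appears in the aligned target sentence.  Restricting the count to tokens the
    lexicon covers estimates precision; counting over all tokens gives the
    "percent correct" figure used for whole-system comparisons.
    """
    hits = covered = total = 0
    for src_tokens, tgt_tokens in test_bitext:
        tgt_set = set(tgt_tokens)
        for w in src_tokens:
            total += 1
            candidates = lexicon.get(w)
            if candidates is None:
                continue
            covered += 1
            if any(c in tgt_set for c in candidates[:k]):
                hits += 1
    precision = hits / covered if covered else 0.0
    percent_correct = hits / total if total else 0.0
    return precision, percent_correct
```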
W95-0115
automatic evaluation and uniform filter cascades for inducing nbest translation lexicons. this paper shows how to induce an nbest translation lexicon from a bilingual text corpus, using statistical properties of the corpus together with four external knowledge sources. the knowledge sources are cast as filters, so that any subset of them can be cascaded in a uniform framework. a new objective evaluation measure is used to compare the quality of lexicons induced with different filter cascades. the best filter cascades improve lexicon quality by up to 137% over the plain vanilla statistical method, and approach human performance. drastically reducing the size of the training corpus has a much smaller impact on lexicon quality when these knowledge sources are used. this makes it practical to train on small handbuilt corpora for language pairs where large bilingual corpora are unavailable. moreover, three of the four filters prove useful even when used with large training corpora. we use the longest common subsequence ratio (lcsr) to measure similarity.
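the longest common subsequence ratio mentioned in the summary is the similarity measure behind the cognate filter. the usual definition (lcs length divided by the length of the longer word) is assumed in the sketch below, and the cutoff in the usage comment is illustrative only, not the value tuned with bible in the paper.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two strings, O(len(a)*len(b))."""
    prev = [0] * (len(b) + 1)
    for ch_a in a:
        curr = [0]
        for j, ch_b in enumerate(b, start=1):
            if ch_a == ch_b:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def lcsr(a: str, b: str) -> float:
    """Longest common subsequence ratio: LCS length over the longer word's length."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))

# illustrative usage: treat a candidate word pair as cognates if LCSR exceeds a
# cutoff (the actual cutoff was tuned with BiBLE in the paper; 0.6 is an example).
if __name__ == "__main__":
    print(lcsr("gouvernement", "government"))  # high ratio -> likely cognates
```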
mbt a memorybased part of speech taggergenerator we introduce a memorybased approach to part of speech tagging memorybased learning is a form of supervised learning based on similaritybased reasoning the part of speech tag of a word in a particular context is extrapolated from the most similar cases held in memory supervised learning approaches are useful when a tagged corpus is available as an example of the desired output of the tagger based on such a corpus the taggergenerator automatically builds a tagger which is able to tag new text the same way diminishing development time for the construction of a tagger considerably memorybased tagging shares this advantage with other statistical or machine learning approaches additional advantages specific to a memorybased approach include the relatively small tagged corpus size sufficient for training incremental learning explanation capabilities flexible integration of information in case representations its nonparametric nature reasonably good results on unknown words without morphological analysis and fast learning and tagging in this paper we show that a largescale application of the memorybased approach is feasible we obtain a tagging accuracy that is on a par with that of known statistical approaches and with attractive and time complexity properties when using treebased formalism for indexing and searching huge case bases the use of igtree has as additional advantage that optimal context size for disambiguation is dynamically computed part of speech tagging is a process in which syntactic categories are assigned to wordsit can be seen as a mapping from sentences to strings of tagsautomatic tagging is useful for a number of applications as a preprocessing stage to parsing in information retrieval in text to speech systems in corpus linguistics etcthe two factors determining the syntactic category of a word are its lexical probability and its contextual probability several approaches have been proposed to construct automatic taggersmost work on statistical methods has used ngram models or hidden markov modelbased taggers in these approaches a tag sequence is chosen for a sentence that maximizes the product of lexical and contextual probabilities as estimated from a tagged corpusin rulebased approaches words are assigned a tag based on a set of rules and a lexiconthese rules can either be handcrafted or learned as in hindle or the transformationbased errordriven approach of brill in a memorybased approach a set of cases is kept in memoryeach case consists of a word with preceding and following context and the corresponding category for that word in that contexta new sentence is tagged by selecting for each word in the sentence and its context the most similar case in memory and extrapolating the category of the word from these nearest neighborsa memorybased approach has features of both learning rulebased taggers and of stochastic taggers it is fundamentally a form of knearest neighbors modeling a wellknown nonparametric statistical pattern recognition techniquethe approach in its basic form is computationally expensive however each new word in context that has to be tagged has to be compared to each pattern kept in memoryin this paper we show that a heuristic case base compression formalism makes the memorybased approach computationally attractivememorybased learning is a form of supervised inductive learning from examplesexamples are represented as a vector of feature values with an associated category labelduring training a set of 
examples is presented in an incremental fashion to the classifier and added to memoryduring testing a set of previously unseen featurevalue patterns is presented to the systemfor each test pattern its distance to all examples in memory is computed and the category of the least distant instance is used as the predicted category for the test patternthe approach is based on the assumption that reasoning is based on direct reuse of stored experiences rather than on the application of knowledge abstracted from experiencein at the concept has appeared in several disciplines using terminology such as similaritybased examplebased memorybased exemplarbased casebased analogical lazy nearestneighbour and instancebased ideas about this type of analogical reasoning can be found also in nonmainstream linguistics and pyscholinguistics in computational linguistics the general approach has only recently gained some popularity eg cardie daelemans jones federici and pirrelli performance of a memorybased system crucially depends on the distance metric usedthe most straightforward distance metric would be the one in equation where x and y are the patterns to be compared and 6 is the distance between the values of the ith feature in a pattern with n featuresdistance between two values is measured using equation an overlap metric for symbolic features we will refer to this approach as ib1 we extended the algorithm described there in the following way in case a pattern is associated with more than one category in the training set the distribution of patterns over the different categories is kept and the most frequently occurring category is selected when the ambiguous pattern is used to extrapolate fromin this distance metric all features describing an example are interpreted as being equally important in solving the classification problem but this is not necessarily the casein tagging the focus word to be assigned a category is obviously more relevant than any of the words in its contextwe therefore weigh each feature with its information gain a number expressing the average amount of reduction of training set information entropy when knowing the value of the feature we will call this algorithm ibigmemorybased learning is an expensive algorithm of each test item all feature values must be compared to the corresponding feature values of all training itemswithout optimisation it has an asymptotic retrieval complexity of 0 the same asymptotic complexity is of course found for memory storage in this approachwe use igtrees to compress the memoryigtree is a heuristic approximation of the ibig algorithmigtree combines two algorithms one for compressing a case base into a trees and one for retrieving classification information from these treesduring the construction of igtree decision trees cases are stored as paths of connected nodesall nodes contain a test and a class label nodes are connected via arcs denoting the outcomes for the test a feature relevance ordering technique is used to determine the order in which features are used as tests in the treethis order is fixed in advance so the maximal depth of the tree is always equal to the number of features and at the same level of the tree all nodes have the same test the reasoning behind this reorganisation is that when the computation of feature relevance points to one feature clearly being the most important in classification search can be restricted to matching a test case to those stored cases that have the same feature value at that featurebesides restricting 
search to those memory cases that match only on this feature the case memory can be optimised by further restricting search to the procedure buildigtree input second most important feature followed by the third most important feature etca considerable compression is obtained as similar cases share partial pathsinstead of converting the case base to a tree in which all cases are fully represented as paths storing all feature values we compress the tree even more by restricting the paths to those input feature values that disambiguate the classification from all other cases in the training materialthe idea is that it is not necessary to fully store a case as a path when only a few feature values of the case make its classification uniquethis implies that feature values that do not contribute to the disambiguation of the case classification are not stored in the treein our tagging application this means that only context feature values that actually contribute to disambiguation are used in the construction of the treeleaf nodes contain the unique class label corresponding to a path in the treenonterminal nodes contain information about the most probable or default classification given the path thus far according to the bookkeeping information on class occurrences maintained by the tree construction algorithmthis extra information is essential when using the tree for classificationfinding the classification of a new case involves traversing the tree and either retrieving a classification when a leaf is reached or using the default classification on the last matching nonterminal node if a featurevalue match failsa final compression is obtained by pruning the derived treeall leafnode daughters of a mother node that have the same class as that node are removed from the tree as their class information does not contradict the default class information already present at the mother nodeagain this compression does not affect igtree generalisation performancethe recursive algorithms for tree construction and retrieval are given in figures 1 and 2for a detailed discussion see daelemans et al the asymptotic complexity of igtree is extremely favorablecomplexity of searching a query pattern in the tree is proportional to f log where f is the number of features and v is the average number of values per feature in ib1 search complexity is 0 retrieval by search in the tree is independent from the number of training cases and therefore especially useful for large case basesstorage requirements are proportional to n for ib1finally the cost of building the tree on the basis of a set of cases is proportional to n log f in the worst case for training in ib1in practice for our partofspeech tagging experiments igtree retrieval is 100 to 200 times faster than normal memorybased retrieval and uses over 95 less memorythe architecture takes the form of a tagger generator given a corpus tagged with the desired tag set a pos tagger is generated which maps the words of new text to tags in this tag set according to the same systematicitythe construction of a pos tagger for a specific corpus is achieved in the following waygiven an annotated corpus three datastructures are automatically extracted a lexicon a case base for known words and a case base for unknown wordscase bases are indexed using igtreeduring tagging each word in the text to be tagged is looked up in the lexiconif it is found its lexical representation is retrieved and its context is determined and the resulting pattern is looked up in the known words case 
basewhen a word is not found in the lexicon its lexical representation is computed on the basis of its form its context is determined and the resulting pattern is looked up in the unknown words case basein each case output is a best guess of the category for the word in its current contextin the remainder of this section we will describe each step in more detailwe start from a training set of tagged sentences t a lexicon is extracted from t by computing for each word in t the number of times it occurs with each categoryeg when using the first 2 million words of the wall street journal corpus as t the word once would get the lexical definition rb 330 in 77 ie once was tagged 330 times as an adverb and 77 times as a prepositionsubordinating conjunction2 using these lexical definitions a new possibly ambiguous tag is produced for each word typeeg once would get a new tag representing the category of words which can be both adverbs and prepositionsconjunctions frequency order is taken into account in this process if there would be words which like once can be rb or in but more frequently in than rb then a different tag is assigned to these wordsthe original tag set consisting of 44 morphosyntactic tags was expanded this way to 419 tagsin the wsj example the resulting lexicon contains 57962 word types 7464 of which are ambiguouson the same training set 76 of word tokens are ambiguouswhen tagging a new sentence words are looked up in the lexicondepending on whether or not they can be found there a case representation is constructed for them and they are retrieved from either the known words case base or the unknown words case basea windowing approach was used to represent the tagging task as a classification problema case consists of information about a focus word to be tagged its left and right context and an associated category valid for the focus word in that contextthere are several types of information which can be stored in the case base for each word ranging from the words themselves to intricate lexical representationsin the preliminary experiments described in this paper we limited this information to the possibly ambiguous tags of words for the focus word and its context to the right and the disambiguated tags of words for the left context table 1 is a sample of the case base for the first sentence of the corpus when using this case representationthe final column shows the target category the disambiguated tag for the focus wordwe will refer to this case representation as ddf at the information gain values are given as wella search among a selection of different context sizes suggested ddf at as a suitable case representation for tagging known wordsan interesting property of memorybased learning is that case representations can be easily extended with different sources of information if available the information gain feature relevance ordering technique achieves a delicate relevance weighting of different information sources when they are fused in a single case representationthe window size used by the algorithm will also dynamically change depending on the information present in the context for the disambiguation of a particular focus symbol in most taggers some form of morphological analysis is performed on unknown words in an attempt to relate the unknown word to a known combination of known morphemes thereby allowing its association with one or more possible categoriesafter determining this ambiguous category the word is disambiguated using context knowledge the same way as known 
wordsmorphological analysis presupposes the availability of highly languagespecific resources such as a morpheme lexicon spelling rules morphological rules and heuristics to prioritise possible analyses of a word according to their plausibilitythis is a serious knowledge engineering bottleneck when the goal is to develop a language and annotationindependent tagger generatorin our memorybased approach we provide morphological information indirectly to the tagger by encoding the three last letters of the word as separate features in the case representationthe first letter is encoded as well because it contains information about prefix and capitalization of the wordcontext information is added to the case representation in a similar way as with known wordsit turned out that in combination with the morphological features a context of one disambiguated tag of the word to the left of the unknown word and one ambiguous category of the word to the right gives good resultswe will call this case representation pdassst3 three suffix letters one prefix letter one left disambiguated context words and one ambiguous right context word as the chance of an unknown word being a function word is small and cases representing function words may interfere with correct classification of openclass words only openclass words are used during construction of the unknown words case basetable 2 shows part of the case base for unknown wordsfigure 3 shows the architecture of the taggergenerator a tagger is produced by extracting a lexicon and two casebases from the tagged example corpusduring tagging the control is the following words are looked up in the lexicon and separated into known and unknown wordsthey are retrieved from the known words case base and the unknown words case base respectivelyin both cases context is used in the case of unknown words the first and three last letters of the word are used instead of the ambiguous tag for the focus wordas far as disambiguated tags for left context words are used these are of course not obtained by retrieval from the lexicon but by using the previous decisions of the taggeras explained earlier both case bases are implemented as igtreesfor the known words case base paths in the tree represent variable size context widthsthe first feature is the focus word then context features are added as further expansions of the tree until the context disambiguates the focus word completelyfurther expansion is halted at that pointin some cases short context sizes are sufficient to disambiguate a focus word in other cases more context is neededigtrees provide an elegant way of automatic determination of optimal context sizein the unknown words case base the trie representation provides an automatic integration of information about the form and the context of a focus word not encountered beforein general the top levels of the tree represent the morphological information while the deeper levels contribute contextual disambiguationin this section we report first results on our memorybased tagging approachin a first set of experiments we compared our igtree implementation of memorybased learning to more traditional implementations of the approachin further experiments we studied the performance of our system on predicting the category of both known and unknown wordsthe experimental methodology was taken from machine learning practice independent training and test sets were selected from the original corpus the system was trained on the training set and the generalization accuracy was 
computed on the independent test setstorage and time requirements were computed as wellwhere possible we used a 10fold crossvalidation approachin this experimental method a data set is partitioned ten times into 90 training material and 10 testing materialaverage accuracy provides a reliable estimate of the generalization accuracyour goal is to adhere to the concept of memorybased learning with full memory while at the same time keeping memory and processing speed within attractive boundsto this end we applied the igtree formalism to the taskin order to prove that igtree is a suitable candidate for practical memorybased tagging we compared three memorybased learning algorithms ib1 a slight extension of the wellknown knn algorithm in statistical pattern recognition ib1ig an extension of ib1 which uses feature relevance weighting and igtree a memory and processing time saving heuristic implementation of ib1ig table 3 lists the results in generalization accuracy storage requirements and speed for the three algorithms using a ddf at pattern a 100000 word training set and a 10000 word test setin this experiment accuracy was tested on known words onlythe igtree version turns out to be better or equally good in terms of generalization accuracy but also is more than 100 times faster for tagging of new words4 and compresses the original case base to 4 of the size of the original case basethis experiment shows that for this problem we can use igtree as a time and memory saving approximation of memorybased learning without loss in generalization accuracythe time and speed advantage of igtree grows with larger training setsa tenfold crossvalidation experiment on the first two million words of the wsj corpus shows an average generalization performance of igtree of 963we did 10fold crossvalidation experiments for several sizes of datasets revealing the learning curve in figure 4training set size is on the xaxis generalization performance as measured in a 10fold crossvalidation experiment is on the yaxis the error range indicate averages plus and minus one standard deviation on each 10fold crossvalidation experimentalready at small data set sizes performance is relatively highwith increasingly larger data sets the performance becomes more stable it should be noted that in this experiment we assumed correctly disambiguated tags in the left contextin practice when using our tagger this is of course not the case because the disambiguated tags in the left context of the current word to be tagged are the result of a previous decision of the tagger which may be a mistaketo test the influence of this effect we performed a third experimentwe performed the complete tagger generation process on a 2 million words training set and tested on 200000 test wordsperformance on known words unknown words and total are given in table 4in this experiment numbers were not stored in the known words case base they are looked up in the unknown words case basewe are not convinced that variation in the results of the experiments in a 10foldcv setup is statistically meaningful but follow common practice herea casebased approach similar to our memorybased approach was also proposed by cardie for sentence analysis in limited domains we will discuss only the reported pos tagging results hereusing a fairly complex case representation based on output from the circus conceptual sentence analyzer and with a tag set of 18 tags she reports a 95 tagging accuracya decisiontree learning approach to feature selection is used in this experiment 
to discard irrelevant featuresresults are based on experiments with 120 randomly chosen sentences from the tipster jv corpus cardie reports 891 correct tagging for unknown wordspercentage unknown words was 206 of the test words and overall tagging accuracy 95notice that her algorithm gives no initial preference to training cases that match the test word during its initial case retrievalon the other hand after retrieving the top k cases the algorithm does prefer those cases that match the test word when making its final predictionsso it is understandable that the algorithm is doing better on words that it is seen during training as opposed to unknown wordsin our memorybased approach feature weighting for determining the relevance of features is integrated more smoothly with the similarity metric and our results are based on experiments with a larger corpus our case representation is simpler only the tags not the words themselves or any other information are usedthe most important improvement is the use of igtree to index and search the case base solving the computational complexity problems a casebased approach would run into when using large case basesan approach based on knn methods is a statistical approach but it uses a different kind of statistics than markov modelbased approachesknn is a nonparametric technique it assumes no fixed type of distribution of the datathe most important advantages compared to current stochastic approaches are that few training items are needed for relatively good performance the approach is incremental adding new cases does not require any recomputation of probabilities and it provides explanation capabilities and it requires no additional smoothing techniques to avoid zeroprobabilities the igtree takes care of thatcompared to handcrafted rulebased approaches our approach provides a solution to the knowledgeacquisition and reusability bottlenecks and to robustness and coverage problems compared to learning rulebased approaches such as the one by brill a knn approach provides a uniform approach for all disambiguation tasks more flexibility in the engineering of case representations and a more elegant approach to handling of unknown words we have shown that a memorybased approach to largescale tagging is feasible both in terms of accuracy and also in terms of computational efficiency when using igtree to compress and index the case basethe approach combines some of the best features of learned rulebased and statistical systems more specifically memorybased tagging with igtrees has the following advantageswe have barely begun to optimise the approach a more intelligent similarity metric would also take into account the differences in similarity between different values of the same featureeg the similarity between the tags rbinnn and rbin should be bigger than the similarity between rbin and vbnnapart from linguistic engineering refinements of the similarity metric we are currently experimenting with statistical measures to compute such more finegrained similarities research of the first author was done while he was a visiting scholar at nias in wassenaarthanks to antal van den bosch ton weijters and gert durieux for discussions about tagging igtree and machine learning of natural language
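the ib1-ig classifier described above (0/1 overlap distance between symbolic feature values, weighted by each feature's information gain, with the most frequent category among the nearest stored cases as the prediction) can be sketched directly. the code below is a simplified illustration of that scheme, not the authors' implementation; the function names and the k parameter are placeholders.

```python
import math
from collections import Counter

def information_gain(cases, feature_index):
    """IG of one feature: dataset entropy minus expected entropy after splitting on it."""
    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())
    labels = [cat for _, cat in cases]
    base = entropy(labels)
    by_value = {}
    for features, cat in cases:
        by_value.setdefault(features[feature_index], []).append(cat)
    expected = sum(len(ls) / len(cases) * entropy(ls) for ls in by_value.values())
    return base - expected

def ib1_ig_classify(cases, weights, query, k=1):
    """Weighted-overlap nearest neighbours: each mismatching feature adds its IG weight."""
    def distance(stored_features):
        return sum(w for w, a, b in zip(weights, stored_features, query) if a != b)
    ranked = sorted(cases, key=lambda case: distance(case[0]))
    votes = Counter(cat for _, cat in ranked[:k])
    return votes.most_common(1)[0][0]

# the weights are computed once from the training cases, e.g.:
# weights = [information_gain(train_cases, i) for i in range(num_features)]
```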
W96-0102
mbt: a memorybased part of speech taggergenerator. we introduce a memorybased approach to part of speech tagging. memorybased learning is a form of supervised learning based on similaritybased reasoning: the part of speech tag of a word in a particular context is extrapolated from the most similar cases held in memory. supervised learning approaches are useful when a tagged corpus is available as an example of the desired output of the tagger. based on such a corpus, the taggergenerator automatically builds a tagger which is able to tag new text the same way, diminishing development time for the construction of a tagger considerably. memorybased tagging shares this advantage with other statistical or machine learning approaches. additional advantages specific to a memorybased approach include the relatively small tagged corpus size sufficient for training, incremental learning, explanation capabilities, flexible integration of information in case representations, its nonparametric nature, reasonably good results on unknown words without morphological analysis, and fast learning and tagging. in this paper we show that a largescale application of the memorybased approach is feasible: we obtain a tagging accuracy that is on a par with that of known statistical approaches, and with attractive space and time complexity properties, when using igtree, a treebased formalism for indexing and searching huge case bases. the use of igtree has as additional advantage that optimal context size for disambiguation is dynamically computed. our tagger uses a very finegrained tag set.
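the windowed case representations used by the tagger generator (disambiguated tags for the left context and ambiguous tags for the focus word and right context for known words; first letter, last three letters and a one-tag context on each side for unknown words) can be sketched as two small extraction functions. the exact feature order of the ddfat and pdassst patterns is not reproduced here, and the padding symbol and function names are assumptions of this sketch; at tagging time the left-context tags come from the tagger's previous decisions rather than from gold annotation.

```python
def extract_known_case(amb_tags, disamb_tags, i, left=2, right=1):
    """Windowed case for a known focus word at position i.

    amb_tags    : per-token ambiguous tag from the lexicon (e.g. "rb_in")
    disamb_tags : per-token disambiguated tag (gold during training,
                  the tagger's own output for the left context at test time)
    Returns (features, target) roughly following the ddfat representation.
    """
    pad = "_"
    left_feats = [disamb_tags[j] if j >= 0 else pad for j in range(i - left, i)]
    right_feats = [amb_tags[j] if j < len(amb_tags) else pad
                   for j in range(i + 1, i + 1 + right)]
    features = tuple(left_feats + [amb_tags[i]] + right_feats)
    return features, disamb_tags[i]

def extract_unknown_case(words, amb_tags, disamb_tags, i):
    """Case for an unknown focus word: first letter, last three letters,
    one disambiguated left tag and one ambiguous right tag (cf. pdassst)."""
    pad = "_"
    w = words[i]
    suffix = (pad * 3 + w)[-3:]          # last three letters, padded if the word is short
    features = (w[0],
                disamb_tags[i - 1] if i > 0 else pad,
                suffix[0], suffix[1], suffix[2],
                amb_tags[i + 1] if i + 1 < len(words) else pad)
    return features, disamb_tags[i]
```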
comparative experiments on disambiguating word senses an illustration of the role of bias in machine learning this paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context the algorithms tested include statistical neuralnetwork decisiontree rulebased and casebased classification techniques the specific problem tested involves disambiguating six senses of the word quotlinequot using the words in the current and proceeding sentence as context the statistical and neuralnetwork methods perform the best on this particular problem and we discuss a potential reason for this obdifference we also discuss the role of in machine learning and its importance in explaining performance differences observed on specific problems recent research in empirical natural language processing has explored a number of different methods for learning from datathree general approaches are statistical neuralnetwork and symbolic machine learning and numerous specific methods have been developed under each of these paradigms an important question is whether some methods perform significantly better than others on particular types of problemsunfortunately there have been very few direct comparisons of alternative methods on identical test dataa somewhat indirect comparison of applying stochastic contextfree grammars a transformationbased method and inductive logic programming to parsing the atis corpus from the penn treebank indicates fairly similar performance for these three very different methodsalso comparisons of bayesian informationretrieval neuralnetwork and casebased methods on wordsense disambiguation have also demonstrated similar performance however in a comparison of neuralnetwork and decisiontree methods on learning to generate the past tense of an english verb decision trees performed significantly better subsequent experiments on this problem have demonstrated that an inductive logic programming method produces even better results than decision trees in this paper we present direct comparisons of a fairly wide range of general learning algorithms on the problem of discriminating six senses of the word quotlinequot from context using data assembled by leacock et al we compare a naive bayesian classifier a perceptron a decisiontree learner a k nearestneighbor classifier logicbased dnf and cnf learners and a decisionlist learner tests on all methods used identical training and test sets and ten separate random trials were run in order to measure average performance and allow statistical testing of the significance of any observed differenceson this particular task we found that the bayesian and perceptron methods perform significantly better than the remaining methods and discuss a potential reason for this observed differencewe also discuss the role of bias in machine learning and its importance in explaining the observed differences in the performance of alternative methods on specific problemsresearch in machine learning over the last ten years has been particularly concerned with experimental comparisons and the relative performance of different classification methods in particular the uci machine learning data repository was assembled to facilitate empirical comparisonsexperimental comparisons of different methods on various benchmark problems have generally found relatively small differences in predictive accuracy however on specific problems certain methods can demonstrate a significant advantagefor 
example on the problem of detecting promoter sequences in dna neuralnetwork and similar methods perform significantly better than symbolic induction methods on the other hand as mentioned in the introduction symbolic induction methods perform significantly better than neuralnetworks on the problem of learning to generate the past tense of an english verb it is generally agreed that the philosophical problem of induction means that no inductive algorithm is universally better than any otherit can be proven that when averaged over a uniform distribution of all possible classification problems the generalization performance of any inductive algorithm is zerothis has been called the quotconservation law for generalization performancequot or a quotno free lunchquot theorem however averaging over a uniform distribution of all possible functions is effectively equivalent to assuming a quotrandom universequot in which the past is not predictive of the futureif all problems are not equally likely the expected generalization performance over a distribution of realworld problems can of course be positive in machine learning bias refers to quotany basis for choosing one generalization over another other than strict consistency with the instancesquot decisiontree methods have a bias for simple decision trees rule induction methods have a bias for simple dnf expressions neuralnetwork methods have a bias for linear threshold functions 1 and naive bayes has a bias for functions which respect conditional independence of featuresthe more the bias of a certain although multilayer networks with sufficient hidden can represent arbitrary nonlinear functions they will tend to learn a linear function if one exists that is consistent with the training data learning algorithm fits the characteristics of a particular problem the better it will perform on that problemmost learning algorithms have some sort of quotoccam razorquot bias in which hypotheses that can be represented with fewer bits in some particular representation language are preferred however the compactness with which different representation languages can represent particular functions can vary dramatically therefore different biases can perform better or worse on specific problemsone of the main goals of machine learning is to find biases that perform well on the distribution of problems actually found in the real worldas an example consider the advantage neuralnetworks have on the promoter recognition problem mentioned earlierthere are several potential sites where hydrogen bonds can form between the dna and a protein and if enough of these bonds form promoter activity can occurthis is represented most compactly using an mofn classification function which returns true if any subset of size m of n specified features are present in an example a single linear threshold unit can easily represent such functions whereas a dnf expression requires quotn choose mquot terms to represent themtherefore the difference in their ability to compactly represent such functions explains the observed performance difference between rule induction and neuralnetworks on this problem2 of course picking the right bias or learning algorithm for a particular task is a difficult problema simple approach is to automate the selection of a method using internal crossvalidation another approach is to use metalearning to learn a set of rules that predicts when a learning algorithm will perform best on a domain given features describing the problem a recent special issue of the 
machine learning journal on quotbias evaluation and selectionquot introduced by gordon and desjardins presents current research in this general areaseveral recent research projects have taken a corpusbased approach to lexical disambiguation the goal is to learn to use surrounding context to determine the sense of an ambiguous wordour tests are based on the corpus assembled by leacock et al the task is to disambiguate the word quotlinequot into one of six possible senses based on the words occurring in the current and previous sentencethe corpus was assembled from the 198789 wall street journal and a 25 million word corpus from the american printing house for the blindsentences containing quotlinequot were extracted and assigned a single sense from wordnet there are a total of 4149 examples in the full corpus unequally distributed across the six sensesdue to the use of the wall street journal the quotproductquot sense is more than 5 times as common as any of the othersprevious studies have first sampled the data so that all senses were equally representedleacock et al leacock towell and voorhees and voorhees leacock and towell present results on a bayesian method a content vector method from information retrieval and a neural network trained using backpropagation the neural network architecture that performed at least as well as any other contained no hidden units so was effectively equivalent to a perceptronon the sixsense task trained on 1200 examples and averaged over three random trials they report the following generalization accuracies bayesian 71 content vectors 72 neural nets 76none of these differences were statistically significant given the small number of trialsin these studies the data for the contentvector and neuralnetwork methods was first reduced by ignoring case and reducing words to stems computing computation etc are all conflated to the feature comput and removing a set of about 570 highfrequency stopwords similar preprocessing was performed for the current experiments but we can not guarantee identical resultsthe result was a set of 2094 examples equally distributed across the six senses where each example was described using 2859 binary features each representing the presence or absence of a particular word stem in the current or immediately preceding sentencethe current experiments test a total of seven different learning algorithms with quite different biasesthis section briefly describes each of these algorithmsexcept for c45 which uses the c code provided by quinian all of these methods are implemented in common lisp and available online at httpwwwcsutexaseduusersmlmlprogshtmlall systems were run on a sun sparcstation 5 with 40mb of main memorythe simplest algorithms tested were a naive bayesian classifier which assumes conditional independence of features and a k nearestneighbor classifier which assigns a test example to the majority class of the 3 closest training examples initial results indicated that k nearest neighbor with k3 resulted in slightly better performance than k1naive bayes is intended as a simple representative of statistical methods and nearest neighbor as a simple representative of instancebased methods since the previous results of leacock et al indicated that neural networks did not benefit from hidden units on the quotlinequot disambiguation data we employed a simple perceptron as a representative connectionist methodthe implementation learns a separate perceptron for recognizing each sense and assigns a test case to the sense indicated by 
the perceptron whose output most exceeds its thresholdin the current experiments there was never a problem with convergence during trainingas a representative of decisiontree methods we chose c45 a system that is easily available and included in most recent experimental comparisons in machine learningall parameters were left at their default valueswe also tested c45rules a variant of c45 in which decision trees are translated into rules and pruned however its performance was slightly inferior to the base c45 system on the quotlinequot corpus therefore its results are not includedfinally we tested three simple logicbased induction algorithms that employ different representations of concepts dnf cnf and decision listsmost rulebased methods egmichalski induce a disjunctive set of conjunctive rules and therefore represent concepts in dnfsome recent results have indicated that representing concepts in cnf frequently performs somewhat better some concepts are more compactly represented in cnf compared to dnf and vice versatherefore both representations are includedfinally decision lists are ordered lists of conjunctive rules where rules are tested in order and the first one that matches an instance is used to classify ita number of effective conceptlearning systems have employed decision lists and they have already been successfully applied to lexical disambiguation all of the logicbased methods are variations of the foil algorithm for induction of firstorder functionfree horn clauses appropriately simplified for the propositional casethey are called pfoildnf pfoilcnf and pfoildlistthe algorithms are greedy covering methods that use an informationtheoretic heuristic to guide a topdown search for a simple definition consistent with the training datapfoildnf learns a separate dnf description for each sense using the examples of that sense as positive instances and the examples of all other senses as negative instancesmooney describes pfoildnf and pfoilcnf in more detail and pfoildlist is based on the firstorder decisionlist learner described by mooney and calif experiments in order to evaluate the performance of these seven algorithms direct multitrial comparisons on identical training and test sets were run on the quotlinequot corpussuch headtohead comparisons of methods are unfortunately relatively rare in the empirical naturallanguage literature where papers generally report results of a single method on a single training set with at best indirect comparisons to other methodslearning curves were generated by splitting the preprocessed quotlinequot corpus into 1200 training examples and 894 test cases training all methods on an increasingly larger subset of the training data and repeatedly testing them on the test setlearning curves are fairly common in machine learning but not in corpusbased language researchwe believe they are important since they reveal how algorithms perform with varying amounts of training data and how their performance improves with additional trainingresults on a fixedsized training set gives only one data point on the learning curve and leaves the possibility that differences between algorithms are hidden due to a ceiling effect in which there are sufficient training examples for all methods to reach near bayesoptimal performance3 learning 3bayesoptimal performance is achieved by always picking the category with the maximum probability given all of its featuresthis requires actually knowing the conditional probability of each category given each of the exponentially 
large number of possible instance descriptions curves generally follow a power law where predictive accuracy climbs fairly rapidly and then levels off at an asymptotic levela learning curve can reveal whether the performance of a system is approaching an asymptote or whether additional training data would likely result in significant improvementsince gathering annotated training data is an expensive timeconsuming process it is important to understand the performance of methods given varying amounts of training datain addition to measuring generalization accuracy we also collected data on the cpu time taken to train and test each method for each trainingset size measured on the learning curvethis provides information on the computational resources required by each method which may also be useful in deciding between them for particular applicationsit also provides data on how the algorithm scales by providing information on how training time grows with trainingset sizefinally all results are averaged over ten random selections of training and test setsthe performance of a system can vary a fair bit from trial to trial and a difference in accuracy on a single training set may not indicate an overall performance advantageunfortunately most results reported in empirical naturallanguage research present only a single or very small number of trialsrunning multiple trials also allows for statistical testing of the significance of any resulting differences in average performancewe employ a simple twotailed paired ttest to compare the performance of two systems for a given trainingset size requiring significance at the 005 leveleven more sophisticated statistical analysis of the results is perhaps warrantedthe resulting learning curves are shown in figure 1 and results on training and testing time are shown in figures 2 and 3figure 3 presents the time required to classify the complete set of 894 test exampleswith respect to accuracy naive bayes and perceptron perform significantly better than all other methods for all trainingset sizesnaive bayes and perceptron are not significantly different except at 1200 training examples where naive bayes has a slight advantagenote that the results for 1200 training examples are comparable to those obtained by leacock et al for similar methodspfoildlist is always significantly better than pfoildnf and pfoitcnf and significantly better than 3 nearest neighbor and c45 at 600 and 1200 training examplesc45 and 3 nearest neighbor are always significantly better than pfoitdnf and pfoilcnf but not significantly different from each otherfinally pfoildnf is significantly better than pfoilcnf at 600 and 1200 training exampleswith respect to training time virtually all differences are significantthe logicbased induction methods are slowest c45 and perceptron intermediate and naive bayes the fastestsince it just stores examples training time for nearest neighbor is always zeroin general connectionist methods are much slower to train than alternative techniques however in this case a simple perceptron converges quite rapidlywith respect to testing time the symbolic induction methods are fastest and almost indistinguishable from zero in figure 3 since they only need to test a small subset of the features4 all visible differences in the graph are significantnaive bayes is the slowest both it and perceptron have the constant overhead of computing a weighted function over all of the almost 3000 featuresnearest neighbor grows linearly with the number of training instances as 
expected more sophisticated indexing methods can reduce this to logarithmic expected time 5naive bayes and perceptron are similar in that they both employ a weighted combination of all featuresthe decisiontree and logicbased approaches all attempt to find a combination of a relatively small set of features that accurately predict classificationafter training on 1200 examples the symbolic structures learned for the line corpus are relatively largeaverage sizes are 369 leaves for c45 decision trees 742 literals for prilld list decision lists 841 literals for pfoildnf formulae and 1197 literals for pfoilcnf formulaehowever many nodes or literals can test the same feature and the last two results include the total literal count for six separate dnf or cnf formulae therefore each discrimination is clearly only testing a relatively small fraction of the 2859 available featuresnearest neighbor bases its classifications on all features however it weights them all equallytherefore differential weighting is apparently necessary for highperformance on this problemalternative instancebased methods that weight features based on their predictive ability have also been developed therefore our results indicate that lexical disambiguation is perhaps best performed using methods that combine weighted evidence from all of the features rather tures actually present in the exampleswithout this optimization testing would have been several orders of magnitude slower than making a decision by testing only a small subset of highly predictive featuresamong the other methods tested decision lists seem to perform the bestthe ordering of rules employed in a decision list in order to simplify the representation and perform conflict resolution apparently gives it an advantage over other symbolic methods on this taskin addition to the results reported by yarowsky and mooney and calif it provides evidence for the utility of this representation for naturallanguage problemswith respect to training time the symbolic methods are significantly slower since they are searching for a simple declarative representation of the conceptempirically the time complexity for most methods are growing somewhat worse than linearly in the number of training examplesthe worst in this regard are pfoildnf and pfoilcnf which have a worstcase complexity of 0 however all of the methods are able to process fairly large sets of data in reasonable timewith respect to testing time the symbolic methods perform the best since they only need to test a small number of features before making a decisiontherefore in an application where response time is critical learned rules or decision trees could provide rapid classification with only a modest decrease in accuracynot surprisingly there is a tradeoff between training time and testing time the symbolic methods spend more effort during training compressing the representation of the learned concept resulting in a simpler description that is quicker to testthe current results are for only one simple encoding of the lexical disambiguation problem into a feature vector representing an unordered set of word stemsthis paper has focused on exploring the space of possible algorithms rather than the space of possible input representationsalternative encodings which exploit positional information syntactic word tags syntactic parse trees semantic information etc should be tested to determine the utility of more sophisticated representationsin particular it would be interesting to see if the accuracy ranking of the 
seven algorithms is affected by a change in the representationsimilar comparisons of a range of algorithms should also be performed on other natural language problems such as partofspeech tagging prepositional phrase attachment anaphora resolution etcsince the requirements of individual tasks vary different algorithms may be suitable for different subproblems in natural language processingthis paper has presented fairly comprehensive experiments comparing seven quite different empirical methods on learning to disambiguate words in contextmethods that employ a weighted combination of a large set of features such as simple bayesian and neuralnetwork methods were shown to perform better than alternative methods such as decisiontree rulebased and instancebased techniques on the problem of disambiguating the word quotlinequot into one of six possible senses given the words that appear in the current and previous sentence as contextalthough different learning algorithms can frequently perform quite similarly they all have specific biases in their representation of concepts and therefore can illustrate both strengths and weaknesses in particular applicationsonly rigorous experimental comparisons together with a qualitative analysis and explanation of their results can help determine the appropriate methods for particular problems in natural language processingthis research was partially supported by the national science foundation through grant iri9310819i would also like to thank goeff towell for providing access to the quotlinequot corpus
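a minimal version of the best-performing method in this comparison, a naive bayesian classifier over bag-of-word-stem features, is sketched below. the add-one smoothing and the class interface are assumptions of this sketch rather than details reported in the paper.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    """Bag-of-words naive Bayes sense classifier (word-presence features)."""

    def fit(self, examples):
        # examples: iterable of (set_of_word_stems, sense_label)
        self.sense_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for words, sense in examples:
            self.sense_counts[sense] += 1
            for w in words:
                self.word_counts[sense][w] += 1
                self.vocab.add(w)
        self.total = sum(self.sense_counts.values())
        return self

    def predict(self, words):
        best_sense, best_score = None, float("-inf")
        for sense, n in self.sense_counts.items():
            score = math.log(n / self.total)                 # log prior
            denom = sum(self.word_counts[sense].values()) + len(self.vocab)
            for w in words:
                if w in self.vocab:                          # add-one smoothed likelihood
                    score += math.log((self.word_counts[sense][w] + 1) / denom)
            if score > best_score:
                best_sense, best_score = sense, score
        return best_sense
```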
W96-0208
comparative experiments on disambiguating word senses: an illustration of the role of bias in machine learning. this paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context. the algorithms tested include statistical, neuralnetwork, decisiontree, rulebased and casebased classification techniques. the specific problem tested involves disambiguating six senses of the word line using the words in the current and preceding sentence as context. the statistical and neuralnetwork methods perform the best on this particular problem, and we discuss a potential reason for this observed difference. we also discuss the role of bias in machine learning and its importance in explaining performance differences observed on specific problems. we argue that naive bayes classification and perceptron classifiers are particularly fit for lexical sample word sense disambiguation problems because they combine weighted evidence from all features rather than select a subset of features for early discrimination. bag of words feature sets made up of unigrams have had a long history of success in text classification and word sense disambiguation, and we believe that, despite creating quite a bit of noise, they can provide useful information for discrimination.
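the perceptron setup described above, one perceptron per sense with the test case assigned to the sense whose output most exceeds its threshold, is equally easy to sketch. the learning rate, epoch count and feature encoding below are illustrative choices of this sketch, not settings from the paper.

```python
from collections import defaultdict

class OneVsRestPerceptrons:
    """One perceptron per sense over word-presence features; prediction is the
    sense whose perceptron output (score minus threshold, folded into a bias)
    is highest."""

    def __init__(self, senses, lr=1.0, epochs=10):
        self.senses = list(senses)
        self.lr, self.epochs = lr, epochs
        self.w = {s: defaultdict(float) for s in self.senses}
        self.bias = {s: 0.0 for s in self.senses}

    def _score(self, sense, words):
        return sum(self.w[sense][x] for x in words) + self.bias[sense]

    def fit(self, examples):
        # examples: list of (set_of_word_stems, gold_sense)
        for _ in range(self.epochs):
            for words, gold in examples:
                for s in self.senses:
                    target = 1 if s == gold else -1
                    pred = 1 if self._score(s, words) > 0 else -1
                    if pred != target:                      # standard perceptron update
                        for x in words:
                            self.w[s][x] += self.lr * target
                        self.bias[s] += self.lr * target
        return self

    def predict(self, words):
        return max(self.senses, key=lambda s: self._score(s, words))
```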
a maximum entropy model for partofspeech tagging this paper presents a statistical model which trains from a corpus annotated with partof speech tags and assigns them to previously unseen text with stateoftheart accuracy the can be classified as a entropy model and simultaneously uses many contextual quotfeaturesquot to predict the pos tag furthermore this paper demonstrates the use of specialized features to model difficult tagging decisions discusses the corpus consistency problems discovered during the implementation of these features and proposes a training strategy that mitigates these problems many natural language tasks require the accurate assignment of partofspeech tags to previously unseen textdue to the availability of large corpora which have been manually annotated with pos information many taggers use annotated text to quotlearnquot either probability distributions or rules and use them to automatically assign pos tags to unseen textthe experiments in this paper were conducted on the wall street journal corpus from the penn treebank project although the model can train from any large corpus annotated with pos tagssince most realistic natural language applications must process words that were never seen before in training data all experiments in this paper are conducted on test data that include unknown wordsseveral recent papers have reported 965 tagging accuracy on the wall st journal corpusthe experiments in this paper test the hypothesis that better use of context will improve the accuracya maximum entropy model is wellsuited for such experiments since it cornbines diverse forms of contextual information in a principled manner and does not impose any distributional assumptions on the training dataprevious uses of this model include language modeling machine translation prepositional phrase attachment and word morphologythis paper briefly describes the maximum entropy and maximum likelihood properties of the model features used for pos tagging and the experiments on the penn treebank wall st journal corpusit then discusses the consistency problems discovered during an attempt to use specialized features on the word contextlastly the results in this paper are compared to those from previous work on pos taggingthe probability model the probability model is defined over it x t where fl is the set of possible word and tag contexts or quothistoriesquot and t is the set of allowable tagsthe model probability of a history h together with a tag t is defined as where ir is a normalization constant fp cu a are the positive model parameters and fk are known as quotfeaturesquot where fj e omnote that each parameter aj corresponds to a feature fjgiven a sequence of words w1 w and tags t1 tn as training data define hi as the history available when predicting tithe parameters p ai are then chosen to maximize the likelihood of the training data using p this model also can be interpreted under the maximum entropy formalism in which the goal is to maximize the entropy of a distribution subject to certain constraintshere the entropy of the distribution p is defined as where the model feature expectation is and the observed feature expectation is ef epoiti and where denotes the observed probability of in the training datathus the constraints force the model to match its feature expectations with those observed in the training datain practice i1 is very large and the model expectation e fi cannot be computed directly so the following approximation is used where 3 is the observed probability of 
the history hi in the training setit can be shown that if p has the form and satisfies the k constraints it uniquely maximizes the entropy h over distributions that satisfy and uniquely maximizes the likelihood l over distributions of the form the model parameters for the distribution p are obtained via generalized iterative scaingthe joint probability of a history h and tag t is determined by those parameters whose corresponding features are active ie those aj such that f 1a feature given may activate on any word or tag in the history h and must encode any information that might help predict t such as the spelling of the current word or the identity of the previous two tagsthe specific word and tag context available to a feature is given in the following definition of a history hi if the above feature exists in the feature set of the model its corresponding model parameter will contribute towards the joint probability p when wi ends with quotingquot and when ti vbg1thus a model parameter aj effectively serves as a quotweightquot for a certain contextual predictor in this case the suffix quotingquot towards the probability of observing a certain tag in this case a vbgthe model generates the space of features by scanning each pair in the training data with the feature quottemplatesquot given in table 1given hi as the current history a feature always asks some yesno question about hi and furthermore constrains ti to be a certain tagthe instantiations for the variables x y and t in table 1 are obtained automatically from the training datathe generation of features for tagging unknown words relies on the hypothesized distinction that quotrarequot words in the training set are similar to unknown words in test data with respect to how their spellings help predict their tagsthe rare word features in table 1 which look at the word spellings will apply to both rare words and unknown words in test datafor example table 2 contains an excerpt from training data while table 3 contains the features generated while scanning in which the current word is about and table 4 contains features generated while scanning in which the current word wellheeled occurs 3 times in training data and is therefore classified as quotrarequotthe behavior of a feature that occurs very sparsely in the training set is often difficult to predict since its statistics may not be reliabletherefore the model uses the heuristic that any feature condition features wi is not rare wi x wi is rare xis prefix of wi ixi 4 ti t x is suffix of wi ixi 4 wi contains number ti t wi contains uppercase character t t wi contains hyphen ti t which occurs less than 10 times in the data is unreliable and ignores features whose counts are less than 103 while there are many smoothing algorithms which use techniques more rigorous than a simple count cutoff they have not yet been investigated in conjunction with this taggerthe test corpus is tagged one sentence at a timethe testing procedure requires a search to enumerate the candidate tag sequences for the sentence and the tag sequence with the highest probability is chosen as the answerthe search algorithm essentially a quotbeam searchquot uses the conditional tag probability and maintains as it sees a new word the n highest probability tag sequence candidates up to that point in the sentencegiven a sentence wl w a tag sequence candidate ti tn has conditional probability in addition the search procedure optionally consults a tag dictionary which for each known word lists the tags that it has appeared 
with in the training setif the tag dictionary is in effect the search procedure for known words generates only tags given by the dictionary entry while for unknown words generates all tags in the tag setwithout the tag dictionary the search procedure generates all tags in the tag set for every wordlet w wi wn be a test sentence and let sij be the jth highest probability tag sequence up to and including word withe search is described belowin order to conduct tagging experiments the wall st journal data has been split into three contiguous sections as shown in table 5the feature set and search algorithm were tested and debugged only on the training and development sets and the official test result on the unseen test set is presented in the conclusion of the paperthe performances of the quotbaselinequot model on the development set both with and without the tag dictionary are shown in table 6all experiments use a beam size of n 5 further increasing the beam size does not significantly increase performance on the development set but adversely affects the speed of the taggereven though use of the tag dictionary gave an apparently insignificant improvement in accuracy it is used in further experiments since it significantly reduces the number of hypotheses and thus speeds up the taggerthe running time of the parameter estimation algorithm is 0 where n is the training set size t is the number of allowable tags and a is the average number of features that are active for a given event the running time of the search procedure on a sentence of length n is 0 where t a are defined above and b is the beam sizein practice the model for the experiment shown in table 6 requires approximately 24 hours to train and 1 hour to test on an ibm rs6000 model 380 with 256mb of ramthe maximum entropy model allows arbitrary binaryvalued features on the context so it can use additional specialized ie wordspecific features to correctly tag the quotresiduequot that the baseline features cannot modelsince such features typically occur infrequently the training set consistency must be good enough to yield reliable statisticsotherwise the specialized features will model noise and perform poorly on test datasuch features can be designed for those words which are especially problematic for the modelthe top errors of the model are shown in table 7 clearly the model has trouble with the words that and about among othersas hypothesized in the introduction better features on the context surrounding that and about should correct the tagging mistakes for these two words assuming that the tagging errors are due to an impoverished feature set and not inconsistent dataspecialized features for a given word are constructed by conjoining certain features in the baseline model with a question about the word itselfthe features which ask about previous tags and surrounding words now additionally ask about the identity of the current word eg a specialized feature for the word about in table 3 could be table 8 shows the results of an experiment in which specialized features are constructed for quotdifficultquot words and are added to the baseline feature sethere quotdifficultquot words are those that are mistagged a certain way at least 50 times when the training set is tagged with the baseline modelusing the set of 29 difficult words the model performs at 9649 accuracy on the development set an insignificant improvement from the baseline accuracy of 9643table 9 shows the change in error rates on the development set for the frequently 
occurring quotdifficultquot wordsfor most words the specialized model yields little or no improvement and for some ie more and about the specialized model performs worsethe lack of improvement implies that either the feature set is still impoverished or that the training data is inconsistenta simple consistency test is to graph the pos tag assignments for a given word as a function of the article in which it occursconsistently tagged words should have roughly the same tag distribution as the article numbers varyfigure 1 represents each pos tag with a unique integer and graphs the pos annotation of about in the training set as a function of the article as seen in figure 1 about is usually annotated with tag1 which denotes in or tag9 which denotes rb and the observed probability of either choice depends heavily on the current articleupon further examination the tagging distribution for about changes precisely when the annotator changesfigure 2 which again uses integers to denote pos tags shows the tag distribution of about as a function of annotator and implies that the tagging errors for this word are due mostly to inconsistent datathe words ago chief down executive off out up and yen also exhibit similar biasthus specialized features may be less effective for those words affected by interannotator biasa simple solution to eliminate interannotator inconsistency is to train and test the model on data that has been created by the same annotatorthe results of such an experiment are shown in table 10the total accuracy is higher implying that the singlyannotated training and test sets are more consistent and the improvement due to the specialized features is higher than before but still modest implying that either the features need further improvement or that intraannotator inconsistencies exist in the corpusmost of the recent corpusbased pos taggers in the literature are either statistically based and use markov model or statistical decision tree techniques or are primarily rule based such as brill transformation based learnerthe maximum entropy tagger presented in this paper combines the advantages of all these methodsit uses a rich feature representation like tbl and sdt and generates a tag probability distribution for each word like decision tree and markov model techniques provide the results from a battery of quottritagquot markov model experiments in which the probability p of observing a word sequence w wi w2 wn together with a tag sequence t is given by furthermore p for unknown words is computed by the following heuristic which uses a set of 35 predetermined endings this approximation works as well as the maxent model giving 85 unknown word accuracy on the wall st journal but cannot be generalized to handle more diverse information sourcesmultiplying together all the probabilities becomes less convincing of an approximation as the information sources become less independentin contrast the maxent model combines diverse and nonlocal information sources without making any independence assumptionsa pos tagger is one component in the sdt based statistical parsing system described in the total word accuracy on wall st journal data 965 is similar to that presented in this paperhowever the aforementioned sdt techniques require word classes to help prevent data fragmentation and a sophisticated smoothing algorithm to mitigate the effects of any fragmentation that occursunlike sdt the maxent training procedure does not recursively split the data and hence does not suffer from unreliable counts 
due to data fragmentationas a result no word classes are required and a trivial count cutoff suffices as a smoothing procedure in order to achieve roughly the same level of accuracytbl is a nonstatistical approach to pos tagging which also uses a rich feature representation and performs at a total word accuracy of 965 and an unknown word accuracy of 85the tbl representation of the surrounding word context is almost the same7 and the tbl representation of unknown words is a superset8 of the unknown word representation in this paperhowever since tbl is nonstatistical it does not provide probability distributions and 7 looks at words 3 away from the current whereas the feature set in this paper uses a window of 28 uses prefixsuffix additions and deletions which are not used in this paper unlike maxent cannot be used as a probabilistic component in a larger modelmaxent can provide a probability for each tagging decision which can be used in the probability calculation of any structure that is predicted over the pos tags such as noun phrases or entire parse trees as in thus maxent has at least one advantage over each of the reviewed pos tagging techniquesit is better able to use diverse information than markov models requires less supporting techniques than sdt and unlike tbl can be used in a probabilistic frameworkhowever the pos tagging accuracy on the penn wall st journal corpus is roughly the same for all these modelling techniquesthe convergence of the accuracy rate implies that either all these techniques are missing the right predictors in their representation to get the quotresiduequot or more likely that any corpus based algorithm on the penn treebank wall st journal corpus will not perform much higher than 965 due to consistency problemsthe maximum entropy model is an extremely flexible technique for linguistic modelling since it can use a virtually unrestricted and rich feature set in the framework of a probability modelthe implementation in this paper is a stateoftheart pos tagger as evidenced by the 966 accuracy on the unseen test set shown in table 11the model with specialized features does not perform much better than the baseline model and further discovery or refinement of wordbased features is difficult given the inconsistencies in the training dataa model trained and tested on data from a single annotator performs at 5 higher accuracy than the baseline model and should produce more consistent input for applications that require tagged text
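The feature construction, conditional tag probabilities and beam search discussed in this entry can be illustrated compactly. The following is a minimal sketch under stated assumptions, not the paper's tagger: the feature spellings, the rare-word cutoff, the numerical floor in the beam search and the use of summed log-space weights (equivalent to the products of alpha parameters, with weight = ln alpha) are choices made here for readability, and the weight table itself would have to be estimated separately (e.g. by generalized iterative scaling).

```python
import math

RARE_THRESHOLD = 5  # assumed cutoff below which a word's spelling features are used

def history_features(h, word_counts):
    """Contextual predicates for a history h = (current word, neighbouring words, previous two tags)."""
    w = h["w"]
    feats = [f"t_prev={h['t_prev']}",
             f"t_prev2+t_prev={h['t_prev2']}+{h['t_prev']}",
             f"w_prev={h['w_prev']}",
             f"w_next={h['w_next']}"]
    if word_counts.get(w, 0) >= RARE_THRESHOLD:
        feats.append(f"w={w}")
    else:
        # spelling features for rare words, which stand in for unknown test words
        feats += [f"suffix={w[-k:]}" for k in range(1, min(4, len(w)) + 1)]
        feats += [f"prefix={w[:k]}" for k in range(1, min(4, len(w)) + 1)]
        if any(c.isdigit() for c in w): feats.append("has_number")
        if any(c.isupper() for c in w): feats.append("has_uppercase")
        if "-" in w: feats.append("has_hyphen")
    return feats

def p_tag_given_history(h, tagset, weights, word_counts):
    """Conditional tag distribution: exponentiate the summed weights of the active
    (predicate, tag) features and normalize over the candidate tag set."""
    scores = {t: math.exp(sum(weights.get((f, t), 0.0)
                              for f in history_features(h, word_counts)))
              for t in tagset}
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}
```

Tagging a test sentence then enumerates candidate tag sequences left to right, keeping only the n best partial sequences at each word, optionally restricted by a tag dictionary:

```python
def beam_tag(words, tagset, tag_dict, weights, word_counts, beam_size=5):
    """Return the highest-probability tag sequence found by an n-best beam search."""
    beam = [([], 0.0)]  # (partial tag sequence, accumulated log probability)
    for i, w in enumerate(words):
        candidates = tag_dict.get(w, tagset)  # known words: only their dictionary tags
        extended = []
        for tags, logp in beam:
            h = {"w": w,
                 "w_prev": words[i - 1] if i > 0 else "*",
                 "w_next": words[i + 1] if i + 1 < len(words) else "STOP",
                 "t_prev": tags[-1] if tags else "*",
                 "t_prev2": tags[-2] if len(tags) > 1 else "*"}
            dist = p_tag_given_history(h, candidates, weights, word_counts)
            for t in candidates:
                extended.append((tags + [t], logp + math.log(dist[t] + 1e-12)))
        beam = sorted(extended, key=lambda pair: -pair[1])[:beam_size]
    return beam[0][0]
```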
W96-0213
A Maximum Entropy Model for Part-of-Speech Tagging. This paper presents a statistical model which trains from a corpus annotated with part-of-speech tags and assigns them to previously unseen text with state-of-the-art accuracy. The model can be classified as a maximum entropy model and simultaneously uses many contextual features to predict the POS tag. Furthermore, this paper demonstrates the use of specialized features to model difficult tagging decisions, discusses the corpus consistency problems discovered during the implementation of these features, and proposes a training strategy that mitigates these problems. We assume that the tag of a word is independent of the tags of all preceding words given the tags of the previous two words. We release a publicly available maximum entropy tagger.
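Since this entry highlights the corpus consistency problems, a small tabulation is enough to reproduce the kind of diagnostic described earlier in the entry (how a word such as "about" is tagged across articles or annotators). The record layout below is an assumption made for illustration, not the Treebank's actual file format.

```python
from collections import Counter, defaultdict

def tag_distribution_by(records, target_word, key="annotator"):
    """Tag counts for target_word, grouped by article or by annotator."""
    dist = defaultdict(Counter)
    for rec in records:  # assumed record: {"word", "tag", "article", "annotator"}
        if rec["word"].lower() == target_word:
            dist[rec[key]][rec["tag"]] += 1
    return dist

# A consistently annotated word should show a similar tag mix for every annotator;
# a sharp IN/RB flip when the annotator changes points to inter-annotator bias.
toy = [{"word": "about", "tag": "IN", "article": 3,  "annotator": "A"},
       {"word": "about", "tag": "IN", "article": 7,  "annotator": "A"},
       {"word": "about", "tag": "RB", "article": 51, "annotator": "B"}]
print({k: dict(v) for k, v in tag_distribution_by(toy, "about").items()})
# {'A': {'IN': 2}, 'B': {'RB': 1}}
```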
efficient algorithms for parsing the dop model excellent results have been reported for data oriented parsing of natural language texts unfortunately existing algorithms are both computationally intensive and difficult to implement previous algorithms are expensive due to two factors the exponential number of rules that must be generated and the use of a monte carlo parsing algorithm in this paper we solve the first problem by a novel reduction of the dop model toa small equivalent probabilistic contextfree grammar we solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents rather than the probability of a correct parse tree using the optimizations experiments yield a 97 crossing brackets rate and 88 zero crossing brackets rate this differs significantly from the results reported by bod and is comparable to results from a duplication of pereira and schabes experiment on the same data we show that bod results are at least partially due to an extremely fortuitous choice of test data and partially due to using cleaner data than other researchers the dataoriented parsing model has a short interesting and controversial historyit was introduced by reinko scha and was then studied by rens bodunfortunately bod was not able to find an efficient exact algorithm for parsing using the model however he did discover and implement monte carlo approximationshe tested these algorithms on a cleaned up version of the atis corpus and achieved some very exciting results reportedly getting 96 of his test set exactly correct a huge improvement over previous resultsfor instance bod compares these results to schabes in which for short sentences 30 of the sentences have no crossing brackets thus bod achieves an extraordinary 8fold error rate reductionnot surprisingly other researchers attempted to duplicate these results but due to a lack of details of the parsing algorithm in his publications these other researchers were not able to confirm the results even bod thesis does not contain enough information to replicate his resultsparsing using the dop model is especially difficultthe model can be summarized as a special kind of stochastic tree substitution grammar given a bracketed labelled training corpus let every subtree of that corpus be an elementary tree with a probability proportional to the number of occurrences of that subtree in the training corpusunfortunately the number of trees is in general exponential in the size of the training corpus trees producing an unwieldy grammarin this paper we introduce a reduction of the dop model to an exactly equivalent probabilistic context free grammar that is linear in the number of nodes in the training datanext we present an algorithm for parsing which returns the parse that is expected to have the largest number of correct constituentswe use the reduction and algorithm to parse held out test data comparing these results to a replication of pereira and schabes on the same datathese results are disappointing the pcfg implementation of the dop model performs about the same as the pereira and schabes methodwe present an analysis of the runtime of our algorithm and bodfinally we analyze bod data showing that some of the difference between our performance and his is due to a fortuitous choice of test datathis paper contains the first published replication of the full dop model ie using a parser which sums over derivationsit also contains algorithms implementing the model with significantly fewer resources 
than previously neededfurthermore for the first time the dop model is compared on the same data to a competing modelthe dop model itself is extremely simple and can be described as follows for every sentence in a parsed training corpus extract every subtreein general the number of subtrees will be very large typically exponential in sentence lengthnow use these trees to form a stochastic tree substitution grammar there are two ways to define a stsg either as a stochastic tree adjoining grammar restricted to substitution operations or as an extended pcfg in which entire trees may occur on the right hand side instead of just strings of terminals and nonterminalsgiven the tree of figure 1 we can use the dop model to convert it into the stsg of figure 2the numbers in parentheses represent the probabilitiesthese trees can be combined in various ways to parse sentencesin theory the dop model has several advantages over other modelsunlike a pcfg the use of trees allows capturing large contexts making the model more sensitivesince every subtree is included even trivial ones corresponding to rules in a pcfg novel sentences with unseen contexts unfortunately the number of subtrees is huge therefore bod randomly samples 5 of the subtrees throwing away the restthis significantly speeds up parsingthere are two existing ways to parse using the dop modelfirst one can find the most probable derivationthat is there can be many ways a given sentence could be derived from the stsgusing the most probable derivation criterion one simply finds the most probable way that a sentence could be producedfigure 3 shows a simple example stsgfor the string xx what is the most probable derivationthe parse tree has probability i of being generated by the trivial derivation containing a single treethis tree corresponds to the most probable derivation of xxone could try to find the most probable parse treefor a given sentence and a given parse tree there are many different derivations that could lead to that parse treethe probability of the parse tree is the sum of the probabilities of the derivationsgiven our example there are two different ways to generate the parse tree each with probability t so that the parse tree has probability t this parse tree is most probablebod shows how to approximate this most probable parse using a monte carlo algorithmthe algorithm randomly samples possible derivations then finds the tree with the most sampled derivationsbod shows that the most probable parse yields better performance than the most probable derivation on the exact match criterionkhalil simaan implemented a version of the dop model which parses efficiently by limiting the number of trees used and by using an efficient most probable derivation modelhis experiments differed from ours and bod in many ways including his use of a different version of the atis corpus the use of word strings rather than part of speech strings and the fact that he did not parse sentences containing unknown words effectively throwing out the most difficult sentencesfurthermore simaan limited the number of substitution sites for his trees effectively using a subset of the dop modelunfortunately bod reduction to a stsg is extremely expensive even when throwing away 95 of the grammarfortunately it is possible to find an equivalent pcfg that contains exactly eight pcfg rules for each node in the training data thus it is 0because this reduction is so much smaller we do not discard any of the grammar when using itthe pcfg is equivalent in two senses first it 
generates the same strings with the same probabilities second using an isomorphism defined below it generates the same trees with the same probabilities although one must sum over several pcfg trees for each stsg treeto show this reduction and equivalence we must first define some terminologywe assign every node in every tree a unique number which we will call its addresslet ak denote the node at address k where a is the nonterminal labeling that nodewe will need to create one new nonterminal for each node in the training datawe will call this nonterminal akwe will call nonterminals of this form quotinteriorquot nonterminals and the original nonterminals in the parse trees quotexteriorquotlet aj represent the number of subtrees headed by the node aajlet a represent the number of subtrees headed by nodes with nonterminal a that is a e a how many subtrees does it haveconsider first the possibilities on the left branchthere are bk nontrivial subtrees headed by bak and there is also the trivial case where the left node is simply bthus there are bk 1 different possibilities on the left branchsimilarly for the right branch there are ci 1 possibilitieswe can create a subtree by choosing any possible left subtree and any possible right subtreethus there are ai possible subtrees headed by ajin our example tree of figure 1 both noun phrases have exactly one subtree npt np2 1 the verb phrase has 2 subtrees vp3 2 and the sentence has 6 si 6these numbers correspond to the number of subtrees in figure 2we will call a pcfg subderivation isomorphic to a stsg tree if the subderivation begins with an external nonterminal uses internal nonterminals for intermediate steps and ends with external nonterminalsfor instance consider the tree taken from figure 2the following pcfg subderivation is isomorphic s npa1 vp2 pn pn vp2 pn pn v npwe say that a pcfg derivation is isomorphic to a stsg derivation if there is a corresponding pcfg subderivation for every step in the stsg derivationwe will give a simple small pcfg with the following surprising property for every subtree in the training corpus headed by a the grammar will generate an isomorphic subderivation with probability 1ain other words rather than using the large explicit stsg we can use this small pcfg that generates isomorphic derivations with identical probabilitiesthe construction is as followsfor a node such we will generate the following eight pcfg rules where the number in parentheses following a rule is its probabilityai 4 because a 4 because bkc a bkc ai 4 because a 4 because bkc a bkc we will show that subderivations headed by a with external nonterminals at the roots and leaves internal nonterminals elsewhere have probability 1asubderivations headed by ai with external nonterminals only at the leaves internal nonterminals elsewhere have probability vaithe proof is by induction on the depth of the treesfor trees of depth 1 there are two cases trivially these trees have the required probabilitiesnow assume that the theorem is true for trees of depth n or lesswe show that it holds for trees of depth n 1there are eight cases one for each of the eight ruleswe show two of themlet bak represent a tree of at most depth n with external leaves headed by bk and with internal intermediate nonterminalsthen for trees such as the probability of the tree is1 simibk ci aj a larly for another case trees headed by a the probability of the tree isokay0 1the other six cases follow trivially with similar reasoningwe call a pcfg derivation isomorphic to a stsg 
derivation if for every substitution in the stsg there is a corresponding subderivation in the pcfgfigure 4 contains an example of isomorphic derivations using two subtrees in the stsg and four productions in the pcfgwe call a pcfg tree isomorphic to a stsg tree if they are identical when internal nonterminals are changed to external nonterminalsour main theorem is that this construction produces pcfg trees isomorphic to the stsg trees with equal probabilityif every subtree in the training corpus occurred exactly once this would be trivial to provefor every stsg subderivation there would be an isomorphic pcfg subderivation with equal probabilitythus for every stsg derivation there would be an isomorphic pcfg derivation with equal probabilitythus every stsg tree would be produced by the pcfg with equal probabilityhowever it is extremely likely that some subtrees especially trivial ones like if the stsg formalism were modified slightly so that trees could occur multiple times then our relationship could be made one to oneconsider a modified form of the dop model in which when subtrees occurred multiple times in the training corpus their counts were not merged both identical trees are added to the grammareach of these trees will have a lower probability than if their counts were mergedthis would change the probabilities of the derivations however the probabilities of parse trees would not change since there would be correspondingly more derivations for each treenow the desired one to one relationship holds for every derivation in the new stsg there is an isomorphic derivation in the pcfg with equal probabilitythus summing over all derivations of a tree in the stsg yields the same probability as summing over all the isomorphic derivations in the pcfgthus every stsg tree would be produced by the pcfg with equal probabilityit follows trivially from this that no extra trees are produced by the pcfgsince the total probability of the trees produced by the stsg is 1 and the pcfg produces these trees with the same probability no probability is quotleft overquot for any other treesthere are several different evaluation metrics one could use for finding the best parsein the section covering previous research we considered the most probable derivation and the most probable parse treethere is one more metric we could considerif our performance evaluation were based on the number of constituents correct using measures similar to the crossing brackets measure we would want the parse tree that was most likely to have the largest number of correct constituentswith this criterion and the example grammar of figure 3 the best parse tree would be the probability that the s constituent is correct is 10 while the probability that the a constituent is correct is 1 and the probability that the b constituent is correct is thus this tree has on average 2 constituents correctall other trees will have fewer constituents correct on averagewe call the best parse tree under this criterion the maximum constituents parsenotice that this parse tree cannot even be produced by the grammar each of its constituents is good but it is not necessarily good when considered as a full treebod shows that the most probable derivation does not perform as well as the most probable parse for the dop model getting 65 exact match for the most probable derivation versus 96 correct for the most probable parsethis is not surprising since each parse tree can be derived by many different derivations the most probable parse criterion takes all 
possible derivations into accountsimilarly the maximum constituents parse is also derived from the sum of many different derivationsfurthermore although the maximum constituents parse should not do as well on the exact match criterion it should perform even better on the percent constituents correct criterionwe have previously performed a detailed comparison between the most likely parse and the maximum constituents parse for probabilistic context free grammars we showed that the two have very similar performance on a broad range of measures with at most a 10 difference in error rate we therefore think that it is reasonable to use a maximum constituents parser to parse the dop modelthe parsing algorithm is a variation on the insideoutside algorithm developed by baker and discussed in detail by lan and young however while the insideoutside algorithm is a grammar reestimation algorithm the algorithm presented here is just a parsing algorithmit is closely related to a similar algorithm used for hidden markov models for finding the most likely state at each timehowever unlike in the hmm case where the algorithm produces a simple state sequence in the pcfg case a parse tree is produced resulting in additional constraintsa formal derivation of a very similar algorithm is given elsewhere only the intuition is given herethe algorithm can be summarized as followsfirst for each potential constituent where a constituent is a nonterminal a start position and an end position find the probability that that constituent is in the parseafter that put the most likely constituents together to form a passe tree using dynamic programmingthe probability that a potential constituent occurs in the correct parse tree p will be called gin words it is the probability that given the sentence wiw a symbol x generates wswt we can compute this probability using elements of the insideoutside algorithmfirst compute the inside probabilities e psecond compute the outside probabilities f p once the matrix g is computed a dynamic programming algorithm can be used to determine the best parse in the sense of maximizing the number of constituents expected correctfigure 5 shows pseudocode for a simplified form of this algorithmfor a grammar with g nonterminals and training data of size t the run time of the algorithm is 0 since there are two layers of outer loops each with run time at most n and inner loops over addresses nonterminals and n however this is dominated by the computation of the inside and outside probabilities which takes time 0 for a grammar with r rulessince there are eight rules for every node in the training data this is 0by modifying the algorithm slightly to record the actual split used at each node we can recover the best parsethe entry maxc li n contains the expected number of correct constituents given the modelwe are grateful to bod for supplying the data that he used for his experiments the original atis data from the penn tree bank version 05 is very noisy it is difficult to even automatically read this data due to inconsistencies between filesresearchers are thus left with the difficult decision as to how to clean the datafor this paper we conducted two sets of experiments one using a minimally cleaned set of data1 making our results comparable to previous results the other using the atis data prepared by bod which contained much more significant revisionsten data sets were constructed by randomly splitting minimally edited atis sentences into a 700 sentence training set and 88 sentence test set then 
discarding sentences of length 30for each of the ten sets both the dop algorithm outlined here and the grammar induction experiment of pereira and schabes were runcrossing brackets zero crossing brackets and the paired differences are presented in table 1all sentences output by the parser were made binary branching since otherwise the crossing brackets measures are meaningless ia different file between the original atis data and the cleaned up version in a form usable by the quotedquot program is available by anonymous ftp from ftpftpdasharvardedupubgoodmanatised ti_tbpared and ti_tbposednote that the number of changes made was smallthe different files sum to 457 bytes versus 269339 bytes for the original files or less than 02a few sentences were not parsable these were assigned right branching period high structure a good heuristic we also ran experiments using bod data 75 sentence test sets and no limit on sentence lengthhowever while bod provided us with his data he did not provide us with the split into test and training that he used as before we used ten random splitsthe results are disappointing as shown in table 2they are noticeably worse than those of bod and again very comparable to those of pereira and schabeswhereas bod reported 96 exact match we got only 86 using the less restrictive zero crossing brackets criterionit is not clear what exactly accounts for these differences2 it is also noteworthy that the results are much better on bod data than on the minimally edited data crossing brackets rates of 96 and 97 on bod data versus 90 on minimally edited datathus it appears that part of bod extraordinary performance can be explained by the fact that his data is much cleaner than the data used by other researchersdop does do slightly better on most measureswe performed a statistical analysis using a ttest on the paired differences between dop and pereira and schabes performance on each runon 2ideally we would exactly reproduce these experiments using bod algorithmunfortunately it was not possible to get a full specification of the algorithm the minimally edited atis data the differences were statistically insignificant while on bod data the differences were statistically significant beyond the 98th percentileour technique for finding statistical significance is more strenuous than most we assume that since all test sentences were parsed with the same training data all results of a single run are correlatedthus we compare paired differences of entire runs rather than of sentences or constituentsthis makes it harder to achieve statistical significancenotice also the minimum and maximum columns of the quotdoppsquot lines constructed by finding for each of the paired runs the difference between the dop and the pereira and schabes algorithmsnotice that the minimum is usually negative and the maximum is usually positive meaning that on some tests dop did worse than pereira and schabes and on some it did betterit is important to run multiple tests especially with small test sets like these in order to avoid misleading resultsin this section we examine the empirical runtime of our algorithm and analyze bodwe also note that bod algorithm will probably be particularly inefficient on longer sentencesit takes about 6 seconds per sentence to run our algorithm on an hp 9000715 versus 35 hours to run bod algorithm on a sparc 2 factoring in that the hp is roughly four times faster than the sparc the new algorithm is about 500 times fasterof course some of this difference may be due to differences 
in implementation so this estimate is fairly roughfurthermore we believe bod analysis of his parsing algorithm is flawedletting g represent grammar size and represent maximum estimation error bod correctly analyzes his runtime as ohowever bod then neglects analysis of this 2 term assuming that it is constantthus he concludes that his algorithm runs in polynomial timehowever for his algorithm to have some reasonable chance of finding the most probable parse the number of times he must sample his data is at least inversely proportional to the conditional probability of that parsefor instance if the maximum probability parse had probability 150 then he would need to sample at least 50 times to be reasonably sure of finding that parsenow we note that the conditional probability of the most probable parse tree will in general decline exponentially with sentence lengthwe assume that the number of ambiguities in a sentence will increase linearly with sentence length if a five word sentence has on average one ambiguity then a ten word sentence will have two etca linear increase in ambiguity will lead to an exponential decrease in probability of the most probable parsesince the probability of the most probable parse decreases exponentially in sentence length the number of random samples needed to find this most probable parse increases exponentially in sentence lengththus when using the monte carlo algorithm one is left with the uncomfortable choice of exponentially decreasing the probability of finding the most probable parse or exponentially increasing the runtimewe admit that this is a somewhat informal argumentstill the monte carlo algorithm has never been tested on sentences longer than those in the atis corpus there is good reason to believe the algorithm will not work as well on longer sentencesnote that our algorithm has true runtime 0 as shown previouslyin the dop model a sentence cannot be given an exactly correct parse unless all productions in the correct parse occur in the training setthus we can get an upper bound on performance by examining the test corpus and finding which parse trees could not be generated using only productions in the training corpusunfortunately while bod provided us with his data he did not specify which sentences were test and which were trainingwe can however find an upper bound on average case performance as well as an upper bound on the probability that any particular level of performance could be achievedbod randomly split his corpus into test and trainingaccording to his thesis only one of his 75 test sentences had a correct parse which could not be generated from the training datathis turns out to be very surprisingan analysis of bod data shows that at least some of the difference in performance between his results and ours must be due to an extraordinarily fortuitous choice of test datait would be very interesting to see how our algorithm performed on bod split into test and training but he has not provided us with this splitbod did examine versions of dop that smoothed allowing productions which did not occur in the training set however his reference to coverage is with respect to a version which does no smoothingin order to perform our analysis we must determine certain details of bod parser which affect the probability of having most sentences correctly parsablewhen using a chart parser as bod did three problematic cases must be handled productions unary productions and nary productionsthe first two kinds of productions can be handled with a 
probabilistic chart parser but large and difficult matrix manipulations are required these manipulations would be especially difficult given the size of bod grammarexamining bod data we find he removed productionswe also assume that bod made the same choice we did and eliminated unary productions given the difficulty of correctly parsing thembod himself does not know which technique he used for nary productions since the chart parser he used was written by a third party the nary productions can be parsed in a straightforward manner by converting them to binary branching form however there are at least three different ways to convert them as illustrated in table 3in method quotcorrectquot the nary branching productions are converted in such a way that no overgeneration is introduceda set of special nonterminals is added one for each partial right hand sidein method quotcontinuedquot a single new nonterminal is introduced for each original nonterminalbecause these nonterminals occur in multiple contexts some overgeneration is introducedhowever this overgeneration is constrained so that elements that tend to occur only at the beginning middle or end of the right hand side of a production cannot occur somewhere elseif the quotsimplequot method is used then no new nonterminals are introduced using this method it is not possible to recover the nary branching structure from the resulting parse tree and significant overgeneration occurstable 4 shows the undergeneration probabilities for each of these possible techniques for handling unary productions and nary productions3 the first number in each column is the probability that a sentence in the training data will have a production that occurs nowhere elsethe second number is the probability that a test set of 75 sentences drawn from this database will have one ungeneratable sentence 759744 the table is arranged from least generous to most generous in the upper left hand corner is a technique bod might reasonably have used in that case the probability of getting the test set he described is less than one in a millionin the perl script for analyzing bod data is available by anonymous ftp from ftpftpdasharvardeduipubgoodmananalyzeperl 4actually this is a slight overestimate for a few reasons including the fact that the 75 sentences are drawn without replacementalso consider a sentence with a production that occurs only in one other sentence in the corpus there is some probability that both sentences will end up in the test data causing both to be ungeneratable lower right corner we give bod the absolute maximum benefit of the doubt we assume he used a parser capable of parsing unary branching productions that he used a very overgenerating grammar and that he used a loose definition of quotexact matchquot even in this case there is only about a 15 chance of getting the test set bod describeswe have given efficient techniques for parsing the do modelthese results are significant since the dop model has perhaps the best reported parsing accuracy previously the full dop model had not been replicated due to the difficulty and computational complexity of the existing algorithmswe have also shown that previous results were partially due to an unlikely choice of test data and partially due to the heavy cleaning of the data which reduced the difficulty of the taskof course this research raises as many questions as it answerswere previous results due only to the choice of test data or are the differences in implementation partly responsiblein that case there 
is significant future work required to understand which differences account for bod's exceptional performance. this will be complicated by the fact that sufficient details of bod's implementation are not available. this research also shows the importance of testing on more than one small test set, as well as the importance of not making cross-corpus comparisons; if a new corpus is required, then previous algorithms should be duplicated for comparison
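The reduction and the subtree-counting recurrence described in this entry are compact enough to sketch. The code below assumes a binary-branching training tree written as nested (label, left, right) tuples with bare strings at the frontier, and spells the internal nonterminal for the node at address k as "A@k"; both representation choices are illustrative rather than part of the original formulation. The first function applies the (b_k + 1)(c_l + 1) recurrence, and the third emits, per node, the up-to-eight rules whose probabilities have 1, b_k, c_l or b_k*c_l in the numerator and either the label total a or the node count a_j in the denominator.

```python
from collections import defaultdict

def count_subtrees(node, counts):
    """counts[id(node)] := number of DOP subtrees headed by this node."""
    label, left, right = node
    def options(child):  # a tree child offers (its count + 1) choices; a frontier string offers 1
        return 1 if isinstance(child, str) else count_subtrees(child, counts) + 1
    counts[id(node)] = options(left) * options(right)
    return counts[id(node)]

def label_totals(node, counts, totals):
    """totals[A] := sum of subtree counts over all nodes labelled A."""
    label, left, right = node
    totals[label] += counts[id(node)]
    for child in (left, right):
        if not isinstance(child, str):
            label_totals(child, counts, totals)

def reduce_to_pcfg(node, counts, totals, addr=0, rules=None):
    """Emit (lhs, (left, right), probability) rules: at most eight per training node."""
    if rules is None:
        rules = []
    label, left, right = node
    a_j, a = counts[id(node)], totals[label]
    def choices(child, k):
        if isinstance(child, str):                       # frontier symbol: external only, weight 1
            return [(child, 1)]
        return [(child[0], 1),                           # stop here (external child)
                (f"{child[0]}@{k}", counts[id(child)])]  # continue the subtree (internal child)
    for lhs, denom in ((label, a), (f"{label}@{addr}", a_j)):
        for lname, lweight in choices(left, 2 * addr + 1):
            for rname, rweight in choices(right, 2 * addr + 2):
                rules.append((lhs, (lname, rname), lweight * rweight / denom))
    for child, k in ((left, 2 * addr + 1), (right, 2 * addr + 2)):
        if not isinstance(child, str):
            reduce_to_pcfg(child, counts, totals, k, rules)
    return rules

# Toy tree: the VP node heads 2 subtrees and the S node heads (1+1)*(2+1) = 6.
tree = ("S", ("NP", "PN", "PN"), ("VP", "V", ("NP", "DET", "N")))
counts, totals = {}, defaultdict(int)
count_subtrees(tree, counts)
label_totals(tree, counts, totals)
for rule in reduce_to_pcfg(tree, counts, totals):
    print(rule)
```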
W96-0214
Efficient Algorithms for Parsing the DOP Model. Excellent results have been reported for data-oriented parsing (DOP) of natural language texts. Unfortunately, existing algorithms are both computationally intensive and difficult to implement. Previous algorithms are expensive due to two factors: the exponential number of rules that must be generated and the use of a Monte Carlo parsing algorithm. In this paper we solve the first problem by a novel reduction of the DOP model to a small, equivalent probabilistic context-free grammar. We solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents rather than the probability of a correct parse tree. Using the optimizations, experiments yield a 97% crossing brackets rate and an 88% zero crossing brackets rate. This differs significantly from the results reported by Bod, and is comparable to results from a duplication of Pereira and Schabes' experiment on the same data. We show that Bod's results are at least partially due to an extremely fortuitous choice of test data and partially due to using cleaner data than other researchers. We give a polynomial-time conversion of a DOP model into an equivalent PCFG whose size is linear in the size of the training set.
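The deterministic parsing strategy mentioned here, maximizing the expected number of correct constituents, reduces to a small dynamic program once the per-span constituent probabilities are available. The sketch below assumes a precomputed table g[s][t] mapping each nonterminal X to the probability that X spans words s..t (obtained elsewhere from inside and outside probabilities); the names and the binary-branching assumption are illustrative.

```python
def max_constituents_parse(n, labels, g):
    """Chart combination step that maximizes the expected number of correct constituents.

    best[s][t] is the expected number of correct constituents within span s..t;
    choice[s][t] records the chosen label and split point so a tree can be recovered.
    """
    best = [[0.0] * n for _ in range(n)]
    choice = [[None] * n for _ in range(n)]
    for length in range(1, n + 1):
        for s in range(n - length + 1):
            t = s + length - 1
            # label most likely to appear over this span in the correct parse
            label, label_score = max(((X, g[s][t].get(X, 0.0)) for X in labels),
                                     key=lambda item: item[1])
            # best binary split of the span into two already-solved sub-spans
            split_score, split = 0.0, None
            for k in range(s, t):
                score = best[s][k] + best[k + 1][t]
                if score > split_score:
                    split_score, split = score, k
            best[s][t] = label_score + split_score
            choice[s][t] = (label, split)
    return best[0][n - 1], choice
```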
corpus based pp attachment ambiguity resolution with a semantic dictionary this paper deals with two important ambiguities of natural language prepositional phrase attachment and word sense ambiguity we propose a new supervised learning method for ppattachment based on a semantically tagged corpus because any sufficiently big sensetagged corpus does not exist we also propose a new unsupervised context based word sense disambiguation algorithm which amends the training corpus for the pp attachment by word sense tags we present the results of our approach and evaluate the achieved pp attachment accuracy in comparison with other methods the problem with successful resolution of ambiguous prepositional phrase attachment is that we need to employ various types of knowledgeconsider for example the following sentencethe pp for children can be either adjectival and attach to the object noun books or adverbial and attach to the verb buy leaving us with the ambiguity of two possible syntactic structures adj vp adv vp ppit is obvious that without some contextual information we cannot disambiguate such a sentence correctlyconsider however the next sentencein this case we can almost certainly state that the pp is adverbial ie attached to the verbthis resolution is based on our life time experience in which we much more often encounter the activity which can be described as quotbuying things for moneyquot than entities described as quotbooks for moneyquot1 at the moment we do not have a computer database containing life time experiences and therefore we have to find another way of how to decide the correct pp attachmentone of the solutions lies in the exploration of huge textual corpora which can partially substitute world knowledgepartially because we do not know how wide a context what type of general knowledge or how deep an inference has to be applied for a successful disambiguationif we limit the context around the prepositional phrase to include only the verb its object and the pp itself the human performance on pp attachment is approximately 882 accurate decisions rrr94because people are capable of utilising their world knowledge the remaining inaccuracy must be attributed to the lack of a wider context2statistically each preposition has a certain percentage of occurrences for each attachment relying on which would provide us with approximately 727 of correct attachments cb95if we manage to partially substitute the world knowledge the resulting accuracy would lie in the range between 727 and 882these are the boundaries we expect an automatic system to score withinaltman and steedman as88 have shown that in many cases pp can be attached correctly only if the context of the current discourse is usedusing the discourse context is however extremely difficult because we do not have enough theoretical background to decide which bits of context are needed to correctly disambiguate and which are irrelevantthere have been numerous attempts to substitute context by superficial knowledge extracted from a large corpuspioneering research on corpusbased statistical pp attachment ambiguity resolution has been done by hindle and rooth in hr93they extracted over 200000 verbnounpreposition triples with unknown attachment decisionsan iterative unsupervised method was then used to decide between adjectival and adverbial attachment in which the decision was based on comparing the cooccurence probabilities of the given preposition with the verb and with the noun in each quadrupleanother promising approach is the 
transformationbased rule derivation presented by brill and resnik in br941 which is a simple learning algorithm that derives a set of transformation rulesthese rules are then used for pp attachment and therefore unlike the statistical methods it is unnecessary to store huge frequency tablesbrill and resnik had reported 818 success of this method on 500 randomlyselected sentencesthe current statistical stateofthe art method is the backedoff model proposed by collins and brooks in cb95 which performs with 845 accuracy on standalone quadruplesmost of the methods however suffer from a sparse data problemall are based on matching the words from the analysed sentence against the words in the training setthe problem is that only exact matches are allowedthe backoff model showed an overall accuracy of 845 but the accuracy of full quadruple matches was 926due to the sparse data problem however the full quadruple matches were quite rare and contributed to the result in only 48 of casesthe accuracy for a match on three words was also still relatively high while for doubles and singles it dropped substantially cb95this originated our assumption that if the number of matches on four and three words was raised the overall accuracy would increase as wellbecause collins and brooks backingoff model is very profound we could not find a way of improving its accuracy unless we increased the percentage of full quadruple and triple matches by employing the semantic distance measure instead of wordstring matchingwe feel that the sentence buy books for children should be matched with buy magazines for children due to the small conceptual distance between books and magazineswhat is unknown however is the limit distance for two concepts to be matchedmany nouns in the wordnet hierarchy share the same root and there is a danger of overgeneralisationwe will try to overcome this problem through the supervised learning algorithm described hereinanother problem is that most of the words are semantically ambiguous and unless disambiguated it is difficult to establish distances between themthe pp attachment also depends on the selection of word senses and vice versa as will be shown in the result sectiona number of other researchers have explored corpusbased approaches to pp attachment that make use of word classesfor examples weischedel w91 and basil b91 both describe the use of manually constructed domain specific word classes together with corpusbased statistics in order to resolve pp attachment ambiguitybecause these papers describe results obtained on different corpora however it is difficult to make a performance comparisonwe will now discuss the issues connected with matching two different words based on their semantic distanceemploying the notion of semantic similarity it is necessary to address a number of problemsat first we have to specify the semantic hierarchysecond we need to determine how to calculate the distance between two different concepts in the hierarchyfinally we must determine how to select a sense of a word based on a context in which it appearsthe hierarchy we chose for semantic matching is the semantic network of wordnet 1m190 mi93wordnet is a network of meanings connected by a variety of relationswordnet presently contains approximately 95000 different word forms organised into 70100 word meanings or sets of synonymsit is divided into four categories out of which we will be using only verbs and nounsnouns are organised as 11 topical hierarchies where each root represents the most general concept 
for each topicverbs which tend to be more polysemous and can change their meanings depending on the kind of the object they take are formed into 15 groups and have altogether 337 possible rootsverb hierarchies are more shallow than those of nouns as nouns tend to be more easily organised by the isa relation while this is not always possible for verbsthe traditional method of evaluating semantic distance between two meanings based merely on the length of the path between the nodes representing them does not work well in wordnet because the distance also depends on the depth at which the concepts appear in the hierarchyfor example the root entity is directly followed by the concept of lifefarm while a sedan a type of a car is in terms of path more distant from the concept of express_train although they are both vehicles and therefore closer conceptsin the case of verbs the situation is even more complex because many verbs do not share the same hierarchy and therefore there is no direct path between the concepts they representthere have been numerous attempts to defme a measure for semantic distance of wordnet contained concepts re95ke96 su95 su96 etcfor our purposes we have based the semantic distance calculation on a combination of the path distance between two nodes and their depthhaving ascertained the nearest common ancestor in the hierarchy we calculate the distance as an average of the distance of the two concepts to their nearest common ancestor divided by the depth in the wordnet hierarchywhere l1 l2 are the lengths of paths between the concepts and the nearest common ancestor and d1 d2 are the depths of each concept in the hierarchy the more abstract the concepts are the bigger the distancethe same concepts have a distance equal to 0 concepts with no common ancestor have a distance equal to 1because the verb hierarchy is rather shallow and wide the distance between many verbal concepts is often in order to determine the position of a word in the semantic hierarchy we have to determine the meaning of the word from the context in which it appearsfor example the noun bank can take any of the nine meanings defined in wordnet it is not a trivial problem and has been approached by many researchers gcy92 ya93 bw94 re95 ya95 kae96 fl196 etcwe believe that the word sense disambiguation can be accompanied by pp attachment resolution and that they complement each otherat the same time we would like to note that pp attachment and sense disambiguation are heavily contextually dependent problemstherefore we know in advance that without incorporation of wide context the full disambiguation will be never reachedthe supervised learning algorithm which we have devised for the pp attachment resolution and which is discussed in chapter 3 is based on the induction of a decision tree from a large set of training examples which contain verbnounprepositionnoun quadruples with disambiguated sensesunfortunately at the time of writing this work a sufficiently big corpus which was both syntactically analysed and semantically tagged did not existtherefore we used the syntactically analysed corpus ma931 and assigned the word senses ourselvesmanual assignment however in the case of a huge corpus would be beyond our capacity and therefore we devised an automatic method for an approximate word sense disambiguation based on the following notions determining the correct sense of an ambiguous word is highly dependent on the context in which the word occurseven without any sentential context the human brain is capable 
of disambiguating word senses based on circumstances or experience3in natural language processing however we rely mostly on the sentential contexts le on the surrounding concepts and relations between themthese two problems arise 1the surrounding concepts are very often expressed by ambiguous words and a correct sense for these words also has to be determined2what relations and how deep an inference is needed for correct disambiguation is unknownwe based our wordsense disambiguating mechanism on the premise that two ambiguous words usually tend to stand for their most similar sense if they appear in the same contextin this chapter we present a similaritybased disambiguation method aimed at disambiguating sentences for subsequent ppattachment resolutionsimilar contextual situations are found in the training corpora and are used for the sense disambiguationif for example the verb buy appears in the sentence the investor bought the company for 5 million dollars and somewhere else in the training corpus there is a sentence4 the investor purchased the company for 5 million dollars we can take advantage of this similarity and disambiguate the verb quotbuyquot to its sense that is nearest to the sense of the verb purchase which is not ambiguousthe situation however might not be as simplistic as that because such obvious matches are extremely rare even in a huge corpusthe first problem is that the sample verb in the training corpus may be also ambiguouswhich sense do we therefore choosethe second problem is that there may in fact be no exact match in the training corpus for the context surrounding words and their relationsto overcome both of these problems we have applied the concept of semantic distance discussed aboveevery possible sense of all the related context words is evaluated and the best matchchosen5the proposed unsupervised similaritybased iterative algorithm for the word sense disambiguation of the training corpus looks as follows quotfor each quadruple q in the training set for each ambiguous word in the quadruple among the remaining quadruples find a set s of similar quadruples for each nonempty set s choose the nearest similar quadruple from the set s disambiguate the ambiguous word to the nearest sense of the corresponding word of the chosen nearest quadruple increase the similarity distance threshold sdt sdt 01 until all the quadruples are disambiguated or sdt 3the above algorithm can be described as iterative clustering because at first the nearest quadruples are matched and disambiguatedthen the similarity distance threshold is raised and the process repeats itself in the next iterationif a word is not successfully disambiguated it is assigned its first le the most frequent sensethe reason for starting with the best matches is that these tend to provide better disambiguationsconsider for example the following set of quadruples at first the algorithm tries to disambiguate quadruple q1 starting with the verb the algorithm searches for other quadruples which have the quadruple distance smaller than the current similarity distance thresholdfor sdt0 this means only for quadruples with all the words with semantic distance 0 ie synonymsthere are no matches found for q1 and the algorithm moves to q2 fmding quadruple q4 as the only one matching such criteriathe verb buy in q2 is disambiguated to the sense which is nearest to the sense of purchase in q4 ie mindist0the noun company cannot be disambiguated because the matched nearest quadruple q4 contains the same noun and such a 
disambiguation is not allowed the description million is monosemoussame process is called for all the remaining quadruples but further disambigaution with sdt41 is not possible the iteration threshold is increased by 01 and the algorithm starts again with the first quadrupleno match is found for q1 for any word and we have to move to quadruple q2its verb is already disambiguated therefore the algorithm looks for all the quadruples which have the quadruple distance for nouns below the sdt of oa and which contain similar nouns the quadruple q3 satisfies this criteriadistances of all the combinations of senses of the noun company and business are calculated and the nearest match chosen to disambiguate the noun company in q2 mindistthe verb buy in q2 is already disambiguated and the distance to both q2 and q4 is the same ie dqvdqv300485 where the minimum semantic distance between the nearest senses of the verb acquire and buy is miniist425 the verb acquire is disambiguated to the sense nearest to the sense of the verb buy and the algorithm proceeds to the noun business in q3the same two quadruples fall below the sdt for nouns as dqndqv30857 and the noun business of q3 is disambiguated to its sense nearest to the disambiguated sense of company in q2the verb in q4 is monosemous therefore the algorithm finds a set of similar quadruples for nouns because it has already been disambiguated in the previous steps q2 q3 and q6the nearest quadruple in this set is q2 0 and the noun company in q4 is disambiguated to the sense of the noun in q2the quadruple q5 has no similar quadruples for the current sdt and therefore the next quadruple is q6similarly to the above disambiguations both its verb and noun are disambiguatedthere is no further match for any quadruple and therefore sdt is increased to 02 and the algorithm starts with q1 again no matches are found for sdt42 for neither q1 or q5the algorithm iterates until sdt06 which enables the disambiguation of the noun plant in q1 to its sense nearest to the noun facility in q5 dqn257 as min11ist0375similarly the noun facility in q6 is disambiguated whereas the descriptions in both q1 and q5 cannot be successfully disambiguated because only a very small set of quadruples was used in this examplein this case both the description week and inspection would be assigned their most frequent senses ie the first senses of wordnetin case of a bigger training set most of the quadruples get disambiguated however with increasing sdt the disambiguation quality decreasesthe above example shows the importance of iteration because starting with lower sdt guarantees better resultsif for example there was no iteration cycle and the algorithm tried to disambiguate the quadruples in the order in which they appear the quadruple q1 would be matched with q6 and all its words would be disambiguated to inappropriate sensessuch a wrong disambiguation would further force wrong disambiguations in other quadruples and the overall result would be substantially less accurateanother advantage of this disambiguation mechanism is that the proper nouns which usually refer to people or companies can be also disambiguatedfor example an unknown name arby in quadruple acquire arby for million is matched with disambiguated noun in q6 and also disambiguated to the company1 sense rather than toperson would be dismissedsimilarity distance threshold defines the limit matching distance between two quadruplesthe matching distance between two quadruples q1v1n1pdi and q2v2n2pd2 is defined as follows where 
p is the number of pairs of words in the quadruples which have a common semantic ancestor lep 1 2 or 3 and its purpose is to give higher priority to matches on more wordsthe distance of the currendy disambiguated word is squared in order to have a bigger weight in the distance da the distance between two words d is defined as the minimum semantic distance between all the possible senses of the words w1 and w2two quadruples are similar if their distance is less or equal to the current similarity distance threshold and if the currently disambiguated word is similar to the corresponding word in the matched quadrupletwo words are similar if their semantic distance is less than 10 and if either their character strings are different or if one of the words has been previously disambiguatedfor the attachment of the prepositional phrases in unseen sentences we have modified quinlan id3 algorithm q86 br91 which belongs to the the family of inductive learning algorithmsusing a huge training set of classified examples it uncovers the importance of the individual words and creates a decision tree that is later used for classification of unseen examples7the algorithm uses the concepts of the wordnet hierarchy as attribute values and creates the decision tree in the following way let t be a training set of classified quadruples1if all the examples in t are of the same pp attachment type then the result is a leaf labelled with this type else let us briefly explain each step of the algorithm1if the examples belong to the same class the tree expansion terminateshowever such situation is very unlikely due to the nonperfect training datatherefore we relaxed the complete homogeneity condition by terminating the expansion when more than 77 of the examples in the set belonged to the same class if the set t is still heterogeneous and there are no more attribute values to divide with the tree is terminated and the leaf is marked by the majority class of the node2we consider the most informative attribute to be the one which splits the set t into the most homogenous subsets ie subsets with either a high percentage of samples with adjectival attachments and a low percentage of adverbial ones or viceversathe optimal split would be such that all the subsets would contain only samples of one attachment typefor each attribute a we split the set into subsets each associated with attribute value aw and containing samples which were unifiable with value aw then we calculate the overall heterogeneity of all these subsets as a weighted sum of their expected information oh y pplog2plog 2 p where p and p represent the conditional probabilities of the adverbial and adjectival attachments respectivelythe attribute with the lowest overall heterogeneity is selected for the decision tree expansionin the following example we classification in this case means deciding whether the pp is adjectival or averbialverbs of all the node quadruples belong to the wordnet class v nouns to the class n and descriptions pthe class d we assume in this example that the wordnet hierarchy class v has three subclasses class n has two subclasses and class d has also two subclasses 8we use the values vi v2 and v3 ni and n2 and dl and 132 as potential values of the attribute a splitting by verb results in three subnodes with an overall heterogeneity 056 splitting by noun in two subnodes with oh099 and by description with oh088therefore in this case we would choose the verb as an attribute for the tree expansion3the attribute is either a verb noun or a 
description noun9its values correspond to the concept identificators of wordnetat the beginning of the tree induction the top roots of the wordnet hierarchy are taken as attribute values for splitting the set of training examplesat first all the training examples are split into subsets which correspond to the topmost concepts of wordnet which contains 11 topical roots for nouns and description nouns and 337 for verbs the training examples are grouped into subnodes according to the disambiguated senses of their content wordsthis means that quadruples with words that belong to the same top classes start at the same nodeeach group is further split by the attribute which provides less heterogeneous splitting branches that lead to empty subnodes are pruned this process repeats in all the emerging subnodes using the attribute values which correspond to the wordnet hierarchy moving from its top to its leaveswhen splitting the set of training examples by the attribute a according to its values avv the emerging subsets contain those quadruples whose attribute a value is lower in the wordnet hierarchy ie belongs to the same classif some quadruples had the attribute value equal to the values of a an additional subset is added but its further splitting by the same attribute is prohibitedas soon as the decision tree is induced classifying an unseen quadruple is a relatively simple procedureat first the word senses of the quadruple are disambiguated by the algorithm described in chapter 2 which is modified to exclude the sdt iteration cyclesthen a path is traversed in the decision tree starting at its root and ending at a leafat each internal node we follow the branch labelled by the attribute value which is the semantic ancestor of the attribute value of the quadruple the quadruple is assigned the attachment type associated with the leaf ie adjectival or adverbialif no match is found for the attribute value of the quadruple at any given node the quadruple is assigned the majority type of the current nodethe training and testing data extracted from the penn tree bank ma93 are identical to that used by irrr94 cb951 for comparison purposesquotthe data contained 20801 training and 3097 testing quadruples with 51 prepositions and ensured that there was no implicit training of the method on the test set itselfwe have processed the training data in the following way cr converted a the verbs into lower cases e converted all the words into base forms 0 replaced four digit numbers by year el replaced nouns ending by ing and not in wordnet by action 0 eliminated examples with verbs that are not in wordnet a eliminated examples with lowercase nouns that are not in wordnet except for pronouns whose senses were substituted by universal pronoun synsets er the uppercase nouns were assigned their lower case equivalent senses plus the senses of company and person al the upper case nouns not contained in wordnet were assigned the senses of company and person 0 disabled all the intransitive senses of verbs e assigned all the words the sets of wordnet senses the above processing together with the elimination of double occurrences and contradicting examples reduced the training set to 17577 quadruples with an average quadruple ambiguity of 86 as of the ambiguity definition in section 12because the induction of the decision tree for the pp attachment is based on a supervised learning from sensetagged examples it was necessary to sensedisambiguate the entire training setthis was done by the iterative algorithm described in 
chapter 2to form an approximate evaluation of the quality of this disambiguation we have randomly selected 500 words manually12 assigned sets of possible senses to them and compared these with the automatic disambiguationif the automatically chosen sense was present in the manually assigned set the disambiguation was considered correctout of these 500 words 362 could be considered correctly disambiguated which represents slightly over 72we can argue that the insufficient disambiguation context sparse data problem and empirically set iteration step in the disambiguating algorithm lead to an unreliable disambiguationhowever it is necessary to maintain the understanding that it is the pp attachment rather than the sense disambiguation that is our primary goaladditionally because the words of the input sentences for the pp attachment are to be assigned senses in the same manner the sense disambiguation error is concealedalhouhg the disambiguation of the training set is computationally the most expensive part of the system it is done only oncethe disambiguation of unseen examples is done by the same algorithm which is modified to exclude the sdt iteration cyclesit is therefore reasonably fast even for reallife applicationsthe pp attachment using the decision tree is extremely efficient and reliablewe have induced the decision tree separately for each preposition in the training corpus covering the 51 most common prepositionsthe induced decision trees are relatively shallow and the classification of unseen sentences is rapidas shown in the following table our algorithm appears to have surpassed many existing methods and is very close to human performance on the same testing data13the fact that many words in both the training and the testing sets were not found in wordnet caused a reduction in the accuracythis is because training examples with an error or with a word not found in wordnet could not fully participate on the decision tree inductionthis reduced the original training set of 20801 quadruples to 17577in the case of the testing set many of the 3097 testing quadruples were also handicapped by having no entry in wordnetattachment of these had to be based on a partial quadruple and was usually assigned at a higher level of the decision tree which reduced the overall accuracyin order to conduct a fair comparison however we used the same testing set as the methods shown in the above tableif just the examples with full wordnet entries were used the accuracy rose to 908although the algorithm does not provide high enough accuracy from the point of view of word sense disambiguation it is more important to bear in mind that our main goal is the pp attachment ambiguity resolutionthe relatively low accuracy of the word sense disambiguation is compensated by the fact that the same sense disambiguation error is present in both the training set and the classified quadruplethe use of the same training set for both the pp attachment and the sense disambiguation provides a positive bias in favour of correct attachmentuntil we have a sufficiently big enough word sense tagged corpus we can only hypothesise on the importance of the correct sense disambiguation for the pp attachmentexperiments however show that if the positive bias between the word senses of the training set and the testing quadruples is removed the accuracy of the pp attachment falls substantiallywe have conducted an experiment in which the disambiguated senses of the testing set were replaced by the most frequent senses ie the first senses 
as defined in wordnetthis caused a substantial reduction of accuracy to 765the fact that our approximate disambiguation leads to 881 correct pp attachment is partly to be attributed to the positive bias of disambiguation of the testing examples against the same training set which is also used for the decision tree inductionthe disambiguation errors are thus hidden by their replication in both the training and the testing setsas we have already mentioned collins and brooks cb951 based their method on matching the testing quadruples against the set of training examplesthe decision on the attachment was made according to which attachment type had a higher count in the training corpusif no match for the given quadruple was found the algorithm backedoff to a combined frequency count of the occurences of matches on three words only ie on the verbnounpreposition verbprepositiondescription and nounprepositiondescriptionif no match was found on any of the three words combination the algorithm backedoff to a combined match on two words ie one of the content words with a prepositionif there was further no match found on two words the attachment type was assigned according to the prepositional statistics or if the preposition was not present in the training corpus the quadruple was assigned the adjectival defaultthere was a substantial decrease of accuracy between the triples and doubles stageour algorithm on the other hand has substantially reduced the number of classifications based on fewer wordsthis is because at the top of the decision tree all of the semantic tops of all of the content words of the given quadruple are compared with the semantic generalisations of the training examples represented through the nodes of the decision treeonly if the homogeneity termination condition is satisfied before all three content words are compared the decision is based on less than a full quadruplethe decision tree therefore represents a very useful mechanism for determining the semantic level at which the decision on the pp attachment is madecollins and brooks have also demonstrated the importance of low count events in training data by an experiment where all counts less than 5 were put to zerothis effectively made their algorithm ignore low count events which resulted in the decrease of accuracy from 841 to 816this important feature is maintained in our approach by small homogenous leaves at higher levels of the decision tree which usually accommodate the low count training examplesfigure 3 shows an interesting aspect of learning the prepositional phrase attachment from a huge corpuswe have selected five most common prepositions and compared their learning curvesit turned out that for the size of a training set smaller than 1000 examples learning is rather unreliable and dependent on the quality of the chosen quadruplesfor a bigger training set the accuracy grows with its size until a certain maximum accuracy level is reachedthis level is different for different prepositions and we hypothesise that it can be broken only when a wider sentential or discourse context is usedaccuracysize 0 500 1000 1500 2000 2500 3000 3500 training corpus size our algorithm also provides a qualification certainty based on the heterogeneity of the decision tree leavesthe tree leaves are heterogeneous for two reasons 1 the tree expansion is terminated when a node contains more than 77 of examples belonging to the same class or 2 when there are examples in the node that cannot be further divided because the tree has reached the 
bottom of the wordnet hierarchythe table 2 shows that the incorrect attachments usually occur with a lower certainty than the correct ones ie most of the incorrect attachments are marked as less certain the prepositional statistics indicates that there were no matches found for the given quadruple and the attachment was decided based on the statistical frequency of the given prepositionadjectival default was used in three cases when the preposition was not found in the training setthe certainty between 05 and 08 accounts mostly for the examples whose attachment was made through the decision tree but there was either a small number of examples that participated on the creation of the tree branch or the examples were not sufficiently representative most of the examples in this category possibly require a wider sentential context for further improvement of accuracythe certainty bigger than 08 and smaller than 10 accounts for the situations when the decision was based on a leaf whose further expansion was terminated by the homogeneity termination condition or simply some noisy or incorrectly disambiguated examples were involved in its creation15examples which did not reach the bottom of the decision tree and were assigned the majority class of the node from which there was no appropriate branch to follow were all classified with certainty between 05 and 10the decision with certainly 10 is always based on a homogenous leafit does not exhibit the highest accuracy because many of the homogenous leaves are formed from only very few examples and many of these are erroneousas figure 3 shows each preposition has a different saturation accuracy which cannot be surpassed unless a wider sentential context is usedwe believe however that a bigger corpus would provide better wordsense disambiguation which in turn would allow to increase the homogeneity limit for the termination of the tree expansionheterogeneous nodes which force the expansion of the decision tree to unnecessary extent are caused by i examples with an error in the word sense disambiguation or by 2 examples that can be both adjectival and adverbial if taken out of contextthe second case cannot be eliminated by a bigger training corpus however the reduction of noisy examples would contribute to an increase in accuracy mainly in the case of small nodes which can now contain more noisy examples than correct ones and thus force a wrong attachmentwe feel that a bigger corpus would provide us with an increase of accuracy of quotcertainty iquot attachments which partly includes attachments based on the small leavesalso we believe that a bigger training corpus would increase performance in the case of less frequent prepositions which do not have enough training examples to allow for induction of a reliable decision treethe most computationally expensive part of the system is the word sense disambiguation of the training corpusthis however is done only once and the disambiguated corpus is stored for future classifications of unseen quadruplesthe above experiments confirmed the expectations that using the semantic information in combination with even a very limited context leads to a substantial improvement of nlp techniquesalthough our method exhibits an accuracy close to the human performance we feel that there is still a space for improvement particularly in using a wider sentential context more training data andor more accurate sense disambiguation techniquewe believe that there is further space for elaboration of our method in particular it 
would be interesting to know the exact relations between the accuracy and the termination condition and between the corpus size and the optimum termination condition separately for each prepositionat the moment we are working on an implementation of the algorithm to work on with a wider sentential context and on its incorporation within a more complex nlp system
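to make the iterative procedure described above more concrete, the following is a minimal sketch of the similarity-distance-threshold (sdt) disambiguation loop over (verb, noun, preposition, description) quadruples. it is not the authors' code: the paper's quadruple-level matching distance is simplified here to a per-word distance, the paper's own semantic distance is replaced by a plain wordnet path-based distance from nltk, and all class and helper names (Quadruple, sense_distance, word_distance, disambiguate) are assumptions for illustration.

```python
from itertools import product
from nltk.corpus import wordnet as wn

def sense_distance(s1, s2):
    """Distance between two synsets; smaller means semantically closer.
    A WordNet path-based distance stands in for the paper's own measure."""
    sim = s1.path_similarity(s2)
    return 1.0 / sim - 1.0 if sim else float("inf")

def word_distance(senses1, senses2):
    """Minimum distance over all sense pairs of two (possibly ambiguous) words."""
    return min(sense_distance(a, b) for a, b in product(senses1, senses2))

class Quadruple:
    """A (verb, noun, preposition, description) tuple with candidate sense sets."""
    def __init__(self, verb, noun, prep, desc):
        self.prep = prep
        self.senses = {"v": wn.synsets(verb, wn.VERB),
                       "n": wn.synsets(noun, wn.NOUN),
                       "d": wn.synsets(desc, wn.NOUN)}
        # monosemous words count as already disambiguated
        self.fixed = {k: len(v) == 1 for k, v in self.senses.items()}

def disambiguate(quads, sdt_step=0.1, sdt_max=1.0):
    """Iteratively pin down word senses, starting with a strict similarity
    distance threshold (SDT) and relaxing it only when no more matches exist."""
    sdt = sdt_step
    while sdt <= sdt_max:
        progress = False
        for q in quads:
            for slot in ("v", "n", "d"):
                if q.fixed[slot] or not q.senses[slot]:
                    continue
                # other quadruples already disambiguated in this slot whose
                # distance to q falls below the current threshold
                matches = [p for p in quads
                           if p is not q and p.fixed[slot] and p.senses[slot]
                           and word_distance(q.senses[slot], p.senses[slot]) <= sdt]
                if not matches:
                    continue
                best = min(matches,
                           key=lambda p: word_distance(q.senses[slot], p.senses[slot]))
                target = best.senses[slot][0]
                # choose the sense of q nearest to the matched, fixed sense
                q.senses[slot] = [min(q.senses[slot],
                                      key=lambda s: sense_distance(s, target))]
                q.fixed[slot] = True
                progress = True
        if not progress:
            sdt += sdt_step          # relax the threshold and start again
    # anything still ambiguous falls back to WordNet's first (most frequent) sense
    for q in quads:
        for slot in ("v", "n", "d"):
            if not q.fixed[slot] and q.senses[slot]:
                q.senses[slot] = [q.senses[slot][0]]
    return quads
```

starting from a strict threshold and relaxing it only when no further matches can be found mirrors the paper's observation that early, low-threshold matches are the most reliable, while a single pass in corpus order risks propagating a wrong early disambiguation.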
W97-0109
corpus based pp attachment ambiguity resolution with a semantic dictionary. this paper deals with two important ambiguities of natural language: prepositional phrase attachment and word sense ambiguity. we propose a new supervised learning method for pp attachment based on a semantically tagged corpus. because no sufficiently large sensetagged corpus exists, we also propose a new unsupervised context based word sense disambiguation algorithm which amends the training corpus for pp attachment with word sense tags. we present the results of our approach and evaluate the achieved pp attachment accuracy in comparison with other methods. we developed a customized explicit wsd algorithm as part of our decision tree system.
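before moving on, here is a small sketch of the split criterion used by the modified id3 induction in the pp attachment paper above: each candidate attribute (verb, noun or description) partitions the training quadruples by wordnet class, the attribute with the lowest overall heterogeneity (a weighted entropy) is chosen, and expansion stops under the relaxed homogeneity condition (more than 77% of examples in one class). the data layout (dicts with verb/noun/description/attachment keys) and the wordnet_class_of helper are illustrative assumptions, not the original implementation.

```python
import math
from collections import defaultdict

HOMOGENEITY_LIMIT = 0.77   # stop expanding once one class has more than 77%

def entropy(examples):
    """-p*log2(p) - (1-p)*log2(1-p) over adjectival vs adverbial attachments."""
    if not examples:
        return 0.0
    p = sum(1 for e in examples if e["attachment"] == "adjectival") / len(examples)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def overall_heterogeneity(examples, attribute, wordnet_class_of):
    """Weighted entropy of the subsets induced by splitting on `attribute`."""
    groups = defaultdict(list)
    for e in examples:
        groups[wordnet_class_of(e[attribute])].append(e)
    n = len(examples)
    return sum(len(g) / n * entropy(g) for g in groups.values())

def choose_split(examples, wordnet_class_of):
    """Return None when the node is homogeneous enough, else the attribute
    (verb, noun or description) giving the least heterogeneous split."""
    if not examples:
        return None
    p = sum(1 for e in examples if e["attachment"] == "adjectival") / len(examples)
    if max(p, 1.0 - p) > HOMOGENEITY_LIMIT:
        return None
    return min(("verb", "noun", "description"),
               key=lambda a: overall_heterogeneity(examples, a, wordnet_class_of))
```

wordnet_class_of would map a disambiguated sense to the wordnet class relevant at the current level of the tree, moving from the taxonomy's top concepts towards its leaves as the tree grows.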
finding terminology translations from nonparallel corpora we present a statistical word feature the word relation matrix which can be used to find translated pairs of words and terms from nonparallel corpora across language groups online dictionary entries are used as seed words to generate word relation matrices for the unknown words according to correlation measures word relation matrices are then mapped across the corpora to find translation pairs translation accuracies are around 30 when only the top candidate is counted nevertheless top 20 candidate output give a 509 average increase in accuracy on human translator performance despite a surge in research using parallel corpora for various machine translation tasks the amount of available bilingual parallel corpora is still relatively small in comparison to the large amount of available monolingual textit is unlikely that one can find parallel corpora in any given domain in electronic formthis is a particularly acute problem in language pairs such as chineseenglish or japaneseenglish where there are fewer translated texts than in european language pairswhile we should make use of any existing parallel corpora as lexical translation resources we should not ignore the even larger amount of monolingual texthowever using nonparallel corpora for lexical translation has been a daunting task considered much more difficult than that with parallel corporain this paper we present an initial algorithm for translating technical terms using a pair of nonparallel corporaevaluation results show translation precisions at around 30 when only the top candidate is consideredwhile this precision is lower than that achieved with parallel corpora we show that top 20 candidate output from our algorithm allows translators to increase their accuracy by 509in the following sections we first describe a pair of nonparallel corpora we use for experiments and then we introduce the word relation matrix a statistical word feature representation for technical term translation from nonparallel corporawe evaluate the effectiveness of this feature with two sets of experiments using englishenglish and englishjapanese nonparallel corporafew attempts have been made to explore nonparallel corpora of monolingual texts in the same domainearly work uses a pair of nonparallel texts for the task of lexical disambiguation between several senses of a word this basic idea extends to choosing a translation among multiple candidates given collocation informationa similar idea is later applied by to show the plausibility of correlations between words in nonparallel texthe proposed a matrix permutation method matching cooccurrence patterns in two nonparallel texts but noted that computational limitations hamper further extension of this methodusing the same idea demonstrated how to eliminate candidate words in a bilingual dictionaryall the above works point to a certain discriminatory feature in monolingual texts context and word relationshowever these works remain in the realm of solving ambiguities or choosing the best candidate among a small set of possibilitiesit is argued in that feature vectors of 100000 dimensions are likely to be needed for high resolution discriminant analysisit is so far questionable whether feature vectors of lower dimensions are discriminating enough for extracting bilingual lexical pairs from nonparallel corpora with a large number of candidatesis it possible to achieve bilingual lexicon translation by looking at words in relation to other wordsin this 
paper we hope to she would some light on this questionin our experiments we use two sets of nonparallel corpora wall street journal from 1993 and 1994 divided into two nonoverlapping partseach resulting english corpus has 1036m bytes of data wall street journal in english and nikkei financial news in japanese from the same time periodthe wsj text contains 49m bytes of data and the nikkei 127m bytessince the nikkei is encoded in twobyte japanese character sets the latter is equivalent to about 60m bytes of data in englishthe english wall street journal nonparallel corpus gives us an easier test set on which to startthe output of this corpus should consist of words matching to themselves as translationsit is useful as a baseline evaluation test set providing an estimate on performancethe wsjnikkei corpus is the most nonparallel type of corpusin addition to being written in languages across linguistic families by different journalists wsjnikkei also share only a limited amount of common topicthe wall street journal tends to focus on yous domestic economic and political news whereas the nikkei financial news focuses on economic and political events in japan and in asiadue to the large difference in content language writing style we consider this corpus more difficult than othershowever the result we obtain from this corpus gives us a lowerbound on the performance of our algorithmbilingual lexicon translation algorithms for parallel corpora in general make use of fixed correlations between a pair of bilingual terms reflected in their frequent cooccurrences in translated texts to find lexicon translationswe use correlations both between monolingual lexical units and between bilingual or multilingual lexical units to find a consistent pattern which is represented as statistical word features for translationwe illustrate the possible correlations using the word debentures in the two different parts of wsjfigure 1 shows segments from both texts containing the word debenturesuniversal said its 15 34 debentures due dec sold 75 million of 6 debentures priced at par and due sept sold 40 million of 6 14 convertible debentures priced at par and due march 15 gte offered a 250 million issue of 8 12 debentures due in 30 years 250 million of notes due 1997 and 250 million of debentures due 2017 sold 300 million of 7 12 convertible debentures due 2012 at par said it agreed to issue 125 million canadian in convertible debentures senior subordinated debentures was offered through drexel said it completed the redemption of all 16 million of its 9 subordinated debentures due 2003 moody assigned a baa3 rating to a proposed 100 million convertible subordinated debenture it and its 12 12 senior subordinated debentures at par 20 million of convertible debentures due june 1 issues of 110 million of senior notes due 1997 and 115 million of convertible debentures due said it reached an agreement with holders of 30 million of its convertible subordinated debentui downgraded the subordinated debentures of bank of montreal common shares and 35 million of convertible debentures due 2012 35 million of convertible debentures due may 15 financed with 450 million of new western union senior secured debentures to be placed by dre commission to issue as much as 125 million of 30year debentures packaged with common stoc to redeem its entire 55 million face amount of 8 34 convertible subordinated debentures dul figure 1 shows that we use online dictionaries to provide the it seed word liststo avoid problems of polysemy and 
nonstandardization in dictionary entries we choose a more reliable less ambiguous subset of dictionary entries as the seed word listthis subset contains dictionary entries which occur at midrange frequency in the corpus so that they are more likely to be content wordsthey must occur in both sides of the nonparallel corpora and have fewer number of candidate translationssuch seed words serve as the textual anchor points in nonparallel corporafor example we obtained 1416 entries from the japaneseenglish online dictionary edict using these criteriaword correlations are important statistical information which has been successfully employed to find bilingual word pairs from parallel corporaword correlations w are computed from general likelihood scores based on the cooccurrence of words in common segmentssegments are either sentences paragraphs or string groups delimited by anchor points where a number of segments where both words occur number of segments where only w3 occur c number of segments where only wt occur d number of segments where neither words occur all correlation measures use the above likelihood scores in different formulationsin our word relation matrix representation we use the correlation measure w between a seed word tv and an unknown word tv a b c and d are computed from the segments in the monolingual text of the nonparallel corpusw is the weighted mutual information in our algorithm since it is most suitable for lexicon compilation of midfrequency technical words or terms as an initial step all pr are precomputed for the seed words in both languageswe have experimented with various segment sizes ranging from phrases delimited by all punctuations a sentence to an entire paragraphfrom our experiment results we conclude that the right segment size is a function of the frequency of the seed words segment size oc frequency if the seed words are frequent and if the segment size is as large as a paragraph size then these frequent seed words could occur in every single segmentin this case the chances for cooccurrence between such seed words and all new words are very high close to onewith large segments such seed words are too biasing and thus smaller segment size must be usedconversely we need a larger segment size if seed word frequency is lowconsequently we use the paragraph as the segment size for our experiment on wall street journalnikkei corpus since all the seed words are midfrequency content wordswe computed all binary vectors of the 1416 seed words w where the ith dimension of the vector is 1 if the seed word occurs in the ith paragraph in the text zero otherwisewe use a smaller segment size between any two punctuations for the segment size for the wall street journal englishenglish corpus since many of the seed words are frequentnext pr is computed for all unknown words z in both textsthe worm vectors are then sorted according to wthe most correlated seed word w will have the top scoring w valuesconsequently if w has a high y value everywhere then the cosine measure between any tot and this w would be highthis violates our assumptions in that although w and wt might not correlate closely with the same set of seed words the matching score would be nevertheless highthis is another supporting reason for choosing midfrequency content words as seed wordsthe evaluation on the wsjwsj englishenglish corpus is intended as a pilot test on the discriminative power of the word relation matrixthis nonparallel corpus has minimal content and style differencesfurthermore using such an 
englishenglish test set the output can be evaluated automaticallya translated pair is considered correct if they are identical english words307 seed words are chosen according to their occurrence frequency to minimize the number of function wordshowever a frequency of 3900 in a corpus of 15m words is quite highas a result a segment delimited by two punctuations is used as the context window sizefurthermore the frequent nature of the seed words led to our choice of the euclidean distance instead of the cosine measurethe choices of segment size seed words and euclidean distance measure are all direct consequences of the atypical nature of the englishenglish pilot test setwe selected a test set of 582 by 687 single words with midrange frequency from the wsj textswe computed the worm feature for each of these test words and computed the euclidean distance between every word in these setswe then calculated the accuracy by counting the number of words whose top one candidate is identical to itself obtaining a precision of 29by allowing ntop candidates the accuracy improves as shown in the graphs for 582 words output in figure 5 if we find the correct translation among the top 100 candidates we obtain a precision of around 58ntop candidates are useful as translator aidsmeanwhile precisions for translating less polysemous content words are higherif only the 445 content words are kept from the 582word set the precisions at different top n candidates for the 445word set are higher as shown in figure 5 by the dotted linewe believe the accuracy would be even higher if we only look at really unambiguous test words such as an entire technical termit is well known that polysemous words usually have only one sense when used as part of a collocation or technical term evaluations are also carried out on the wall street journal and nikkei financial news corpus matching technical terms in japanese to their counterpart in englishthis evaluation is a difficult test case because the two languages english and japanese are across language groups the two texts wall street journal and nikkei financial news do not focus on the same topics and the two texts are not written by the same authors1416 entries from the japaneseenglish online dictionary edict with occurrence frequencies between 100 and 1000 are chosen as seed wordssince these seed words have relatively low frequencies compared to the corpus size of around 7 million words for the wsj text we chose the segment size to be that of an entire paragraphfor the same reason the cosine measure is chosen as a matching functionfor evaluation we need to select a test set of known technical term translationswe handtranslated a selected set of technical terms from the nikkei financial news corpus and looked them up in the wall street journal textamong these 19 terms shown in figure 6 have their counterparts in the wsj textthree evaluations were carried outin all cases a translation is counted as correct if the top candidate is the right onetest i tries to find the correct translation for each of the nineteen japanese terms among the nineteen english termsto increase the candidate numbers test ii is carried out on 19 japanese terms with their english counterparts plus 293 other english terms giving a total of 312 possible english candidatesthe third test set iii consists of the nineteen japanese terms paired with their translations and 383 single english words in additionthe accuracies for the three test sets are shown in figure 7 precision ranges from 211 to 526figure 8 
shows the ranking of the true translations among all the candidates for all 19 cases for the purpose of a translatoraidmost of the correct translations can be found among the top 20 candidatesthe previous two evaluations show that the precision of bestcandidate translation using our algorithm is around 30 on averagewhile it is far from ideal this is the first result of terminology translation from nonparallel corporameanwhile we have found that the correct translation is often among the top 20 candidatesthis leads us to conjecture that the output from this algorithm can be used as a translatoraidto evaluate this we again chose the nineteen englishjapanese terms from the wsjnikkei nonparallel corpus as a test setwe chose three evaluators who are all native chinese speakers with bilingual knowledge in english and chinesechinese speakers are able to recognize most japanese technical terms since they are very similar to chinesewe asked them to translate these nineteen japanese terms into english without using dictionaries or other reference materialthe translators have some general knowledge of international newshowever none of them specializes in economics or finance which is the domain of the wsjnikkei corpustheir output is in set aour system then proposes two sets of outputs for each japanese term our system proposes the top20 candidates from the set of 312 noun phrasesusing this candidate list the translators again translate the nineteen termstheir output based on this information is in seth for each japanese term our system proposes the top20 candidates from the set containing 383 single words plus the nineteen termsthe result of human translation based on this candidate list is in set c sets a b and c are all compared to the original translation in the corpusif the translation is the same as in the corpus then it is judged as correctthe results are shown in figure 9evaluators on average are able to translate 8 terms out of 19 by themselves whereas they can translate 18 terms on average with the aid of our outputtranslation precision increases on the average by 509we have described a statistical word signature feature the word relation matrix that can be used to find matching pairs of content words or terms in a pair of samedomain nonparallel bilingual textsevaluation shows a precision of about 30we showed that humans are able to translate more than twice as many japanese technical terms into english when our system output is used compared to translating a random set of 19 japanese terms without aidit is also a significant initial result for lexical translation from truly nonparallel corpora particularly across language groupsfor future work the quality of seed words can be improved by using a training algorithm to select seed words according to their discriminative powerthe dimensionality of worm vectors we have chosen is not optimala high dimensionality of vectors is usually favorable on the other hand high dimensionality can also lead to noise therefore dimensionality reduction methods such as the singular value decomposition or clustering is often usedin our case this means that we should choose a large subset of highly discriminative seed word pairsadditionally the word relation matrix could be used in combination with other word signature features for nonparallel corporain addition to the evaluation results we have also discovered that the content words in the same segment with a word or term all contribute to the occurrence of this wordthis feature represents some of the 
longdistance relations between the word and multiple other words which are not its immediate neighborsthe information can be used in language modeling in addition to the currently popular ngram models and word trigger pairs
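as a rough illustration of the word relation matrix feature described above: for each unknown word, co-occurrence with every seed word is counted over text segments (paragraphs here), and a correlation score is derived from the counts. the paper's exact weighted mutual information formulation is approximated by a simple pointwise term, and the function names are assumptions.

```python
import math

def cooccurrence_counts(seed, word, segments):
    """a: both occur, b: only the seed, c: only the unknown word, d: neither."""
    a = b = c = d = 0
    for seg in segments:              # each segment is a set (or list) of tokens
        s, w = seed in seg, word in seg
        if s and w:   a += 1
        elif s:       b += 1
        elif w:       c += 1
        else:         d += 1
    return a, b, c, d

def weighted_mutual_information(seed, word, segments):
    """p(seed, word) * log( p(seed, word) / (p(seed) * p(word)) ), used here as
    a rough stand-in for the paper's correlation measure W(w_s, w_u)."""
    a, b, c, d = cooccurrence_counts(seed, word, segments)
    n = a + b + c + d
    if a == 0 or n == 0:
        return 0.0
    p_joint = a / n
    p_seed = (a + b) / n
    p_word = (a + c) / n
    return p_joint * math.log(p_joint / (p_seed * p_word))

def worm_vector(word, seeds, segments):
    """The WoRM feature: one correlation score per seed word, in seed order."""
    return [weighted_mutual_information(s, word, segments) for s in seeds]
```

segments can be any iterable of token sets; with mid-frequency seed words, paragraph-sized segments keep the co-occurrence counts informative, in line with the segment-size discussion above.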
W97-0119
finding terminology translations from nonparallel corpora. we present a statistical word feature, the word relation matrix, which can be used to find translated pairs of words and terms from nonparallel corpora across language groups. online dictionary entries are used as seed words to generate word relation matrices for the unknown words according to correlation measures. word relation matrices are then mapped across the corpora to find translation pairs. translation accuracies are around 30% when only the top candidate is counted. nevertheless, top 20 candidate output gives a 50.9% average increase in accuracy in human translator performance. in our work, a translation model applied to a pair of unrelated languages with a random selection of test words, many of them multiword terms, gives a precision around 30% when only the top candidate is proposed.
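a companion sketch of the matching step: because the i-th dimension of every worm vector corresponds to the i-th bilingual seed pair, vectors can be compared directly across the two languages. candidates are ranked by cosine similarity, and a top-n accuracy in the spirit of the translator-aid evaluation above can be computed. names and data layout are illustrative assumptions.

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_translations(source_vector, candidate_vectors):
    """candidate_vectors: {candidate_word: worm_vector}; returns the candidate
    words sorted from best to worst cosine match."""
    return sorted(candidate_vectors,
                  key=lambda c: cosine(source_vector, candidate_vectors[c]),
                  reverse=True)

def top_n_accuracy(test_pairs, source_vectors, candidate_vectors, n=20):
    """Fraction of source terms whose reference translation appears among the
    n best-ranked candidates (the translator-aid setting discussed above)."""
    hits = 0
    for source, reference in test_pairs:
        ranked = rank_translations(source_vectors[source], candidate_vectors)
        if reference in ranked[:n]:
            hits += 1
    return hits / len(test_pairs) if test_pairs else 0.0
```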
selectional preference and sense disambiguation absence of is a real problem for corpusbased approaches to sense disambiguation one that is unlikely to be solved soon selectional preference is traditionally connected with sense ambiguity this paper explores how a statistical model of selectional preference requiring neither manual annotation of selection restrictions nor supervised training can be used in sense disambiguation it has long been observed that selectional constraints and word sense disambiguation are closely linkedindeed the exemplar for sense disambiguation in most computational settings discussion is katz and fodor use of boolean selection restrictions to constrain semantic interpretationfor example although burgundy can be interpreted as either a color or a beverage only the latter sense is available in the context of mary drank burgundy because the verb drink specifies the selection restriction liquid for its direct objectsproblems with this approach arise however as soon as the domain of interest becomes too large or too rich to specify semantic features and selection restrictions accurately by handthis paper concerns the use of selectional constraints for automatic sense disambiguation in such broadcoverage settingsthe approach combines statistical and knowledgebased methods but unlike many recent corpusbased approaches to sense disambiguation it takes as its starting point the assumption that senseannotated training text is not availablemotivating this assumption is not only the limited availability of such text at present but skepticism that the situation will change any time soonin marked contrast to annotated training material for partofspeech tagging there is no coarselevel set of sense distinctions widely agreed upon sense annotation has a comparatively high error rate and no fully automatic method provides high enough quality output to support the quotannotate automatically correct manuallyquot methodology used to provide high volume annotation by data providers like the penn treebank project the treatment of selectional preference used here is that proposed by resnik combining statistical and knowledgebased methodsthe basis of the approach is a probabilistic model capturing the cooccurrence behavior of predicates and conceptual classes in the taxonomythe intuition is illustrated in figure 1the prior distribution prr captures the probability of a class occurring as the argument in predicateargument relation r regardless of the identity of the predicatefor example given the verbsubject relationship the prior probability for tends to be significantly higher than the prior probability for however once the identity of the predicate is taken into account the probabilities can change if the verb is buzz then the probability for can be expected to be higher than its prior and will likely be lowerin probabilistic terms it is the difference between this conditional or posterior distribution and the prior distribution that determines selectional preferenceinformation theory provides an appropriate way to quantify the difference between the prior and posterior distributions in the form of relative entropy the model defines the selectional preference strength of a predicate as intuitively sr measures how much information in bits predicate p provides about the conceptual class of its argumentthe better pr approximates pr the less influence p is having on its argument and therefore the less strong its selectional preferencegiven this definition a natural way to characterize the 
quotsemantic fitquot of a particular class as the argument to a predicate is by its relative contribution to the overall selectional preference strengthin particular classes that fit very well can be expected to have higher posterior probabilities compared to their priors as is the case for in figure 1formally selectional association is defined as this model of selectional preference has turned out to make reasonable predictions about human judgments of argument plausibility obtained by psycholinguistic methods closely related proposals have been applied in syntactic disambiguation and to automatic acquisition of more katzfodoresque selection restrictions in the form of weighted disjunctions the selectional association has also been used recently to explore apparent cases of syntactic optionality if taxonomic classes were labeled explicitly in a training corpus estimation of probabilities in the model would be fairly straightforwardbut since text corpora contain words not classes it is necessary to treat each occurrence of a word in an argument position as if it might represent any of the conceptual classes to which it belongs and assign frequency counts accordinglyat present this is done by distributing the quotcreditquot for an observation uniformly across all the conceptual classes containing an observed argumentformally given a predicateargument relationship are a predicate p and a conceptual class c where countr is the number of times word w was observed as the argument of p with respect to r and classes is the number of taxonomic classes to which w belongsgiven the frequencies probabilities are currently estimated using maximum likelihood the use of word classes is itself a form of smoothing 1 this estimation method is similar to that used by yarowsky for roget thesaurus categories and works for similar reasonsas an example consider two instances of the verbobject relationship in a training corpus drink coffee and drink winecoffee has 2 senses in the wordnet 14 noun taxonomy and belongs to 13 classes in all and wine has 2 senses and belongs to a total of 16 classesthis means that the observed countverbobj 1 will be distributed by adding a to the joint frequency with drink for each of the 13 classes containing coffeesimilarly the joint frequency with drink will be incremented by for each of the 16 classes containing winecrucially although each of the two words is ambiguous only those taxonomic classes containing both words eg receive credit for both observed instancesin general because different words are ambiguous in different ways credit tends to accumulate in the taxonomy only in those classes for which there is real evidence of cooccurrence the rest tends to disperse unsystematically resulting primarily in noisethus despite the absence of class annotation in the training text it is still possible to arrive at a usable estimate of classbased probabilitiestable 1 presents a selected sample of resnik comparison with argument plausibility judgments made by human subjectswhat is most interesting here is the way in which strongly selecting word w is typically the head of a noun phrase which could lead the model astray for example toy soldiers behave differently from soldiers in principle addressing this issue requires that noun phrases be mapped to taxonomic classes based on their compositional interpretation however such complications rarely arise in practice verbs quotchoosequot the sense of their argumentsfor example letter has 3 senses in wordnet2 and belongs to 19 classes in allin 
order to approximate its plausibility as the object of write the selectional association with write was computed for all 19 classes and the highest value returned in this case since only one sense of letter has this class as an ancestor this method of determining argument plausibility has in essence performed sense disambiguation as a side effectthis observation suggests the following simple algorithm for disambiguation by selectional preferencelet n be a noun that stands in relationship r to predicate p and let 81 sk be its possible sensesfor i from 1 to k compute c1 fele is an ancestor of s ai max ar cec and assign ai as the score for sense sithe simplest way to use the resulting scores following miller et al is as follows if n has only one sense select it otherwise select the sense si for which ai is greatest breaking ties by random choicetask and materialstest and training materials were derived from the brown corpus of american english all of which has been parsed and manually verified by the penn treebank project and parts of which have been manually sensetagged by the wordnet group a parsed sensetagged corpus was obtained by merging the wordnet sensetagged corpus with the corresponding penn treebank parses3 the rest of the brown corpus remained as a parsed but not sensetagged training setthe test set for the verbobject relationship was constructed by first training a selectional preference model on the training corpus using the treebank tgrep utility to extract verbobject pairs from parse treesthe 100 verbs that select most strongly for their objects were identified excluding verbs appearing only once in the training corpus test instances of the form were then extracted from the merged test corpus including all triples where verb was one of the 100 test verbs4 evaluation materials were obtained in the same manner for several other surface syntactic relationships including verbsubject adjectivenoun modifierhead and headmodifier baselinefollowing miller et al disambiguation by random choice was used as a baseline if a noun has one sense use it otherwise select at random among its sensesresultssince both the algorithm and the baseline may involve random choices evaluation involved multiple runs with different random seedstable 2 summarizes the results taken over 10 runs considering only ambiguous test casesall differences between the means for algorithm and baseline were statistically significantdiscussionthe results of the experiment show that disambiguation using automatically acquired selectional constraints leads to performance significantly better than random choicenot surprisingly though the results are far from what one might expect to obtain with supervised trainingin that respect the most direct point of comparison is the performance of miller et al frequency heuristic always choose the most frequent sense of a word as evaluated using the full sensetagged corpus including nouns verbs adjectives and adverbsfor ambiguous words they report 582 correct as compared to a random baseline of 268crucially however the frequency heuristic requires sensetagged training data and this paper starts from the assumption that such data are unavailablea fairer comparison therefore considers alexcluded were some inapplicable cases eg where object was a proper noun tagged as ternative unsupervised algorithms though unfortunately the literature contains more proposed algorithms than quantitative evaluations of those algorithmsone experiment where results were reported was conducted by cowie et at 
their method involved using a stochastic search procedure to maximize the overlap in dictionary definitions for alternative senses of words cooccurring in a sentencethey report an accuracy of 72 for disambiguation to the homograph level and 47 for disambiguation to the sense levelsince the task here involved wordnet sense distinctions which are rather fine grained the latter value is more appropriate for comparisontheir experiment was more general in that they did not restrict themselves to nouns on the other hand their test set involved disambiguating words taken from full sentences so the percentage correct may have been improved by the presence of unambiguous wordssussna has also looked at unsupervised disambiguation of nouns using wordnetlike cowie et al his algorithm optimizes a measure of semantic coherence over an entire sentence in this case pairwise semantic distance between nouns in the sentence as measured using the noun taxonomycomparison of results is somewhat difficult however for two reasonsfirst sussna used an earlier version of wordnet having a significantly smaller noun taxonomy second and more significant in creating the test data sussna human sensetaggers were permitted to tag a noun with as many senses as they felt were quotgoodquot rather than making a forced choice sussna develops a scoring metric based on that fact rather than requiring exact matches to a single best sensethis is quite a reasonable move but unfortunately not an option in the present experimentnonetheless some comparison is possible since he reports a quot correctquot apparently treating a sense assignment as correct if any of the quotgoodquot senses is chosen his experiments have a lower bound of about 40 correct with his algorithm performing at 5355 considering only ambiguous casesthe best results reported for an unsupervised sense disambiguation method are those of yarowsky who uses evidence from a wider context to build up a cooccurrence model using classes from roget thesaurushe reports accuracy figures in the 7299 range in disambiguating test instances involving twelve quotinterestingquot polysemous wordsas in the experiments by cowie et al the choice of coarser distinctions presumably accounts in part for the high accuracyby way of comparison some words in yarowsky test set would require choosing among ten senses in wordnet as compared to a maximum of six using the roget thesaurus categories the mean level of polysemy for the tested words is a sixway distinction in wordnet as compared to a threeway distinction in roget thesaurusas an aside a rich taxonomy like wordnet permits a more continuous view of the sense vs homograph distinctionfor example town has three senses in wordnet corresponding to an administrative district a geographical area and a group of peoplegiven town as the object of leave selectional preference will produce a tie between the first two senses since both inherit their score from a common ancestor in effect the automatic selection of a class higher in the taxonomy as having the highest score provides the same coarse category that might be provided by a homographsense distinction in another settingthe choice of coarser category varies dynamically with the context as the argument in rural town the same two senses still tie but with as the common ancestor that determines the scorein other work yarowsky has shown that local collocational information including selectional constraints can be used to great effect in sense disambiguation though his algorithm requires supervised 
trainingthe present work can be viewed as an attempt to take advantage of the same kind of information but in an unsupervised settingalthough the definition of selectional preference strength is motivated by the use of relative entropy in information theory selectional association is not the approach would benefit from experimentation with alternative statistical association measures particularly a comparison with simple mutual information and with the likelihood ratiocombining information about selectional preference could also be helpful eg where a noun is both the object of a verb and modified by an adjective though such cases are rarer than one might expectmore important is information beyond selectional preference notably the wider context utilized by yarowsky performance of the method explored here is limited at present though not surprisingly so when taken in the context of previous attempts at unsupervised disambiguation using finegrained sensesone main message to take away from this experiment is the observation that although selectional preferences are widely viewed as an important factor in disambiguation their practical broadcoverage application appears limited at least when disambiguating nouns because many verbs and modifiers simply do not select strongly enough to make a significant differencethey may provide some evidence but most likely only as a complement to other sources of information such as frequencybased priors topical context and the likemuch of this work was conducted at sun microsystems laboratories in chelmsford massachusetts
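a simplified sketch of the selectional-preference model described above: each observed (predicate, argument) pair spreads its count uniformly over the wordnet classes containing the argument noun, selectional preference strength is the relative entropy between the posterior and prior class distributions, and selectional association is a class's contribution to that strength. smoothing, the taxonomy version and the exact class inventory are simplified, and the class and method names are assumptions rather than the original implementation.

```python
import math
from collections import defaultdict
from nltk.corpus import wordnet as wn

def classes_of(noun):
    """All WordNet classes containing the noun: every sense plus its hypernyms."""
    cls = set()
    for sense in wn.synsets(noun, wn.NOUN):
        for path in sense.hypernym_paths():
            cls.update(path)
    return cls

class SelectionalModel:
    """Counts for one predicate-argument relation R (e.g. verb-object)."""
    def __init__(self):
        self.joint = defaultdict(float)   # freq(p, c), fractional credit
        self.pred = defaultdict(float)    # freq(p)
        self.prior = defaultdict(float)   # freq(c)
        self.total = 0.0

    def observe(self, predicate, noun):
        """Distribute one observation uniformly over all classes of the noun."""
        cls = classes_of(noun)
        if not cls:
            return
        credit = 1.0 / len(cls)
        for c in cls:
            self.joint[(predicate, c)] += credit
            self.prior[c] += credit
        self.pred[predicate] += 1.0
        self.total += 1.0

    def preference_strength(self, predicate):
        """S_R(p): relative entropy (in bits) between P(c|p) and the prior P(c)."""
        s = 0.0
        for (p, c), f in self.joint.items():
            if p != predicate:
                continue
            post = f / self.pred[predicate]
            prior = self.prior[c] / self.total
            if post > 0.0 and prior > 0.0:
                s += post * math.log2(post / prior)
        return s

    def association(self, predicate, cls):
        """A_R(p, c) = (1 / S_R(p)) * P(c|p) * log(P(c|p) / P(c))."""
        if self.pred[predicate] == 0.0 or self.total == 0.0:
            return 0.0
        post = self.joint[(predicate, cls)] / self.pred[predicate]
        prior = self.prior[cls] / self.total
        if post == 0.0 or prior == 0.0:
            return 0.0
        strength = self.preference_strength(predicate)
        return post * math.log2(post / prior) / strength if strength else 0.0
```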
W97-0209
selectional preference and sense disambiguation. the absence of training data is a real problem for corpusbased approaches to sense disambiguation, one that is unlikely to be solved soon. selectional preference is traditionally connected with sense ambiguity; this paper explores how a statistical model of selectional preference, requiring neither manual annotation of selection restrictions nor supervised training, can be used in sense disambiguation. we define selectional preference as the amount of information a verb provides about its semantic argument classes. we present a method to acquire a set of conceptual classes for word senses employing selectional preferences, based on the idea that certain linguistic predicates constrain the semantic interpretation of the underlying words into certain classes. in determining selectional preferences we uniformly distribute observed frequencies for a given word across all its senses.
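building on the model sketch above, the disambiguation-by-selectional-preference algorithm evaluated in the paper can be sketched as follows: score each sense of the argument noun by the highest association of any of its ancestor classes with the governing predicate, then pick the best-scoring sense, breaking ties at random. `model` is assumed to be a trained association model such as the sketch above; the function names are assumptions.

```python
import random
from nltk.corpus import wordnet as wn

def ancestors(sense):
    """The candidate sense together with all of its hypernym classes."""
    anc = {sense}
    for path in sense.hypernym_paths():
        anc.update(path)
    return anc

def disambiguate_argument(model, predicate, noun):
    senses = wn.synsets(noun, wn.NOUN)
    if len(senses) <= 1:                      # unambiguous (or unknown) noun
        return senses[0] if senses else None
    scored = []
    for sense in senses:
        # a_i = max over ancestor classes of the selectional association
        score = max(model.association(predicate, c) for c in ancestors(sense))
        scored.append((score, sense))
    best = max(s for s, _ in scored)
    return random.choice([sense for s, sense in scored if s == best])

# e.g. disambiguate_argument(model, "write", "letter") should prefer the
# document reading of "letter" once enough verb-object data has been observed.
```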
a linear observed time statistical parser based on maximum entropy models 3 a search heuristic which attempts to find the highest scoring parse tree for a given input sentence abstract this paper presents a statistical parser for natural language that obtains a parsing accuracyroughly 87 precision and 86 recallwhich surpasses the best previously published results on the wall st journal domain the parser itself requires very little human intervention since the information it uses to make parsing decisions is specified in a concise and simple manner and is combined in a fully automatic way under the maximum entropy framework running time of the parser on test sentence linear with respect to the sentence length furthermore the parser returns several scored parses for a sentence and this paper shows that a scheme to pick the best parse from the 20 highest scoring a dramatically higher accuracy of 93 precision and recall this paper presents a statistical parser for natural language that obtains a parsing accuracyroughly 87 precision and 86 recallwhich surpasses the best previously published results on the wall st journal domainthe parser itself requires very little human intervention since the information it uses to make parsing decisions is specified in a concise and simple manner and is combined in a fully automatic way under the maximum entropy frameworkthe observed running time of the parser on a test sentence is linear with respect to the sentence lengthfurthermore the parser returns several scored parses for a sentence and this paper shows that a scheme to pick the best parse from the 20 highest scoring parses could yield a dramatically higher accuracy of 93 precision and recallthis paper presents a statistical parser for natural language that finds one or more scored syntactic parse trees for a given input sentencethe parsing accuracyroughly 87 precision and 86 recall surpasses the best previously published results on the wall st journal domainthe parser consists of the following three conceptually distinct parts the maximum entropy models used here are similar in form to those in the models compute the probabilities of actions based on certain syntactic characteristics or features of the current contextthe features used here are defined in a concise and simple manner and their relative importance is determined automatically by applying a training procedure on a corpus of syntactically annotated sentences such as the penn treebank although creating the annotated corpus requires much linguistic expertise creating the feature set for the parser itself requires very little linguistic effortalso the search heuristic is very simple and its observed running time on a test sentence is linear with respect to the sentence lengthfurthermore the search heuristic returns several scored parses for a sentence and this paper shows that a scheme to pick the best parse from the 20 highest scoring parses could yield a dramatically higher accuracy of 93 precision and recallsections 2 3 and 4 describe the treebuilding procedures the maximum entropy models and the search heuristic respectivelysection 5 describes experiments with the penn treebank and section 6 compares this paper with previously published worksthe parser uses four procedures tag chunk build and check that incrementally build parse trees with their actionsthe procedures are applied in three lefttoright passes over the input sentence the first pass applies tag the second pass applies chunk and the third pass applies build and checkthe passes 
the procedures they apply and the actions of the procedures are summarized in table 1 and described belowthe actions of the procedures are designed so that any possible complete parse tree t for the input sentence corresponds to exactly one sequence of actions call this sequence the derivation of t each procedure when given a derivation d al an predicts some action ani to create a new derivation d al ani typically the procedures postulate many different values for a1 which because the parser to explore many different derivations when parsing an input sentencebut for demonstration purposes figures 17 trace one possible derivation for the sentence quoti saw the man with the telescopequot using the partofspeech tag set and constituent label set of the penn treebankthe first pass takes an input sentence shown in figure 1 and uses tag to assign each word a pos tagthe result of applying tag to each word is shown in figure 2the second pass takes the output of the first pass and uses chunk to determine the quotflatquot phrase chunks of the sentence where a phrase is quotflatquot if and only if it is a constituent whose children consist solely of pos tagsstarting from the left chunk assigns each pair a quotchunkquot tag either start x join x or otherfigure 3 shows the result after the second passthe chunk tags are then used for chunk detection in which any consecutive sequence of words vim wn true kik a a advance dxvdi applies relevant tree building procedure to d and returns list of new derivations whose action probabilities pass the threshold q insert dxh void inserts d in heap h extract h d removes and returns derivation in h with highest score completed d truefalse returns true if and only if d is a complete derivation sentence length we would expect it to run in linear observed time with respect to sentence lengthfigure 8 confirms our assumptions about the linear observed running timethe maximum entropy parser was trained on sections 2 through 21 of the penn treebank wall st journal corpus release 2 and tested on section 23 for comparison with other workall trees were stripped of their semantic tags coreference information and quotation marks for both training and testingthe parseval measures compare a proposed parse p with the corresponding correct treebank parse t as follows a constituent in p is quotcorrectquot if there exists a constituent in t of the same label that spans the same wordstable 5 shows results using the parseval measures as well as results using the slightly more forgiving measures of and table 5 shows that the maximum entropy parser performs better than the parsers presented in and 2 which have the best previously published parsing accuracies on the wall st journal domainit is often advantageous to produce the top n parses instead of just the top 1 since additional information can be used in a secondary model that reorders the top n and hopefully improves the quality of the top ranked parsesuppose there exists a quotperfectquot reranking scheme that for each sentence magically picks the best parse from the top n parses produced by the maximum entropy parser where the best parse has the highest average precision and recall when compared to the treebank parsethe performance of this quotperfectquot scheme is then an upper bound on the performance of any reranking scheme that might be used to reorder the top n parsesfigure 9 shows that the quotperfectquot scheme would achieve roughly 93 precision and recall which is a dramatic increase over the top 1 accuracy of 87 precision 
and 86 recallfigure 10 shows that the quotexact matchquot which counts the percentage of times the proposed parse p is identical to the treebank parse t rises substantially to about 53 from 30 when the quotperfectquot scheme is appliedfor this reason research into reranking schemes appears to be a promising step towards the goal of improving parsing accuracythe two parsers which have previously reported the best accuracies on the penn treebank wall st journal are the bigram parser described in and the spatter parser described in the parser presented here outperforms both the bigram parser and the spatter parser and uses different modelling technology and different information to drive its decisionsthe bigram parser is a statistical ckystyle chart parser which uses cooccurrence statistics of headmodifier pairs to find the best parsethe maximum entropy parser is a statistical shiftreduce style parser that cannot always access headmodifier pairsfor example the checkcons predicate of the maximum entropy parser may use two words such that neither is the intended head of the proposed consituent that the check procedure must judgeand unlike the bigram parser the maximum entropy parser cannot use head word information besides quotflatquot chunks in the right contextthe bigram parser uses a backedoff estimation scheme that is customized for a particular task whereas the maximum entropy parser uses a general purpose modelling techniquethis allows the maximum entropy parser to easily integrate varying kinds of features such as those for punctuation whereas the bigram parser uses handcrafted punctuation rulesfurthermore the customized estimation framework of the bigram parser must use information that has been carefully selected for its value whereas the maximum entropy framework robustly integrates any kind of information obviating the need to screen it firstthe spatter parser is a historybased parser that uses decision tree models to guide the operations of a few tree building proceduresit differs from the maximum entropy parser in how it builds trees and more critically in how its decision trees use informationthe spatter decision trees use predicates on word classes created with a statistical clustering technique whereas the maximum entropy parser uses predicates that contain merely the words themselves and thus lacks the need for a word clustering procedurefurthermore the top k bfs search heuristic appears to be much simpler than the stack decoder algorithm outlined in the maximum entropy parser presented here achieves a parsing accuracy which exceeds the best previously published results and parses a test sentence in linear observed time with respect to the sentence lengthit uses simple and concisely specified predicates which can added or modified quickly with little human effort under the maximum entropy frameworklastly this paper clearly demonstrates that schemes for reranking the top 20 parses deserve research effort since they could yield vastly better accuracy resultsmany thanks to mike collins and professor mitch marcus from the university of pennsylvania for their helpful comments on this work
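The search procedures summarized above (advance, insert, extract and completed) amount to a top-K breadth-first search over parser derivations kept in a heap. The sketch below is only a minimal reconstruction of that loop, not the original implementation: the derivation representation, the scoring function and the beam size k are placeholders, and any additional pruning of low-scoring derivations inside the heap is left out.

    import heapq
    from itertools import count

    def top_k_bfs(start_derivation, advance, completed, score, k=20):
        """Skeleton of a top-K breadth-first search over derivations.
        advance(d)   -> new derivations whose action probabilities pass a threshold
        completed(d) -> True iff d is a complete derivation (a full parse tree)
        score(d)     -> score of derivation d (e.g. product of its action probabilities)"""
        ticket = count()   # tie-breaker so the heap never has to compare derivations
        heap = [(-score(start_derivation), next(ticket), start_derivation)]
        finished = []
        while heap and len(finished) < k:
            _, _, d = heapq.heappop(heap)    # extract: highest-scoring derivation in the heap
            if completed(d):
                finished.append(d)           # keep the completed parse
                continue
            for nd in advance(d):            # apply the relevant tag/chunk/build/check procedure
                heapq.heappush(heap, (-score(nd), next(ticket), nd))
        return finished                      # several scored parses, e.g. for later reranking

Returning several scored parses rather than a single best one is what makes the 20-best reranking experiment described above possible.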
W97-0301
A Linear Observed Time Statistical Parser Based on Maximum Entropy Models. This paper presents a statistical parser for natural language that obtains a parsing accuracy (roughly 87% precision and 86% recall) which surpasses the best previously published results on the Wall St. Journal domain. The parser itself requires very little human intervention, since the information it uses to make parsing decisions is specified in a concise and simple manner and is combined in a fully automatic way under the maximum entropy framework. The observed running time of the parser on a test sentence is linear with respect to the sentence length. Furthermore, the parser returns several scored parses for a sentence, and this paper shows that a scheme to pick the best parse from the 20 highest scoring parses could yield a dramatically higher accuracy of 93% precision and recall. We introduce the idea of oracle reranking: suppose there exists a perfect reranking scheme that magically picks the best parse, that is, the parse with the highest F-score among the top k parses for each sentence. We take a pipeline approach, in that the classifiers are trained on individual decisions rather than on the overall quality of the parser, and chained to yield the global structure.
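The "perfect" (oracle) reranking scheme mentioned above is easy to state in code: for each sentence, look at the top k parses the parser returned and keep the one that scores best against the treebank parse. The sketch below assumes a hypothetical helper precision_recall(parse, gold) returning PARSEVAL-style precision and recall; it illustrates the upper bound, not the authors' code.

    def oracle_rerank(top_k_parses, gold_parse, precision_recall):
        """Return the candidate parse with the highest average of labelled
        precision and recall against the treebank (gold) parse."""
        def avg_score(parse):
            p, r = precision_recall(parse, gold_parse)
            return (p + r) / 2.0
        return max(top_k_parses, key=avg_score)

    def oracle_upper_bound(sentences, precision_recall):
        """sentences: iterable of (top_k_parses, gold_parse) pairs.
        Averages the oracle score over the corpus; this is an upper bound on
        what any reranking scheme over the same k-best lists could achieve."""
        scores = []
        for top_k, gold in sentences:
            best = oracle_rerank(top_k, gold, precision_recall)
            p, r = precision_recall(best, gold)
            scores.append((p + r) / 2.0)
        return sum(scores) / len(scores)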
global thresholding and multiplepass parsing we present a variation on classic beam thresholding techniques that is up to an order of magnitude faster than the traditional method at the same performance level we also present a new thresholding technique global thresholding which combined with the new beam thresholding gives an additional factor of two improvement and a novel technique multiple pass parsing that can be combined with the others to yield yet another 50 improvement we use a new search algorithm to simultaneously optimize the thresholding parameters of the various algorithms in this paper we examine thresholding techniques for statistical parserswhile there exist theoretically efficient algorithms for parsing probabilistic contextfree grammars and related formalisms practical parsing algorithms usually make use of pruning techniques such as beam thresholding for increased speedwe introduce two novel thresholding techniques global thresholding and multiplepass parsing and one significant variation on traditional beam thresholdingwe examine the value of these techniques when used separately and when combinedin order to examine the combined techniques we also introduce an algorithm for optimizing the settings comments on earlier drafts and the anonymous reviewers for their extensive comments of multiple thresholdswhen all three thresholding methods are used together they yield very significant speedups over traditional beam thresholding while achieving the same level of performancewe apply our techniques to cky chart parsing one of the most commonly used parsing methods in natural language processingin a cky chart parser a twodimensional matrix of cells the chart is filled ineach cell in the chart corresponds to a span of the sentence and each cell of the chart contains the nonterminals that could generate that spancells covering shorter spans are filled in first so we also refer to this kind of parser as a bottomup chart parserthe parser fills in a cell in the chart by examining the nonterminals in lower shorter cells and combining these nonterminals according to the rules of the grammarthe more nonterminals there are in the shorter cells the more combinations of nonterminals the parser must considerin some grammars such as pcfgs probabilities are associated with the grammar rulesthis introduces problems since in many pcfgs almost any combination of nonterminals is possible perhaps with some low probabilitythe large number of possibilities can greatly slow parsingon the other hand the probabilities also introduce new opportunitiesfor instance if in a particular cell in the chart there is some nonterminal that generates the span with high probability and another that generates that span with low probability then we can remove the less likely nonterminal from the cellthe less likely nonterminal will probably not be part of either the correct parse or the tree returned by the parser so removing it will do little harmthis technique is called beam thresholdingif we use a loose beam threshold removing only those nonterminals that are much less probable than the best nonterminal in a cell our parser will run only slightly faster than with no thresholding while performance measures such as precision and recall will remain virtually unchangedon the other hand if we use a tight threshold removing nonterminals that are almost as probable as the best nonterminal in a cell then we can get a considerable speedup but at a considerable costfigure 1 shows the tradeoff between accuracy and timein 
this paper we will consider three different kinds of thresholdingthe first of these is a variation on traditional beam searchin traditional beam search only the probability of a nonterminal generating the terminals of the cell span is usedwe have found that a minor variation introduced in section 2 in which we also consider the prior probability that each nonterminal is part of the correct parse can lead to nearly an order of magnitude improvementthe problem with beam search is that it only compares nonterminals to other nonterminals in the same cellconsider the case in which a particular cell contains only bad nonterminals all of roughly equal probabilitywe cannot threshold out these nodes because even though they are all bad none is much worse than the bestthus what we want is a thresholding technique that uses some global information for thresholding rather than just using information in a single cellthe second kind of thresholding we consider is a novel technique global thresholding described in section 3global thresholding makes use of the observation that for a nonterminal to be part of the correct parse it must be part of a sequence of reasonably probable nonterminals covering the whole sentencethe last technique we consider multiplepass parsing is introduced in section 4the basic idea is that we can use information from parsing with one grammar to speed parsing with anotherwe run two passes the first of which is fast and simple eliminating from consideration many unlikely potential constituentsthe second pass is more complicated and slower but also more accuratebecause we have already eliminated many nodes in our first pass the second pass can run much faster and despite the fact that we have to run two passes the added savings in the second pass can easily outweigh the cost of the first oneexperimental comparisons of these techniques show that they lead to considerable speedups over traditional thresholding when used separatelywe also wished to combine the thresholding techniques this is relatively difficult since searching for the optimal thresholding parameters in a multidimensional space is potentially very time consumingwe designed a variant on a gradient descent search algorithm to find the optimal parametersusing all three thresholding methods together and the parameter search algorithm we achieved our best results running an estimated 30 times faster than traditional beam search at the same performance levelthe first and simplest technique we will examine is beam thresholdingwhile this technique is used as part of many search algorithms beam thresholding with pcfgs is most similar to beam thresholding as used in speech recognitionbeam thresholding is often used in statistical parsers such as that of collins consider a nonterminal x in a cell covering the span of terminals titkwe will refer to this as node int and denote it by however this does not give information about the probability of the node in the context of the full parse treefor instance two nodes one an np and the other a frau may have equal inside probabilities but since there are far more nps than there are frau clauses the np node is more likely overalltherefore we must consider more information than just the inside probabilitythe outside probability of a node nfk is the probability of that node given the surrounding terminals of the sentence iep which we denote by aideally we would multiply the inside probability by the outside probability and normalizethis product would give us the overall probability that 
the node is part of the correct parseunfortunately there is no good way to quickly compute the outside probability of a node during bottomup chart parsing thus we instead multiply the inside probability simply by the prior probability of the nonterminal type p which is an approximation to the outside probabilityour final thresholding measure is px3in section 74 we will show experiments comparing insideprobability beam thresholding to beam thresholding using the inside probability times the priorusing the prior can lead to a speedup of up to a factor of 10 at the same performance levelto the best of our knowledge using the prior probability in beam thresholding is new although not particularly insightful on our partcollins independently observed the usefulness of this modification and caraballo and charniak used a related technique in a bestfirst parserwe think that the main reason this technique was not used sooner is that beam thresholding for pcfgs is derived from beam thresholding in speech recognition using hidden markov models in an hmm the forward probability of a given state corresponds to the probability of reaching that state from the start statethe probability of eventually reaching the final state from any state is always 1thus the forward probability is all that is neededthe same is true in some top down probabilistic parsing algorithms such as stochastic versions of earley algorithm however in a bottomup algorithm we need the extra factor that indicates the probability of getting from the start symbol to the nonterminal in question which we approximate by the prior probabilityas we noted this can be very different for different nonterminalsas mentioned earlier the problem with beam thresholding is that it can only threshold out the worst nodes of a cellit cannot threshold out an entire cell even if there are no good nodes in itto remedy this problem we introduce a novel thresholding technique global thresholdingthe key insight of global thresholding is due to rayner and carter rayner et al noticed that a particular node cannot be part of the correct parse if there are no nodes in adjacent cellsin fact it must be part of a sequence of nodes stretching from the start of the string to the endin a probabilistic framework where almost every node will have some probability we can rephrase this requirement as being that the node must be part of a reasonably probable sequencefigure 2 shows an example of this insightnodes a b and c will not be thresholded out because each is part of a sequence from the beginning to the end of the charton the other hand nodes x y and z will be thresholded out because none is part of such a sequencerayner et al used this insight for a hierarchical nonrecursive grammar and only used their technique to prune after the first level of the grammarthey computed a score for each sequence as the minimum of the scores of each node in the sequence and computed a score for each node in the sequence as the minimum of three scores one based on statistics about nodes to the left one based on nodes to the right and one based on unigram statisticswe wanted to extend the work of rayner et al to general pcfgs including those that were recursiveour approach therefore differs from theirs in many waysrayner et al ignore the inside probabilities of nodes while this may work after processing only the first level of a grammar when the inside probabilities will be relatively homogeneous it could cause problems after other levels when the inside probability of a node will give 
important information about its usefulnesson the other hand because long nodes will tend to have low inside probabilities taking the minimum of all scores strongly favors sequences of short nodesfurthermore their algorithm requires time 0 to run just oncethis is acceptable if the algorithm is run only after the first level but running it more often would lead to an overall run time of 0finally we hoped to find an algorithm that was somewhat less heuristic in natureour global thresholding technique thresholds out node n if the ratio between the most probable sequence of nodes including node n and the overall most probable sequence of nodes is less than some threshold t formally denoting sequences of nodes by l we threshold node n if now the hard part is determining p the probability of a node sequenceunfortunately there is no way to do this efficiently as part of the intermediate computation of a bottomup chart parserwe will approximate p as follows ome other parsing techniques such as stochastic versions of earley parsers efficiently compute related probabilities but we will not explore these parsers herewe confess that our real interest is in more complicated grammars such as those that use head wordsgrammars such as these can best be parsed bottom upthat is we assume independence between the elements of a sequencethe probability of node l nx is just its prior probability times its inside 3k probability as beforethe most important difference between global thresholding and beam thresholding is that global thresholding is global any node in the chart can help prune out any other nodein stark contrast beam thresholding only compares nodes to other nodes covering the same spanbeam thresholding typically allows tighter thresholds since there are fewer approximations but does not benefit from global informationglobal thresholding is performed in a bottomup chart parser immediately after each length is completedit thus runs n times during the course of parsing a sentence of length n we use the simple dynamic programming algorithm in figure 3there are 0 nodes in the chart and each node is examined exactly three times so the run time of this algorithm is 0the first section of the algorithm works forwards computing for each i f i which contains the score of the best sequence covering terminals titii thus fn1 contains the score of the best sequence covering the whole sentence maxi pthe algorithm works analogously to the viterbi algorithm for hmmsthe second section is analogous but works backwards computing bi which contains the score of the best sequence covering terminals titn once we have computed the preceding arrays computing maxilivel p is straightforwardwe simply want the score of the best sequence covering the nodes to the left of n fnstart times the score of the node itself times the score of the best sequence of nodes fromn isetnargtniength to the end which is just bnstart n thusing this expression we can threshold each node quicklysince this algorithm is run n times during the course of parsing and requires time 0 each time it runs the algorithm requires time 0 overallexperiments will show that the time it saves easily outweighs the time it usesin this section we discuss a novel thresholding technique multiplepass parsingwe show that multiplepass parsing techniques can yield large speedupsmultiplepass parsing is a variation on a new technique in speech recognition multiplepass speech recognition which we introduce firstin an idealized multiplepass speech recognizer we first run a simple 
pass computing the forward and backward probabilitiesthis first pass runs relatively quicklywe can use information from this simple fast first pass to eliminate most states and then run a more complicated slower second pass that does not examine states that were deemed unlikely by the first passthe extra time of running two passes is more than made up for by the time saved in the second passthe mathematics of multiplepass recognition is fairly simplein the first simple pass we record the forward probabilities oi and backward probabilities 3 of each state i at each time t now a and a respectivelyrememthat s the probability that na is in the correct parse thus we run our first pass computing this expression for each nodewe can then eliminate from consideration in our later passes all nodes for which the probability of being in the correct parse was too small in the first passof course for our second pass to be more accurate it will probably be more complicated typically containing an increased number of nonterminals and productionsthus we create a mapping function for length 2 to n for start i to n length i such that p l r add p to chartlengthstart from each first pass nonterminal to a set of second pass nonterminals and threshold out those second pass nonterminals that map from lowscoring first pass nonterminalswe call this mapping function the descendants function2 there are many possible examples of first and second pass combinationsfor instance the first pass could use regular nonterminals such as np and vp and the second pass could use nonterminals augmented with headword informationthe descendants function then appends the possible head words to the first pass nonterminals to get the second pass oneseven though the correspondence between forwardbackward and insideoutside probabilities is very close there are important differences between speechrecognition hmms and naturallanguage processing pcfgsin particular we have found that it is more important to threshold productions than nonterminalsthat is rather than just noticing that a particular nonterminal vp spanning the words quotkilled the rabbitquot is very likely we also note that the production vp 4 v np is likelyboth the first and second pass parsing algorithms are simple variations on cky parsingin the first pass we now keep track of each production instance associated with a node iea n no computing the inside and outside probabilities of eachthe second pass requires more changeslet us denote the descendants of nonterminal x by x1xxin the second pass for each production 21n this paper we will assume that each second pass nonterminal can descend from at most one first pass nonterminal in each cellthe grammars used here have this propertyif this assumption is violated multiplepass parsing is still possible but some of the algorithms need to be changed of the form n nia ni11 in the first pass that was not thresholded out by multipass thresholding beam thresholding etc we consider every descendant production instance that is all those of the form np ni nkz1i for appropriate values of p q r this algorithm is given in figure 4 which uses a current pass matrix chart to keep track of nonterminals in the current pass and a previous pass matrix prey chart to keep track of nonterminals in the previous passwe use one additional optimization keeping track of the descendants of each nonterminal in each cell in prevchart which are in the corresponding cell of chartwe tried multiplepass thresholding in two different waysin the first technique we 
tried productioninstance thresholding we remove from consideration in the second pass the descendants of all production instances whose combined insideoutside probability falls below a thresholdin the second technique node thresholding we remove from consideration the descendants of all nodes whose insideoutside probability falls below a thresholdin our pilot experiments we found that in some cases one technique works slightly better and in some cases the other doeswe therefore ran our experiments using both thresholds togetherone nice feature of multiplepass parsing is that under special circumstances it is an admissible search technique meaning that we are guaranteed to find the best solution with itin particular if we parse using no thresholding and our grammars have the property that for every nonzero probability parse in the second pass there is an analogous nonzero probability parse in the first pass then multiplepass search is admissibleunder these circumstances no nonzero probability parse will be thresholded out but many zero probability parses may be removed from considerationwhile we will almost always wish to parse using thresholds it is nice to know that multiplepass parsing can be seen as an approximation to an admissible technique where the degree of approximation is controlled by the thresholding parameterthe use of any one of these techniques does not exclude the use of the othersthere is no reason that we cannot use beam thresholding global thresholding and multiplepass parsing all at the same timein general it would not make sense to use a technique such as multiplepass parsing without other thresholding techniques our first pass would be overwhelmingly slow without some sort of thresholdingthere are however some practical considerationsto optimize a single threshold we could simply sweep our parameters over a one dimensional range and pick the best speed versus performance tradeoffin combining multiple techniques we need to find optimal combinations of thresholding parametersrather than having to examine 10 values in a single dimensional space we might have to examine 100 combinations in a two dimensional spacelater we show experiments with up to six thresholdssince we do not have time to parse with one million parameter combinations we need a better search algorithmideally we would like to be able to pick a performance level and find the best set of thresholds for achieving that performance level as quickly as possibleif this is our goal then a normal gradient descent technique will not work since we cannot use such a technique to optimize one function of a set of variables while holding another one constant 3 we wanted a metric of performance which would be sensitive to changes in threshold valuesin particular our ideal metric would be strictly increasing as our thresholds loosened so that every loosening of threshold values would produce a measurable increase in performancethe closer we get to this ideal the fewer sentences we need to test during parameter optimizationwe tried an experiment in which we ran beam thresholding with a tight threshold and then a loose threshold on all sentences of section 0 of length np vp and then later computes some better probability for the np it must update the probability for the s as wellthis could propagate through much of the chartto remedy this caraballo et al only propagated probabilities that caused a large enough change also the question of when an agendabased system should stop is a little discussed issue and difficult since 
there is no obvious stopping criterionbecause of these issues we chose not to implement an agendabased system for comparisonas mentioned earlier rayner and carter describe a system that is the inspiration for global thresholdingbecause of the limitation of their system to nonrecursive grammars and the other differences discussed in section 3 global thresholding represents a significant improvementcollins uses two thresholding techniquesthe first of these is essentially beam thresholding for each rule p r if nonterminal l in left cell if nonterminal r in right cell add p to parent cell algorithm one for each nonterminal l in left cell for eachnonterminal r in right cell for each rule p l r add p to parent cell without a priorin the second technique there is a constant probability thresholdany nodes with a probability below this threshold are prunedif the parse fails parsing is restarted with the constant loweredwe attempted to duplicate this technique but achieved only negligible performance improvementscollins reports a 38 speedup when this technique is combined with loose beam thresholding compared to loose beam thresholding aloneperhaps our lack of success is due to differences between our grammars which are fairly different formalismswhen collins began using a formalism somewhat closer to ours he needed to change his beam thresholding to take into account the prior so this is not unlikelyhwa using a model similar to pcfgs stochastic lexicalized tree insertion grammars also was not able to obtain a speedup using this techniquethere is previous work in the speech recognition community on automatically optimizing some parameters however this previous work differed significantly from ours both in the techniques used and in the parameters optimizedin particular previous work focused on optimizing weights for various components such as the language model componentin contrast we optimize thresholding parametersprevious techniques could not be used or easily adapted to thresholding parametersthe inner loop of the cky algorithm which determines for every pair of cells what nodes must be added to the parent can be written in several different wayswhich way this is done interacts with thresholding techniquesthere are two possibilities as shown in figure 7we used the second technique since the first technique gets no speedup from most thresholding systemsall experiments were trained on sections 218 of the penn treebank version iia few were tested where noted on the first 200 sentences of section 00 of length at most 40 wordsin one experiment we used the first 15 of length at most 40 and in the remainder of our experiments we used those sentences in the first 1001 of length at most 40our parameter optimization algorithm always used the first 31 sentences of length at most 40 words from section 19we ran some experiments on more sentences but there were three sentences in this larger test set that could not be parsed with beam thresholding even with loose settings of the threshold we therefore chose to report the smaller test set since it is difficult to compare techniques which did not parse exactly the same sentenceswe needed several grammars for our experiments so that we could test the multiplepass parsing algorithmthe grammar rules and their associated probabilities were determined by reading them off of the training section of the treebank in a manner very similar to that used by charniak the main grammar we chose was essentially of the following form that is our grammar was binary branching 
except that we also allowed unary branching productionsthere were never more than five subscripted symbols for any nonterminal although there could be fewer than five if there were fewer than five symbols remaining on the right hand sidethus our grammar was a kind of 6gram model on symbols in the grammar4 figure 8 shows an example of how we converted trees to binary branching with our grammarwe refer to this grammar as the 6gram grammarthe terminals of the grammar were the partofspeech symbols in the treebankany experiments that do not mention which grammar we used were run with the 6gram grammarfor a simple grammar we wanted something that would be very fastthe fastest grammar we can think of we call the terminal grammar because it has one nonterminal for each terminal symbol in the alphabetthe nonterminal symbol indicates the first terminal in its spanthe parses are binary branching in the same way that the 6gram grammar parses arefigure 9 shows how to convert a parse tree to the terminal grammarsince there is only one nonterminal possible for each cell of the chart parsing is quick for this grammarfor technical and practical reasons we actually wanted a marginally more complicated grammar which included the quotprimequot symbol of the 6gram grammar indicating that a cell is part of the same constituent as its parenttherefore we doubled the size of the grammar so that there would be both primed and nonprimed versions of each terminal we call this the terminalprime grammar and also show how to convert to it in figure 9this is the grammar we actually used as the first pass in our multiplepass parsing experimentsthe goal of a good thresholding algorithm is to trade off correctness for increased speedwe must thus measure both correctness and speed and there are some subtleties to measuring eachfirst the traditional way of measuring correctness is with metrics such as precision and recallunfortunately there are two problems with these measuresfirst they are two numbers neither useful without the othersecond they are subject to considerable noisein pilot experiments we found that as we changed our thresholding values monotonically precision and recall changed nonmonotonically we attribute this to the fact that we must choose a single parse from our parse forest and as we tighten a thresholding parameter we may threshold out either good or bad parsesfurthermore rather than just changing precision or recall by a small amount a single thresholded item may completely change the shape of the resulting treethus precision and recall are only smooth with very large sets of test datahowever because of the large number of experiments we wished to run using a large set of test data was not feasiblethus we looked for a surrogate measure and decided to use the total inside probability of all parses which with no thresholding is just the probability of the sentence given the modelif we denote the total inside probability with no thresholding by i and the total inside probability with thresholding by it then il is the probability that we did not threshold out the correct parse given the modelthus maximizing it should maximize correctnesssince probabilities can become very small we instead minimize entropies the negative logarithm of the probabilitiesfigure 11 shows that with a large data set entropy correlates well with precision and recall and that with smaller sets it is much smootherentropy is smoother because it is a function of many more variables in one experiment there were about 16000 constituents 
which contributed to precision and recall measurements versus 151 million productions potentially contributing to entropythus we choose entropy as our measure of correctness for most experimentswhen we did measure precision and recall we used the metric as defined by collins note that the fact that entropy changes smoothly and monotonically is critical for the performance of the multiple parameter optimization algorithmfurthermore we may have to run quite a few iterations of that algorithm to get convergence so the fact that entropy is smooth for relatively small numbers of sentences is a large helpthus the discovery that entropy is a good surrogate for precision and recall is nontrivialthe same kinds of observations could be extended to speech recognition to optimize multiple thresholds there a topic for future researchnote that for some sentences with too tight thresholding the parser will fail to find any parse at allwe dealt with these cases by restarting the parser with all thresholds lowered by a factor of 5 itffating this loosening until a parse could be foundthis is why for some tight thresholds the parser may be slower than with looser thresholds the sentence has to be parsed twice once with tight thresholds and once with loose onesnext we needed to choose a measure of timethere are two obvious measures amount of work done by the parser and elapsed timeif we measure amount of work done by the parser in terms of the number of productions with nonzero probability examined by the parser wehave a fairly implementationindependent machineindependent measure of speedon the other hand because we used many different thresholding algorithms some with a fair amount of overhead this measure seems inappropriatemultiplepass parsing requires use of the outside algorithm global thresholding uses its own dynamic programming algorithm and even beam thresholding has some pernode overheadthus we will give most measurements in terms of elapsed time not including loading the grammar and other 0 overheadwe did want to verify that elapsed time was a reasonable measure so we did a beam thresholding experiment to make sure that elapsed time and number of productions examined were well correlated using 200 sentences and an exponential sweep of the thresholding parameterthe results shown in figure 10 clearly indicate that time is a good proxy for productions examinedour first goal was to show that entropy is a good surrogate for precision and recallwe thus tried two experiments one with a relatively large test set of 200 sentences and one with a relatively small test set of 15 sentencespresumably the 200 sentence test set should be much less noisy and fairly indicative of performancewe graphed both precision and recall and entropy versus time as we swept the thresholding parameter over a sequence of valuesthe results are in figure 11as can be seen entropy is significantly smoother than precision and recall for both size test corporaour second goal was to check that the prior probability is indeed helpfulwe ran two experiments one with the prior and one withoutsince the experiments without the prior were much worse than those with it all other beam thresholding experiments included the priorthe results shown in figure 12 indicate that the prior is a critical componentthis experiment was run on 200 sentences of test datanotice that as the time increases the data tends to approach an asymptote as shown in the left hand graph of figure 12in order to make these small asymptotic changes more clear we wished to 
expand the scale towards the asymptotethe right hand graph was plotted with this expanded scale based on log a slight variation on a normal log scalewe use this scale in all the remaining entropy graphsa normal logarithmic scale is used for the time axisthe fact that the time axis is logarithmic is especially useful for determining how much more efficient one algorithm is than another at a given performance levelif one picks a performance level on the vertical axis then the distance between the two curves at that level represents the ratio between their speedsthere is roughly a factor of 8 to 10 difference between using the prior and not using it at all graphed performance levels with a slow trend towards smaller differences as the thresholds are loosenedwe tried experiments comparing global thresholding to beam thresholdingfigure 13 shows the results of this experiment and later experimentsin the best case global thresholding works twice as well as beam thresholding in the sense that to achieve the same level of performance requires only half as much time although smaller improvements were more typicalwe have found that in general global thresholding works better on simpler grammarsin some complicated grammars we explored in other work there were systematic strong correlations between nodes which violated the independence approximation used in global thresholdingthis prevented us from using global thresholding with these grammarsin the future we may modify global thresholding to model some of these correlationswhile global thresholding works better than beam thresholding in general each has its own strengthsglobal thresholding can threshold across cells but because of the approximations used the thresholds must generally be looserbeam thresholding can only threshold within a cell but can do so fairly tightlycombining the two offers the potential to get the advantages of bothwe ran a series of experiments using the thresholding optimization algorithm of section 5figure 13 gives the resultsthe combination of beam and global thresholding together is clearly better than either alone in some cases running 40 faster than global thresholding alone while achieving the same performance levelthe combination generally runs twice as fast as beam thresholding alone although up to a factor of threemultiplepass parsing improves even further on our experiments combining beam and global thresholdingnote that we used both beam and global thresholding for both the first and second pass in these experimentsthe first pass grammar was the very simple terminalprime grammar and the second pass grammar was the usual 6gram grammarwe evaluated multiplepass parsing slightly differently from the other thresholding techniquesin the experiments conducted here our first and second pass grammars were very different from each otherfor a given parse to be returned it must be in the intersection of both grammars and reasonably likely according to bothsince the first and second pass grammars capture different information parses which are likely according to both are especially goodthe entropy of a sentence measures its likelihood according to the second pass but ignores the fact that the returned parse must also be likely according to the first passthus entropy our measure in the previous experiments which measures only likelihood according to the final pass is not necessarily the right measure to usewe therefore give precision and recall results in this sectionwe still optimized our thresholding parameters using the same 31 
sentence held out corpus and minimizing entropy versus number of productions as beforewe should note that when we used a first pass grammar that captured a strict subset of the information in the second pass grammar we have found that entropy is a very good measure of performanceas in our earlier experiments it tends to be well correlated with precision and recall but less subject to noiseit is only because of the grammar mismatch that we have changed the evaluationfigure 14 shows precision and recall curves for single pass versus multiple pass experimentsas in the entropy curves we can determine the performance ratio by looking across horizontallyfor instance the multipass recognizer achieves a 74 recall level using 2500 seconds while the best single pass algorithm requires about 4500 seconds to reach that leveldue to the noise resulting from precision and recall measurements it is hard to exactly quantify the advantage from multiple pass parsing but it is generally about 50in this paper we only considered applying multiplepass and global thresholding techniques to parsing probabilistic contextfree grammarshowever just about any probabilistic grammar formalism for which inside and outside probabilities can be computed can benefit from these techniquesfor instance probabilistic link grammars could benefit from our algorithmswe have however had trouble using global thresholding with grammars that strongly violated the independence assumptions of global thresholdingone especially interesting possibility is to apply multiplepass techniques to formalisms that require 0 parsing time such as stochastic bracketing transduction grammar and stochastic tree adjoining grammars sbtg is a contextfreelike formalism designed for translation from one language to another it uses a four dimensional chart to index spans in both the source and target language simultaneouslyit would be interesting to try speeding up an sbtg parser by running an 0 first pass on the source language alone and using this to prune parsing of the full sbtgthe stag formalism is a mildly contextsensitive formalism requiring 0 time to parsemost stag productions in practical grammars are actually contextfreethe traditional way to speed up stag parsing is to use the contextfree subset of an stag to form a stochastic tree insertion grammar an 0 formalism but this method has problems because the stig undergenerates since it is missing some elementary treesa different approach would be to use multiplepass parsingwe could first find a contextfree covering grammar for the stag and use this as a first pass and then use the full stag for the second passthe grammars described here are fairly simple presented for purposes of explicationin other work in preparation in which we have used a significantly more complicated grammar which we call the probabilistic feature grammar the improvements from multiplepass parsing are even more dramatic single pass experiments are simply too slow to run at allwe have also found the automatic thresholding parameter optimization algorithm to be very usefulbefore writing the parameter optimization algorithm we developed the pfg grammar and the multiplepass parsing technique and ran a series of experiments using hand optimized parameterswe recently ran the optimization algorithm and reran the experiments achieving a factor of two speedup with no performance losswhile we had not spent a great deal of time hand optimizing these parameters we are very encouraged by the optimization algorithm practical utilitythis paper 
introduces four new techniques beam thresholding with priors global thresholding multiplepass parsing and automatic search for thresholding parametersbeam thresholding with priors can lead to almost an order of magnitude improvement over beam thresholding without priorsglobal thresholding can be up to three times as efficient as the new beam thresholding technique although the typical improvement is closer to 50when global thresholding and beam thresholding are combined they are usually two to three times as fast as beam thresholding alonemultiplepass parsing can lead to up to an additional 50 improvement with the grammars in this paperwe expect the parameter optimization algorithm to be broadly useful
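The central beam-thresholding decision described in this paper, using the inside probability times the prior probability of the nonterminal as the score and comparing it to the best score in the same cell, can be sketched in a few lines. The chart-cell and prior representations below are assumptions for illustration, not the paper's data structures.

    def beam_threshold_cell(cell, prior, beam):
        """cell:  {nonterminal: inside probability} for one span (cell) of the chart.
        prior: {nonterminal: P(nonterminal)}, approximating the missing outside factor.
        beam:  ratio in (0, 1); a node is kept only if its score is at least
               beam times the best score in the cell (values near 1 prune tightly)."""
        scores = {nt: inside * prior[nt] for nt, inside in cell.items()}
        if not scores:
            return cell
        best = max(scores.values())
        return {nt: cell[nt] for nt, s in scores.items() if s >= beam * best}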
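Global thresholding, the forward/backward dynamic program summarized above (the paper's Figure 3), can be sketched as two Viterbi-like passes plus a per-node test. The sketch assumes nodes are given as (start, length, score) triples with score equal to the prior times the inside probability, and it treats a sequence's score as the product of independent node scores, as the paper's approximation does.

    def global_threshold(nodes, n, t):
        """nodes: list of (start, length, score) for the spans completed so far,
                  where score = prior(nonterminal) * inside probability.
        n:     sentence length in words.
        t:     threshold ratio in (0, 1); larger values prune more aggressively.
        A node survives if the best sentence-covering sequence through it is
        within a factor t of the best sentence-covering sequence overall."""
        f = [0.0] * (n + 1)   # f[i]: best score of a node sequence covering words 0 .. i-1
        b = [0.0] * (n + 1)   # b[i]: best score of a node sequence covering words i .. n-1
        f[0], b[n] = 1.0, 1.0
        for start, length, score in sorted(nodes):                 # forward pass
            f[start + length] = max(f[start + length], f[start] * score)
        for start, length, score in sorted(nodes, reverse=True):   # backward pass
            b[start] = max(b[start], score * b[start + length])
        best_total = f[n]
        if best_total == 0.0:
            return nodes      # no covering sequence yet; prune nothing
        return [(s, l, sc) for (s, l, sc) in nodes
                if f[s] * sc * b[s + l] >= t * best_total]

Because any node in the chart can lower the score of any other node's best covering sequence, this test is global, unlike beam thresholding, which only compares nodes within one cell.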
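Multiple-pass parsing prunes second-pass nonterminals whose first-pass ancestors looked unlikely. A hedged sketch of that pruning step follows: descendants() stands for the mapping function described in the paper, the first-pass score used here is the inside-outside product normalized by the sentence probability (the probability that a node is in the correct parse), and the chart representation is a placeholder.

    def multipass_allowed(first_pass_chart, sentence_prob, descendants, threshold):
        """first_pass_chart: {(start, length, nonterminal): (inside, outside)}
        sentence_prob:    total inside probability of the sentence in the first pass
        descendants:      maps a first-pass nonterminal to its second-pass nonterminals
        threshold:        minimum probability of being in the correct parse.
        Returns the set of (start, length, second_pass_nonterminal) entries the
        second pass may build; everything else is pruned before it runs."""
        allowed = set()
        for (start, length, nt), (inside, outside) in first_pass_chart.items():
            p_in_correct_parse = inside * outside / sentence_prob
            if p_in_correct_parse >= threshold:
                for nt2 in descendants(nt):
                    allowed.add((start, length, nt2))
        return allowed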
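The correctness surrogate used in the experiments, the probability that the correct parse was not thresholded out, expressed as a difference of entropies, can be written down directly. The sketch assumes the two total inside probabilities are already available; it is only an illustration of the measure, not of how it was accumulated over the test set.

    import math

    def pruning_loss_bits(inside_total, inside_total_thresholded):
        """inside_total:             total inside probability with no thresholding (I)
        inside_total_thresholded: total inside probability with thresholding (I_t)
        Returns -log2(I_t / I), the entropy lost to thresholding; 0.0 means no
        probability mass for the sentence was pruned away."""
        return -math.log(inside_total_thresholded / inside_total, 2)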
W97-0302
Global Thresholding and Multiple-Pass Parsing. We present a variation on classic beam thresholding techniques that is up to an order of magnitude faster than the traditional method at the same performance level. We also present a new thresholding technique, global thresholding, which, combined with the new beam thresholding, gives an additional factor of two improvement, and a novel technique, multiple-pass parsing, that can be combined with the others to yield yet another 50% improvement. We use a new search algorithm to simultaneously optimize the thresholding parameters of the various algorithms. We describe a method for producing a simple but crude approximate grammar of a standard context-free grammar.
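The summary above mentions a search algorithm for jointly setting the thresholding parameters. The exact gradient-descent variant is not spelled out here, so the sketch below is only a generic coordinate-wise search under the same interface: parse_cost(params) is a hypothetical function returning (entropy, seconds) on a small held-out set, and the goal is the cheapest parameter vector whose entropy stays at or below a target performance level.

    def tune_thresholds(initial, parse_cost, target_entropy, steps=(0.5, 2.0), rounds=10):
        """initial:        dict of threshold name -> starting value
        parse_cost:     function(params) -> (entropy, seconds) on held-out sentences
        target_entropy: performance level that must not be exceeded
        steps:          multiplicative moves tried per threshold (tighten / loosen).
        Greedy coordinate search: scale one threshold at a time and keep any move
        that preserves the entropy target while reducing parsing time."""
        params = dict(initial)
        best_entropy, best_time = parse_cost(params)
        for _ in range(rounds):
            improved = False
            for name in initial:
                for factor in steps:
                    trial = dict(params)
                    trial[name] = params[name] * factor
                    entropy, seconds = parse_cost(trial)
                    if entropy <= target_entropy and seconds < best_time:
                        params, best_entropy, best_time = trial, entropy, seconds
                        improved = True
            if not improved:
                break
        return params, best_entropy, best_time

Using entropy rather than precision and recall as the objective matters here: as the paper notes, entropy changes smoothly and monotonically with the thresholds, so a small held-out set is enough to guide the search.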
automatic discovery of noncompositional compounds in parallel data automatic segmentation of text into minimal contentbearing units is an unsolved problem even for languages like english spaces between words offer an easy first approximation but this approximation is not good enough for machine translation where many word sequences are not translated wordforword this paper presents an efficient automatic method for discovering sequences of words that are translated as a unit the method proceeds by comparing pairs of statistical translation models induced from parallel texts in two languages it can discover hundreds of noncompositional compounds on each iteration and constructs longer compounds out of shorter ones objective evaluation on a simple machine translation task has shown the method potential to improve the quality of mt output the method makes few assumptions about the data so it can be applied to parallel data other than parallel texts such as word spellings and pronunciations the optimal way to analyze linguistic data into its primitive elements is rarely obvious but often crucialidentifying phones and words in speech has been a major focus of researchautomatically finding words in text the problem addressed here is largely unsolved for languages such as chinese and thai which are written without spaces spaces in texts of languages like english offer an easy first approximation to minimal contentbearing unitshowever this approximation misanalyzes noncompositional compounds such as quotkick the bucketquot and quothot dogquot nccs are compound words whose meanings are a matter of convention and cannot be synthesized from the meanings of their spacedelimited componentstreating nccs as multiple words degrades the performance of machine translation information retrieval natural language generation and most other nlp applicationsnccs are usually not translated literally to other languagestherefore one way to discover nccs is to induce and analyze a translation model between two languagesthis paper is about an informationtheoretic approach to this kind of ontological discoverythe method is based on the insight that treatment of nccs as multiple words reduces the predictive power of translation modelswhether a given sequence of words is an ncc can be determined by comparing the predictive power of two translation models that differ on whether they treat the word sequence as an nccsearching a space of data models in this manner has been proposed before egby brown et al and wang et al but their particular methods have been limited by the computational expense of inducing data models and the typically vast number of potential nccs that need to be testedthe method presented here overcomes this limitation by making independence assumptions that allow hundreds of nccs to be discovered from each pair of induced translation modelsit is further accelerated by heuristics for gauging the a priori likelihood of validation for each candidate nccthe predictive power of a translation model depends on what the model is meant to predictthis paper considers two different applications of translation models and their corresponding objective functionsthe different objective functions lead to different mathematical formulations of predictive power different heuristics for estimating predictive power and different classifications of word sequences with respect to compositionalitymonolingual properties of nccs are not considered by either objective functionso the method will not detect phrases that are 
translated wordforword despite noncompositional semantics such as the english metaphors quotivory towerquot and quotbanana republicquot which translate literally into frenchon the other hand the method will detect word sequences that are often paraphrased in translation but have perfectly compositional meanings in the monolingual sensefor example quottax systemquot is most often translated into french as quotregime fiscalequot each new batch of validated nccs raises the value of the objective function for the given application as demonstrated in section 8you can skip ahead to table 4 for a random sample of the nccs that the method validated for use in a machine translation taskthe ncc detection method makes some assumptions about the properties of statistical translation models but no assumptions about the data from which the models are constructedtherefore the method is applicable to parallel data other than parallel textsfor example section 8 applies the method to orthographic and phonetic representations of english words to discover the nccs of english orthographya translation model can be constructed automatically from texts that exist in two languages the more accurate algorithms used for constructing translation models including the them algorithm alternate between two phasesin the first phase the algorithm finds and counts the most likely links between word tokens in the two halves of the bitextlinks connect words that are hypothesized to be mutual translationsin the second phase the algorithm estimates translation probabilities by dividing the link counts by the total number of linkslet s and t represent the distributions of linked words in the source and target textsa simple translation model is just a joint probability distribution pr which indicates the probability that a randomly selected link in the bitext linksthe decision whether a given sequence of words should count as an ncc can be made automatically if it can be expressed in terms of an explicit objective function for the given applicationthe first application i will consider is statistical machine translation involving a directed translation model and a target language model of the sort advocated by brown et al if only the translation model may be varied then the objective function for this application should be based on how well the translation model predicts the distribution of words in the target languagein information theory one such objective function is called mutual informationmutual information measures how well one random variable predicts another3 when pr is a text translation model mutual information indicates how well the model can predict the distribution of words in the target text given the distribution of words in the source text and vice versathis objective function may also be used for optimizing crosslanguage information retrieval where translational distributions must be estimated either for queries or for documents before queries and documents can be compared figure 1 shows a simple example of how recognition of nccs increases the mutual information of translation modelsthe english word quotbalancequot is most often translated into french as quotequilibrequot and quotsheetquot usually becomes quotfeuillequot however a quotbalance sheetquot is a quotbilanquot a translation model that does not recognize quotbalance sheetquot as an ncc would distribute the translation probabilities of quotbilanquot over multiple english words as shown in the incorrect modelthe incorrect model is uncertain about how 
quotbilanquot should be translatedon the other hand the correct model which recognizes quotbalance sheetquot as an ncc is completely certain about its translationas a result the mutual information pf the incorrect model isan explicit objective function immediately leads to a simple test of whether a given sequence of words should be treated as an ncc induce two translation models a trial translation model that involves the candidate ncc and a base translation model that does notif the value of the objective function is higher in the trial model than in the base model then the ncc is valid otherwise it is notin theory this test can be repeated for each sequence of words in the textin practice texts contain an enormous number of word sequences only a tiny fraction of which are nccs and it takes considerable computational effort to induce each translation modeltherefore it is necessary to test many nccs on each pair of translation modelssuppose we induce a trial translation model from texts e and f involving a number of nccs in the language s of e and compare it to a base translation model without any of those nccswe would like to keep the nccs that caused a net increase in the objective function i and discard those that caused a net decreasewe need some method of assigning credit for the difference in the value of i between the two modelsmore precisely we need a function ir over the words s e s such that the predictive value function ir represents the contribution of s to the objective function of the whole translation modeli will write simply i when t is clear from the contextcomparison of predictive value functions across translation models can only be done under let i and i be the predictive value functions for source words in the base translation model and in the trial translation model respectivelyunder assumption 1 the net change in the objective function effected by each candidate ncc xy is if azy 0 then xy is a valid ncc for the given applicationassumption 1 would likely be false if either x or y was a part of any candidate ncc other than sytherefore nccs that are tested at the same time must satisfy the mutual exclusion condition no word s e s may participate in more than one candidate ncc at the same timeassumption 1 may not be completely safe even with this restriction due to the imprecise nature of translation model construction algorithmsthe mutual exclusion condition implies that multiple tests must be performed to find the majority of nccs in a given textfurthermore equation 4 allows testing of only twoword nccscertainly longer nccs existgiven parallel texts e and f the following algorithm runs multiple ncc tests and allows for recognition of progressively longer nccs fortunately the objective function in equations 1 3induce a base translation model between e is already a summation over source wordsso its and f the algorithm can also be run in quottwosidedquot mode so that it looks for nccs in e and in f on alternate iterationsthis mode enables the translation model to link nccs in one language to nccs in the otherin its simplest form the algorithm only considers adjacent words as candidate nccshowever function words are translated very inconsistently and it is difficult to model their translational distributions accuratelyto make discovery of nccs involving function words more likely i consider content words that are separated by one or two functions words to be adjacentthus nccs like quotblow whistlequot and quoticing cakequot may contain gapsfusing nccs with gaps may fuse 
some words incorrectly when the ncc is a frozen expressionfor example we would want to recognize that quoticing cakequot is an ncc when we see it in new text but not if it occurs in a sentence like quotmary ate the icing off the cakequot it is necessary to determine whether the gap in a given ncc is fixed or notthus the price for this flexibility provided by ncc gaps is that before step 7 the algorithm fills gaps in proposed nccs by looking through the textsometimes nccs have multiple possible gap fillers for example quotmake up myyourhistheir mindquot when the gap filling procedure finds two or three possible fillers the most frequent filler is used and the rest are ignored in the hope that they will be discovered on the next iterationwhen there are more than three possible fillers the ncc retains the gapthe token fuser knows to shift all words in the ncc to the location of the leftmost wordeg an instance of the previous example in the text might be fused as quotmake_up__mind hisquot in principle the ncc discovery algorithm could iterate until axy 0 for a candidate ncc xy before inducing a translation model that involves this nccto do so it is necessary to estimate i i and i using only the base translation modelfirst a bit of notationlet lc and rc denote word contexts to the left and to the rightlet be the set of tokens of x whose right context is y and vice versa for now i and i can be estimated under assumption 2 when x occurs without y in its context it will be linked to the same target words by the trial translation model as by the base translation model and likewise for y without x estimating ii is more difficult because it requires knowledge of the entire translational distributions of both x and y conditioned on all the contexts of x and ysince we wish to consider hundreds of candidate nccs simultaneously and contexts from many megabytes of text all this information would not fit on disk let alone in memorythe best we can do is approximate with lowerorder distributions that are easier to computethe approximation begins with assumption 3 implies that for all t e t pr pr pr the approximation continues with under assumptions 3 and 4 we can estimate i as shown in figure 2the final form of equation 5 allows us to partition all the terms in equation 4 into two sets one for each of the components of the candidate ncc where terms in all the equation 12 depend only on the probability distributions pr pr and prall the terms in equation 13 depend only on pr pr and prthese distributions can be computed efficiently by memoryexternal sorting and streamed accumulationin bagofwords translation each word in the source text is simply replaced with its most likely translationno target language model is involvedfor this application it is sufficient to predict only the maximum likelihood translation of each source wordthe rest of the translational distribution can be ignoredlet mr be the most likely translation of each source word s according to the translation model again i will write simply m when t is clear from the contextthe objective function v for this application follows by analogy with the mutual information function in equation 1 the kronecker ö function is equal to one when its arguments are identical and zero otherwisethe form of the objective function again permits easy distribution of its value over the s e s the formula for estimating the net change in the objective function due to each candidate ncc remains the same it is easier to estimate the values of v using only the base 
translation model than to estimate the values of since only the most likely translations need to be considered instead of entire translational distributions v and v are again estimated under assumption 2 v can be estimated without making the strong assumptions 3 and 4instead i use the weaker assumption 5 let tx and ty be the most frequent translations of x and y in each other presence in the base translation modelthe most likely translation of xy in the trial translation model will be the more frequent of tx and tithis quantity can be computed exactly at a reasonable computational expenseto demonstrate the method applicability to data other than parallel texts and to illustrate some of its interesting properties i describe my last experiment firsti applied the mutual information objective function and its associated predictive value function to a data set consisting of spellings and pronunciations of 17381 english wordstable 1 shows the nccs of english spelling that the algorithm discovered on the first 10 iterationsthe table reveals some interesting behavior of the algorithmthe nccs quoterquot quotngquot and quotowquot were validated because this data set represents the sounds usually produced by these letter combinations with one phonemethe ncc quotesquot most often appears in wordfinal position where the quotequot is silenthowever when quotesquot is not wordfinal the quotequot is usually not silent and the most frequent following letter is quotsquot which is why the ncc quotessquot was validatednccs like quottioquot and quotoughquot are built up over multiple iterations sometimes out of pairs of previously discovered nccsthe other two experiments were carried out on transcripts of canadian parliamentary debates known as the hansardsfrench and english versions of these texts were aligned by sentence using the method of gale church morphological variants in both languages were stemmed to a canonical formthirteen million words were used for training and another two and a half million were used for testingall translation models were induced using the method of melamed six iterations of the ncc discovery algorithm were run in quottwosidedquot mode using the objective function and five iterations were run using the objective function v each iteration took approximately 78 hours on a 167mhz ultrasparc processor running unoptimized perl codetables 2 and 3 chart the ncc discovery processthe nccs proposed for the v objective function were much more likely to be validated than those proposed for i because the predictive value function v is much easier to estimate a priori than the predictive value function iin 3 iterations on the english side of the bitext 192 nccs were validated for i and 1432 were validated for v of the 1432 nccs validated for v 84 nccs consisted of 3 words 3 consisted of 4 words and 2 consisted of 5 wordsthe french nccs were longer on average due to the frequent quotn de nquot construction for noun compoundsthe first experiment on the hansards involved the mutual information objective function i and its associated predictive value function in equation 3the first step in the experiment was the construction of 5 new versions of the test data in addition to the original versionversion k of the test data was constructed by fusing all nccs validated up to iteration k on the training datathe second step was to induce a translation model from each version of the test datathere was no opportunity to measure the impact of ncc recognition under the objective function i on any real 
application but figure 3 shows that the mutual information of successive test translation models rose as desiredthe second experiment was based on the simpler objective function v and its associated predictive value function in equation 16the impact of ncc recognition on the bagofwords translation task was measured directly using bitextbased lexicon evaluation bible is a family of evaluation algorithms for comparing different translation methods objectively and automaticallythe algorithms are based on the observation that if translation method a is better than translation method b and each method produces a translation from one half of a heldout test bitext then the other half of that bitext will be more similar to the translation produced by a than to the translation produced by bin the present experiment the translation method was always bagofwords translation but using different translation modelsthe similarity of two texts was measured in terms of word precision and word recall in aligned sentence pairs ignoring word orderi compared the 6 base translation models induced in 6 iterations of the algorithm in section 55 the first model is numbered 0 to indicate that it did not recognize any nccsthe 6 translation models were evaluated on the test bitext using the following bible algorithm the bible algorithm compared the 6 models in both directions of translationthe results are detailed in figures 4 and 5figure 6 shows fmeasures that are standard in the information retrieval literature the absolute recall and precision values in these figures are quite low but this is not a reflection of the quality of the translation modelsrather it is an expected outcome of bible evaluation which is quite harshmany translations are not word for word in real bitexts and bible does not even give credit for synonymsthe best possible performance on this kind of bible evaluation has been estimated at 62 precision and 60 recall the purpose of bible is internally valid comparison rather than externally valid benchmarkingon a sufficiently large test bitext bible can expose the slightest differences in translation qualitythe number of nccs validated on each iteration was never more than 25 of the vocabulary sizethus the curves in figures 4 and 5 have a very small range but the trends are cleara qualitative assessment of the ncc discovery method can be made by looking at table 4it contains a random sample of 50 of the english nccs accumulated in the first five iterations of the algorithm in section 5 using the simpler objective function v all of the nccs in the table are noncompositional with respect to the objective function v many of the nccs like quotred tapequot and quotblaze the trailquot are true idiomssome nccs are incompleteegquotflowquot has not yet been recognized as a noncompositional part of quotflowthrough sharequot and likewise for quotheadquot in quotrear its ugly headquot these nccs would likely be completed if the algorithm were allowed to run for more iterationssome of the other entries deserve more explanationfirst quotdella nocequot is the last name of a canadian member of parliamentevery occurrence of this name in the french training text was tokenized as quotdella nocequot with a lowercase quotnquot because quotnocequot is a common noun in french meaning quotmarriagequot and the tokenization algorithm lowercases all capitalized words that are found in the lexiconwhen this word occurs in the french text without quotdellaquot its english translation is quotmarriagequot but when it occurs as part 
of the name its translation is quotnocequot so the french bigram quotdella nocequot is noncompositional with respect to the objective function v it was validated as an nccon a subsequent iteration the algorithm found that the english bigram quotdella nocequot was always linked to one french word the ncc quotdella_nocequot so it decided that the english quotdella nocequot must also be an nccthis is one of the few noncompositional personal names in the hansardsanother interesting entry in the table is the last onethe capitalized english words quotgenericquot and quotassociationquot are translated with perfect consistency to quotgenericquot and quotassociationquot respectively in the training textthe translation of the middle two words however is noncompositionalwhen quotpharmaceuticalquot and quotindustryquot occur together they are rendered in the french text without translation as quotpharmaceutical industryquot when they occur separately they are translated into quotpharmaceutiquequot and quotindustriequot thus the english bigram quotpharmaceutical industryquot is an ncc but the words that always occur around it are not part of the nccsimilar reasoning applies to quotship unprocessed uraniumquot the bigram is an ncc because its components are translated noncompositionally whenever they cooccurhowever quoturaniumquot is always translated as quoturaniumquot so it is not a part of the nccthis ncc demonstrates that valid nccs may cross the boundaries of grammatical constituentsin their seminal work on statistical machine translation brown et al implicitly accounted for nccs in the target language by estimating quotfertilityquot distributions for words in the source languagea source word s with fertility n could generate a sequence of 71 target words if each word in the sequence was also in the translational distribution of s and the target language model assigned a sufficiently high probability to the sequencehowever brown et al models do not account for nccs in the source languagerecognition of sourcelanguage nccs would certainly improve the performance of their models but brown et al warn that one must be discriminating in choosing multiword ceptsthe caution that we have displayed thus far in limiting ourselves to cepts with fewer than two words was motivated primarily by our respect for the featureless desert that multiword cepts offer a priori the heuristics in section 6 are designed specifically to find the interesting features in that featureless desertfurthermore translational equivalence relations involving explicit representations of targetlanguage nccs are more useful than fertility distributions for applications that do translation by table lookupmany authors define quotcollocationsquot in terms of monolingual frequency and partofspeech patternsmarkedly high frequency is a necessary property of nccs because otherwise they would fall out of usehowever at least for translationrelated applications it is not a sufficient propertynoncompositional translation cannot be detected reliably without looking at translational distributionsthe deficiency of criteria that ignore translational distributions is illustrated by their propensity to validate most personal names as quotcollocationsquot at least among west european languages translations of the vast majority of personal names are perfectly compositionalseveral authors have used mutual information and similar statistics as an objective function for word clustering for automatic determination of phonemic baseforms and for language modeling 
for speech recognition although the applications considered in this paper are different the strategy is similar search a space of data models for the one with maximum predictive powerwang et at also employ parallel texts and independence assumptions that are similar to those described in section 6like brown et at they report a modest improvement in model perplexity and encouraging qualitative resultsunfortunately their estimation method cannot propose more than ten or so wordpair clusters before the translation model must be reestimatedalso the particular clustering method that they hoped to improve using parallel data is not very robust for low frequenciesso like smadja et at they were forced to ignore all words that occur less than five timesif appropriate objective functions and predictive value functions can be found for these other tasks then the method in this paper might be applied to themthere has been some research into matching compositional phrases across bitextsfor example kupiec presented a method for finding translations of whole noun phraseswu showed how to use an existing translation lexicon to populate a database of quotphrasal correspondencesquot for use in examplebased mtthese compositional translation patterns enable more sophisticated approaches to mthowever they are only useful if they can be discovered reliably and efficientlytheir time may come when we have a better understanding of how to model the human translation processit is well known that two languages are more informative than one i have argued that texts in two languages are not only preferable but necessary for discovery of noncompositional compounds for translationrelated applicationsgiven a method for constructing statistical translation models nccs can be discovered by maximizing the models informationtheoretic predictive value over parallel data setsthis paper presented an efficient algorithm for such ontological discoveryproper recognition of nccs resulted in improved performance on a simple mt tasklists of nccs derived from parallel data may be useful for nlp applications that do not involve parallel datatranslationoriented ncc lists can be used directly in applications that have a human in the loop such as computerassisted lexicography computerassisted language learning and corpus linguisticsto the extent that translationoriented definitions of compositionality overlap with other definitions ncc lists derived from parallel data may benefit other applications where nccs play a role such as information retrieval and language modeling for speech recognition to the extent that different applications have different objective functions optimizing these functions can benefit from an understanding of how they differthe present work was a step towards such understanding because quotan explication of a monolingual idiom might best be given after bilingual idioms have been properly understoodquot the ncc discovery method makes few assumptions about the data sets from which the statistical translation models are inducedas demonstrated in section 8 the method can find nccs in english letter strings that are aligned with their phonetic representationswe hope to use this method to discover nccs in other kinds of parallel dataa natural next target is bitexts involving asian languagesperhaps the method presented here combined with an appropriate translation model can make some progress on the word identification problem for languages like chinese and japanese2 wrongful conviction erreur judiciaire 2 weak sister 
parent pauvre
2  of both the users and providers of transportation : des utilisateurs et des transporteurs
2  understand the motivation : saisir le motif
2  swimming pool : piscine
2  ship unprocessed uranium : expedier de l'uranium non raffine
2  by reason of insanity : pour because d'alienation mentale
2  l'agence de presse libre du québec : l'agence de presse libre du québec
2  do cold weather research : etudier l'effet du froid
2  the bread basket of the nation : le grenier du canada
2  turn back the boatload of european jews : renvoyer tout ces juifs europeens
2  generic pharmaceutical industry association : generic pharmaceutical industry association
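The validation test described above reduces to comparing per-word predictive values between a base and a trial translation model. The following is a minimal Python sketch of that comparison, under two assumptions not spelled out in the text: the translation model is stored as a plain dict mapping (source, target) word pairs to link counts, and the trial model represents a fused candidate NCC as the token "x_y". Function and variable names are illustrative, not taken from the paper's implementation.

```python
from collections import Counter
from math import log2

def predictive_value(link_counts, total_links):
    """i(s): each source word's contribution to the model's mutual information.

    link_counts maps (source_word, target_word) -> link count; joint and
    marginal probabilities are link counts divided by the total number of
    links, as in a simple link-based translation model.
    """
    joint = {st: c / total_links for st, c in link_counts.items()}
    p_s, p_t = Counter(), Counter()
    for (s, t), p in joint.items():
        p_s[s] += p
        p_t[t] += p
    i = Counter()
    for (s, t), p in joint.items():
        i[s] += p * log2(p / (p_s[s] * p_t[t]))
    return i

def validate_nccs(base_i, trial_i, candidates):
    """Keep a candidate NCC (x, y) when the trial model's predictive value
    for the fused token exceeds the base model's combined value for x and y,
    i.e. when delta_xy > 0."""
    kept = []
    for x, y in candidates:
        fused = f"{x}_{y}"   # assumed spelling of the fused token in the trial model
        delta = trial_i.get(fused, 0.0) - base_i.get(x, 0.0) - base_i.get(y, 0.0)
        if delta > 0:
            kept.append((x, y, delta))
    return kept
```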
W97-0311
Automatic discovery of noncompositional compounds in parallel data. Automatic segmentation of text into minimal content-bearing units is an unsolved problem even for languages like English. Spaces between words offer an easy first approximation, but this approximation is not good enough for machine translation, where many word sequences are not translated word for word. This paper presents an efficient automatic method for discovering sequences of words that are translated as a unit. The method proceeds by comparing pairs of statistical translation models induced from parallel texts in two languages. It can discover hundreds of noncompositional compounds on each iteration, and it constructs longer compounds out of shorter ones. Objective evaluation on a simple machine translation task has shown the method's potential to improve the quality of MT output. The method makes few assumptions about the data, so it can be applied to parallel data other than parallel texts, such as word spellings and pronunciations. We propose a method for the recognition of multiword compounds in bitexts that is based on the predictive value of a translation model. We investigate techniques for identifying noncompositional compounds in English-French parallel corpora and emphasise that translation models that take noncompositional compounds into account are more accurate.
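Because many candidates must be tested against a single pair of translation models, the algorithm also needs candidate pairs that do not share words. The sketch below shows one way the candidate-generation step could look, assuming tokenized sentences, a placeholder function-word list, and some a-priori score (for example co-occurrence frequency) used to break competition between overlapping pairs; the greedy selection is our own simple way to satisfy the mutual exclusion condition, not a procedure prescribed by the paper.

```python
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "de", "la", "le"}  # illustrative stop set

def candidate_nccs(tokens, prior_score):
    """Propose two-word candidate NCCs from one tokenized sentence.

    Content words separated by at most two function words count as adjacent,
    so candidates such as ("blow", "whistle") may span a gap.  Candidates are
    then chosen greedily by score so that no word takes part in more than one
    candidate at a time (the mutual exclusion condition).
    """
    content = [(i, w) for i, w in enumerate(tokens) if w not in FUNCTION_WORDS]
    scored = []
    for (i, x), (j, y) in zip(content, content[1:]):
        if j - i <= 3:                      # gap of at most two function words
            scored.append((prior_score.get((x, y), 0.0), x, y))
    used, chosen = set(), []
    for _, x, y in sorted(scored, reverse=True):
        if x not in used and y not in used:
            chosen.append((x, y))
            used.update((x, y))
    return chosen
```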
a corpusbased approach for building semantic lexicons semantic knowledge can be a great asset to natural language processing systems but it is usually handcoded for each application although some semantic information is available in generalpurpose knowledge bases such as wordnet and cyc many applications require domainspecific lexicons that represent words and categories for a particular topic in this paper we present a corpusbased method that can be used to build semantic lexicons for specific categories the input to the system is a small set of seed words for a category and a representative text corpus the output is a ranked list of words that are associated with the category a user then reviews the topranked words and decides which ones should be entered in the semantic lexicon in experiments with five categories users typically found about 60 words per category in 1015 minutes to build a core semantic lexicon semantic information can be helpful in almost all aspects of natural language understanding including word sense disambiguation selectional restrictions attachment decisions and discourse processingsemantic knowledge can add a great deal of power and accuracy to natural language processing systemsbut semantic information is difficult to obtainin most cases semantic knowledge is encoded manually for each applicationthere have been a few largescale efforts to create broad semantic knowledge bases such as wordnet and cyc while these efforts may be useful for some applications we believe that they will never fully satisfy the need for semantic knowledgemany domains are characterized by their own sublanguage containing terms and jargon specific to the fieldrepresenting all sublanguages in a single knowledge base would be nearly impossiblefurthermore domainspecific semantic lexicons are useful for minimizing ambiguity problemswithin the context of a restricted domain many polysemous words have a strong preference for one word sense so knowing the most probable word sense in a domain can strongly constrain the ambiguitywe have been experimenting with a corpusbased method for building semantic lexicons semiautomaticallyour system uses a text corpus and a small set of seed words for a category to identify other words that also belong to the categorythe algorithm uses simple statistics and a bootstrapping mechanism to generate a ranked list of potential category wordsa human then reviews the top words and selects the best ones for the dictionaryour approach is geared toward fast semantic lexicon construction given a handful of seed words for a category and a representative text corpus one can build a semantic lexicon for a category in just a few minutesin the first section we describe the statistical bootstrapping algorithm for identifying candidate category words and ranking themnext we describe experimental results for five categoriesfinally we discuss our experiences with additional categories and seed word lists and summarize our resultsour work is based on the observation that category members are often surrounded by other category members in text for example in conjunctions lists appositives and nominal compounds given a few category members we wondered whether it would be possible to collect surrounding contexts and use statistics to identify other words that also belong to the categoryour approach was motivated by yarowsky word sense disambiguation algorithm and the notion of statistical salience although our system uses somewhat different statistical measures and techniqueswe begin 
with a small set of seed words for a categorywe experimented with different numbers of seed words but were surprised to find that only 5 seed words per category worked quite wellas an example the seed word lists used in our experiments are shown belowenergy fuel gas gasoline oil power financial bank banking currency dollar money military army commander infantry soldier troop vehicle airplane car jeep plane truck weapon bomb dynamite explosives gun rifle the input to our system is a text corpus and an initial set of seed words for each categoryideally the text corpus should contain many references to the categoryour approach is designed for domainspecific text processing so the text corpus should be a representative sample of texts for the domain and the categories should be semantic classes associated with the domaingiven a text corpus and an initial seed word list for a category c the algorithm for building a semantic lexicon is as follows the context windows do not cut across sentence boundariesnote that our context window is much narrower than those used by other researchers we experimented with larger window sizes and found that the narrow windows more consistently included words related to the target categorynote that this is not exactly a conditional probability because a single word occurrence can belong to more than one context windowfor example consider the sentence i bought an ak47 gun and an m16 riflethe word m16 would be in the context windows for both gun and rifle even though there was just one occurrence of it in the sentenceconsequently the category score for a word can be greater than 14next we remove stopwords numbers and any words with a corpus frequency 5we used a stopword list containing about 30 general nouns mostly pronouns and determiners the stopwords and numbers are not specific to any category and are common across many domains so we felt it was safe to remove themthe remaining nouns are sorted by category score and ranked so that the nouns most strongly associated with the category appear at the top5the top five nouns that are not already seed words are added to the seed word list dynamicallywe then go back to step 1 and repeat the processthis bootstrapping mechanism dynamically grows the seed word list so that each iteration produces a larger category contextin our experiments the top five nouns were added automatically without any human intervention but this sometimes allows noncategory words to dilute the growing seed word lista few inappropriate words are not likely to have much impact but many inappropriate words or a few highly frequent words can weaken the feedback processone could have a person verify that each word belongs to the target category before adding it to the seed word list but this would require human interaction at each iteration of the feedback cyclewe decided to see how well the technique could work without this additional human interaction but the potential benefits of human feedback still need to be investigatedafter several iterations the seed word list typically contains many relevant category wordsbut more importantly the ranked list contains many additional category words especially near the topthe number of iterations can make a big difference in the quality of the ranked listsince new seed words are generated dynamically without manual review the quality of the ranked list can deteriorate rapidly when too many noncategory words become seed wordsin our experiments we found that about eight iterations usually worked wellthe output of 
the system is the ranked list of nouns after the final iterationthe seed word list is thrown awaynote that the original seed words were already known to be category members and the new seed words are already in the ranked list because that is how they were selected2 finally a user must review the ranked list and identify the words that are true category membershow one defines a quottruequot category member is subjective and may depend on the specific application so we leave this exercise to a persontypically the words near the top of the ranked list are highly associated with the category but the density of category words decreases as one proceeds down the listthe user may scan down the list until a sufficient number of category words is found or as long as time permitsthe words selected by the user are added to a permanent semantic lexicon with the appropriate category labelour goal is to allow a user to build a semantic lexicon for one or more categories using only a small set of known category members as seed words and a text corpusthe output is a ranked list of potential category words that a user can review to create a semantic lexicon quicklythe success of this approach depends on the quality of the ranked list especially the density of category members near the topin the next section we describe experiments to evaluate our system2it is possible that a word may be near the top of the ranked list during one iteration but become buried at the bottom of the ranked list during later iterationshowever we have not observed this to be a problem so farwe performed experiments with five categories to evaluate the effectiveness and generality of our approach energy financial military vehicles and weaponsthe muc4 development corpus was used as the text corpus we chose these five categories because they represented relatively different semantic classes they were prevalent in the muc4 corpus and they seemed to be useful categoriesfor each category we began with the seed word lists shown in figure 1we ran the bootstrapping algorithm for eight iterations adding five new words to the seed word list after each cycleafter the final iteration we had ranked lists of potential category words for each of the five categoriesthe top 45 words3 from each ranked list are shown in figure 2while the ranked lists are far from perfect one can see that there are many category members near the top of each listit is also apparent that a few additional heuristics could be used to remove many of the extraneous wordsfor example our number processor failed to remove numbers with commas and the military category contains several ordinal numbers that could be easily identified and removedbut the key question is whether the ranked list contains many true category memberssince this is a subjective question we set up an experiment involving human judgesfor each category we selected the top 200 words from its ranked list and presented them to a userwe presented the words in random order so that the user had no idea how our system had ranked the wordsthis was done to minimize contextual effects each category was judged by two people independently4 the judges were asked to rate each word on a scale from 1 to 5 indicating how strongly it was associated with the categorysince category judgements can be highly subjective we gave them guidelines to help establish uniform criteriathe instructions that were given to the judges are shown in figure 3we asked the judges to rate the words on a scale from 1 to 5 because different degrees 
of category membership might be acceptable for different applicationssome applications might require strict cat3 note that some of these words are not nouns such as boarded and yousmadeour parser tags unknown words as nouns so sometimes unknown words are mistakenly selected for context windowsthe judges were members of our research group but not the authors alimoncovenas refers to an oil pipeline ala_aurora refers to an airportcriteria on a scale of 0 to 5 rate each word strength of association with the given category using the following criteriawe will use the category animal as an example5 core member of the category if a word is clearly a member of the category then it deserves a 5for example dogs and sparrows are members of the animal categoryif a word refers to a part of something that is a member of the category then it deserves a 4for example feathers and tails are parts of animalsif a word refers to something that is strongly associated with members of the category but is not actually a member of the category itself then it deserves a 3for example zoos and nests are strongly associated with animalsif a word refers to something that can be associated with members of the category but is also associated with many other types of things then it deserves a 2for example bowls and parks are weakly associated with animals1 no association with the category if a word has virtually no association with the category then it deserves a 1for example tables and moons have virtually no association with animals0 unknown word if you do not know what a word means then it should be labeled with a 0importantmany words have several distinct meaningsfor example the word quothorsequot can refer to an animal a piece of gymnastics equipment or it can mean to fool around if a word has any meaning associated with the given category then only consider that meaning when assigning numbersfor example the word quothorsequot would be a 5 because one of its meanings refers to an animal egory membership for example only words like gun rifle and bomb should be labeled as weaponsbut from a practical perspective subparts of category members might also be acceptablefor example if a cartridge or trigger is mentioned in the context of an event then one can infer that a gun was usedand for some applications any word that is strongly associated with a category might be useful to include in the semantic lexiconfor example words like ammunition or bullets are highly suggestive of a weaponin the umassmuc4 information extraction system the words ammunition and bullets were defined as weapons mainly for the purpose of selectional restrictionsthe human judges estimated that it took them approximately 1015 minutes on average to judge the 200 words for each categorysince the instructions allowed the users to assign a zero to a word if they did not know what it meant we manually removed the zeros and assigned ratings that we thought were appropriatewe considered ignoring the zeros but some of the categories would have been severely impactedfor example many of the legitimate weapons were not known to the judgesfortunately most of the unknown words were proper nouns with relatively unambiguous semantics so we do not believe that this process compromised the integrity of the experimentfinally we graphed the results from the human judgeswe counted the number of words judged as 5 by either judge the number of words judged as 5 or 4 by either judge the number of words judged as 5 4 or 3 by either judge and the number of words judged as 
either 5 4 3 or 2we plotted the results after each 20 words stepping down the ranked list to see whether the words near the top of the list were more highly associated with the category than words farther downwe also wanted to see whether the number of category words leveled off or whether it continued to growthe results from this experiment are shown in figures 48with the exception of the energy category we were able to find 2545 words that were judged as 4 or 5 for each categorythis was our strictest test because only true category members earned this ratingalthough this might not seem like a lot of category words 2545 words is enough to produce a reasonable core semantic lexiconfor example the words judged as 5 for each category are shown in figure 9figure 9 illustrates an important benefit of the corpusbased approachby sifting through a large text corpus the algorithm can find many relevant category words that a user would probably not enter in a semantic lexicon on their ownfor example suppose a user wanted to build a dictionary of vehicle wordsmost people would probably define words such as car truck plane and automobilebut it is doubtful that most people would think of words like gunships fighter carrier and ambulancesthe corpusbased algorithm is especially good at identifying words that are common in the text corpus even though they might not be commonly used in generalas another example specific types of weapons might not even be known to most users but they are abundant in the muc4 corpusif we consider all the words rated as 3 4 or 5 then we were able to find about 5065 words for every category except energymany of these words would be useful in a semantic dictionary for the categoryfor example some of the words rated as 3 for the vehicle category include flight flights aviation pilot airport and highwaysmost of the words rated as 2 are not specific to the target category but some of them might be useful for certain tasksfor example some words judged as 2 for the energy category are spill pole tower and fieldsthese words may appear in many different contexts but in texts about energy topics these words are likely to be relevant and probably should be defined in the dictionarytherefore we expect that a user would likely keep some of these words in the semantic lexicon but would probably be very selectivefinally the graphs show that most of the acquisition curves displayed positive slopes even at the end of the 200 wordsthis implies that more category words would likely have been found if the users had reviewed more than 200 wordsthe one exception again was the energy category which we will discuss in the next sectionthe size of the ranked lists ranged from 442 for the financial category to 919 for the military category so it would be interesting to know how many category members would have been found if we had given the entire lists to our judgeswhen we first began this work we were unsure about what types of categories would be amenable to this approachso we experimented with a number of different categoriesfortunately most of them worked fairly well but some of them did notwe do not claim to understand exactly what types of categories will work well and which ones will not but our early experiences did she would some light on the strengths and weaknesses of this approachin addition to the previous five categories we also experimented with categories for location commercial and personthe location category performed very well using seed words such as city town and provincewe did 
not formally evaluate this category because most of the category words were proper nouns and we did not expect that our judges would know what they werebut it is worth noting that this category achieved good results presumably because location names often cluster together in appositives conjunctions and nominal compoundsfor the commercial category we chose seed words such as store shop and marketonly a few new commercial words were identified such as hotel and restaurantin retrospect we realized that there were probably few words in the muc4 corpus that referred to commercial establishmentsthe relatively poor performance of the energy category was probably due to the same problemif a category is not wellrepresented in the corpus then it is doomed because inappropriate words become seed words in the early iterations and quickly derail the feedback loopthe person category produced mixed resultssome good category words were found such as rebel advisers criminal and citizenbut many of the words referred to organizations groups and actions some of these words seemed reasonable but it was hard to draw a line between specific references to people and concepts like organizations and groups that may or may not consist entirely of peoplethe large proportion of action words also diluted the listmore experiments are needed to better understand whether this category is inherently difficult or whether a more carefully chosen set of seed words would improve performancemore experiments are also needed to evaluate different seed word liststhe algorithm is clearly sensitive to the initial seed words but the degree of sensitivity is unknownfor the five categories reported in this paper we arbitrarily chose a few words that were central members of the categoryour initial seed words worked well enough that we did not experiment with them very muchbut we did perform a few experiments varying the number of seed wordsin general we found that additional seed words tend to improve performance but the results were not substantially different using five seed words or using tenof course there is also a law of diminishing returns using a seed word list containing 60 category words is almost like creating a semantic lexicon for the category by handbuilding semantic lexicons will always be a subjective process and the quality of a semantic lexicon is highly dependent on the task for which it will be usedbut there is no question that semantic knowledge is essential for many problems in natural language processingmost of the time semantic knowledge is defined manually for the target application but several techniques have been developed for generating semantic knowledge automaticallysome systems learn the meanings of unknown words using expectations derived from other word definitions in the surrounding context other approaches use example or casebased methods to match unknown word contexts against previously seen word contexts our task orientation is a bit different because we are trying to construct a semantic lexicon for a target category instead of classifying unknown or polysemous words in contextto our knowledge our system is the first one aimed at building semantic lexicons from raw text without using any additional semantic knowledgethe only lexical knowledge used by our parser is a partofspeech dictionary for syntactic processingalthough we used a handcrafted partofspeech dictionary for these experiments statistical and corpusbased taggers are readily available our corpusbased approach is designed to support fast 
semantic lexicon constructiona user only needs to supply a representative text corpus and a small set of seed words for each target categoryour experiments suggest that a core semantic lexicon can be built for each category with only 1015 minutes of human interactionwhile more work needs to be done to refine this procedure and characterize the types of categories it can handle we believe that this is a promising approach for corpusbased semantic knowledge acquisitionthis research was funded by nsf grant iri9509820 and the university of utah research committeewe would like to thank david bean jeff lorenzen and kiri wagstaff for their help in judging our category lists
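The bootstrapping procedure described above fits in a short sketch. Two details are assumptions made for brevity rather than facts from the text: sentences are taken to arrive as lists of noun tokens, and the context window is taken to be one token on each side of a seed occurrence, whereas the paper only says its window is narrow, does not cross sentence boundaries, and looks at the nearest noun phrase on each side. Names and defaults are illustrative.

```python
from collections import Counter

def bootstrap_lexicon(sentences, seed_words, iterations=8, grow=5,
                      stopwords=frozenset(), min_freq=5):
    """Rank candidate category words by a simple category score.

    score(w) = (# seed-word context windows containing w) / corpus_freq(w)

    After each pass the `grow` best new words are promoted to seeds, as in
    the bootstrapping loop; the final ranked list is what a user reviews.
    """
    corpus_freq = Counter(w for sent in sentences for w in sent)
    seeds = set(seed_words)
    ranked = []
    for _ in range(iterations):
        window_hits = Counter()
        for sent in sentences:
            for i, w in enumerate(sent):
                if w in seeds:
                    for j in (i - 1, i + 1):        # narrow window, same sentence only
                        if 0 <= j < len(sent):
                            window_hits[sent[j]] += 1
        score = {w: n / corpus_freq[w] for w, n in window_hits.items()
                 if corpus_freq[w] >= min_freq      # drop rare words
                 and w not in stopwords and not w.isdigit()}
        ranked = sorted(score, key=score.get, reverse=True)
        seeds.update([w for w in ranked if w not in seeds][:grow])
    return ranked   # the grown seed list is thrown away; the user reviews this list
```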
W97-0313
A corpus-based approach for building semantic lexicons. Semantic knowledge can be a great asset to natural language processing systems, but it is usually hand-coded for each application. Although some semantic information is available in general-purpose knowledge bases such as WordNet and Cyc, many applications require domain-specific lexicons that represent words and categories for a particular topic. In this paper we present a corpus-based method that can be used to build semantic lexicons for specific categories. The input to the system is a small set of seed words for a category and a representative text corpus. The output is a ranked list of words that are associated with the category. A user then reviews the top-ranked words and decides which ones should be entered in the semantic lexicon. In experiments with five categories, users typically found about 60 words per category in 10-15 minutes to build a core semantic lexicon. We find that nouns in conjunctions or appositives tend to be semantically related, and we suggest using conjunction and appositive data to cluster nouns; we approximate this data by looking at the nearest NP on each side of a particular NP, and we also give credit for words associated with, but not belonging to, a particular category.
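The paper that follows clusters untagged occurrences of an ambiguous word after recoding nominal features (part-of-speech tags, morphology, co-occurrence flags) as a dissimilarity matrix of feature-mismatch counts. As a preview of that representation, here is a minimal sketch matching the worked 4x4 example in that section; the function name and input format are ours, not the paper's.

```python
def dissimilarity_matrix(observations):
    """Pairwise mismatch counts between observations with nominal features.

    observations: list of equal-length tuples of nominal feature values.
    Cell (i, j) is the number of features on which observations i and j
    disagree; identical observations get 0.  Rows of this matrix are what
    the agglomerative clustering methods operate on.
    """
    n = len(observations)
    d = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            mismatches = sum(a != b for a, b in zip(observations[i], observations[j]))
            d[i][j] = d[j][i] = mismatches
    return d
```

McQuitty's method, as described in that section, would then repeatedly merge the two clusters with the smallest entry and update the remaining distances by simple averaging of mismatch counts.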
distinguishing word senses in untagged text this paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text the methods described in this paper mcquitty similarity analysis ward minimumvariance method and the them algorithm assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text these methods and feature sets are found to be more successful in disambiguating nouns rather than adjectives or verbs overall the most accurate of these procedures is mcquitty similarity analysis in combination with a high dimensional feature set statistical methods for natural language processing are often dependent on the availability of costly knowledge sources such as manually annotated text or semantic networksthis limits the applicability of such approaches to domains where this hard to acquire knowledge is already availablethis paper presents three unsupervised learning algorithms that are able to distinguish among the known senses of a word based only on features that can be automatically extracted from untagged textthe object of unsupervised learning is to determine the class membership of each observation in a sample without using training examples of correct classificationswe discuss three algorithms mcquitty similarity analysis ward minimumvariance method and the them algorithm that can be used to distinguish among the known senses of an ambiguous word without the aid of disambiguated examplesthe them algorithm produces maximum likelihood estimates of the parameters of a probabilistic model where that model has been specified in advanceboth ward and mcquitty methods are agglomerative clustering algorithms that form classes of unlabeled observations that minimize their respective distance measures between class membersthe rest of this paper is organized as followsfirst we present introductions to ward and mcquitty methods and the them algorithm we discuss the thirteen words and the three feature sets used in our experimentswe present our experimental results and close with a discussion of related work in general clustering methods rely on the assumption that classes occupy distinct regions in the feature spacethe distance between two points in a multidimensional space can be measured using any of a wide variety of metrics observations are grouped in the manner that minimizes the distance between the members of each classward and mcquitty method are agglomerative clustering algorithms that differ primarily in how they compute the distance between clustersall such algorithms begin by placing each observation in a unique cluster ie a cluster of onethe two closest clusters are merged to form a new cluster that replaces the two merged clustersmerging of the two closest clusters continues until only some specified number of clusters remainhowever our data does not immediately lend itself to a distancebased interpretationour features represent partofspeech tags morphological characteristics and word cooccurrence such features are nominal and their values do not have scalegiven a pos feature for example we could choose noun 1 verb 2 adjective 3 and adverb 4that adverb is represented by a larger number than noun is purely coincidental and implies nothing about the relationship between nouns and adverbsthus before we employ either clustering algorithm we represent our data sample in terms of a dissimilarity matrixsuppose that we have n 
observations in a sample where each observation has q featuresthis data is represented in a n xn dissimilarity matrix such that the value in cell where i represents the row number and j represents the column is equal to the number of features in observations i and j that do not matchfor example in figure 1 we have four observationswe record the values of three nominal features for each observationthis sample can be represented by the 4 x 4 dissimilarity matrix shown in figure 2in the dissimilarity matrix cells and have the value 2 indicating that the first and second observations in figure 1 have different values for two of the three featuresa value of 0 indicates that observations i and j are identicalwhen clustering our data each observation is represented by its corresponding row in the dissimilarity matrixusing this representation observations that fall close together in feature space are likely to belong to the same class and are grouped together into clustersin this paper we use ward and mcquitty methods to form clusters of observations where each observation is represented by a row in a dissimilarity matrixin ward method the internal variance of a cluster is the sum of squared distances between each observation in the cluster and the mean observation for that cluster at each step in ward method a new cluster ckl with the smallest possible internal variance is created by merging the two clusters ck and cl that have the minimum variance between themthe variance between ck and cl is computed as follows where tk is the mean observation for cluster ck nk is the number of observations in ck and tr and nl are defined similarly for climplicit in ward method is the assumption that the sample comes from a mixture of normal distributionswhile nlp data is typically not well characterized by a normal distribution there is evidence that our data when represented by a dissimilarity matrix can be adequately characterized by a normal distributionhowever we will continue to investigate the appropriateness of this assumptionin mcquitty method clusters are based on a simple averaging of the feature mismatch counts found in the dissimilarity matrixat each step in mcquitty method a new cluster ckl is formed by merging the clusters ck and cl that have the fewest number of dissimilar features between themthe clusters to be merged ck and cl are identified by finding the cell where k 1 that has the minimum value in the dissimilarity matrixonce the new cluster ckl is created the dissimilarity matrix is updated to reflect the number of dissimilar features between ckl and all other existing clustersthe dissimilarity between any existing cluster cr and ckl is computed as where dki is the number of dissimilar features between clusters ck and c1 and dli is similarly defined for clusters cl and c1this is simply the average number of mismatches between each component of the new cluster and the existing clusterunlike ward method mcquitty method makes no assumptions concerning the distribution of the data samplethe expectation maximization algorithm commonly known as the them algorithm is an iterative estimation procedure in which a problem with missing data is recast to make use of complete data estimation techniquesin our work the sense of an ambiguous word is represented by a feature whose value is missingin order to use the them algorithm the parametric form of the model representing the data must be knownin these experiments we assume that the model form is the naive bayes in this model all features are 
conditionally independent given the value of the classification feature ie the sense of the ambiguous wordthis assumption is based on the success of the naive bayes model when applied to supervised wordsense disambiguation there are two potential problems when using the them algorithmfirst it is computationally expensive and convergence can be slow for problems with large numbers of model parametersunfortunately there is little to be done in this case other than reducing the dimensionality of the problem so that fewer parameters are estimatedsecond if the likelihood function is very irregular it may always converge to a local maxima and not find the global maximumin this case an alternative is to use the more computationally expensive method of gibbs sampling at the heart of the them algorithm lies the qfunctionthis is the expected value of the loglikelihood function for the complete data d where y is the observed data and s is the missing sense value here c is the current value of the maximum likelihood estimates of the model parameters and 02 is the improved estimate that we are seeking p is the likelihood of observing the complete data given the improved estimate of the model parameterswhen approximating the maximum of the likelihood function the them algorithm starts from a randomly generated initial estimate of c and then replaces 0 by the 0i which maximizes qthis process is broken down into two steps expectation and maximization the estep finds the expected values of the sufficient statistics of the complete model using the current estimates of the model parametersthe mstep makes maximum likelihood estimates of the model parameters using the sufficient statistics from the estepthese steps iterate until the parameter estimates 0 and 0i convergethe mstep is usually easy assuming it is easy for the complete data problem the estep is not necessarily sohowever for decomposable models such as the naive bayes the estep simplifies to the calculation of the expected counts in the marginal distributions of interdependent features where the expectation is with respect to 0the mstep simplifies to the calculation of new parameter estimates from these countsfurther these expected counts can be calculated by multiplying the sample size n by the probability of the complete data within each marginal distribution given 0 and the observed data within each marginal yrnthis simplifies to where count i is the current estimate of the expected count and p is formulated using 0for the naive bayes model with 3 observable features a b c and an unobservable classification feature 5 where 0 p p p p the e and msteps are where s a b and c denote specific values of s a b and c respectively and p and p are defined analogously to pexperiments were conducted to disambiguate 13 different words using 3 different feature setsin these experiments each of the 3 unsupervised disambiguation methods is applied to each of the 13 words using each of the 3 feature sets this defines a total of 117 different experimentsin addition each experiment was repeated 25 times in order to study the variance introduced by randomly selecting initial parameter estimates in the case of the them algorithm and randomly selecting among equally distant groups when clustering using ward and mcquitty methodsin order to evaluate the unsupervised learning algorithms we use sensetagged text in these experimentshowever this text is only used to evaluate the accuracy of our methodsthe classes discovered by the unsupervised learning algorithms are mapped 
to dictionary senses in a manner that maximizes their agreement with the sensetagged textif the sensetagged text were not available as would often be the case in an unsupervised experiment this mapping would have to be performed manuallythe words disambiguated and their sense distributions are shown in figure 3all data with the exception of the data for line come from the acldci wall street journal corpus with the exception of line each ambiguous word is tagged with a single sense defined in the longman dictionary of contemporary english the data for the 12 words tagged using ldoce senses are described in more detail in the line data comes from both the acldci wsj corpus and the american printing house for the blind corpuseach occurrence of line is tagged with a single sense defined in wordnet this data is described in more detail in every experiment utilizes all of the sentences available for each wordthe number of sentences available per word is shown as quottotal countquot in figure 3we have reduced the sense inventory of these words so that only the two or three most frequent senses are included in the text being disambiguatedfor several of the words there are minority senses that form a very small percentage of the total samplesuch minority classes are not yet well handled by unsupervised techniques therefore we do not consider them in this studywe define three different feature sets for use in these experimentsour objective is to evaluate the effect that different types of features have on the accuracy of unsupervised learning algorithms such as those discussed herewe are particularly interested in the impact of the overall dimensionality of the feature space and in determining how indicative different feature types are of word sensesour feature sets are composed of various combinations of the following five types of featuresmorphology the feature m represents the morphology of the ambiguous wordfor nouns m is binary indicating singular or pluralfor verbs the value of m indicates the tense of the verb and can have up to 7 possible valuesthis feature is not used for adjectivesadjective senses chief highest in rank 86 most important main 14 common 84 as in the phrase common stock belonging to or shared by 2 or more 8 happening often usual 8 last 94 on the occasion nearest in the past after all others 6 public 68 concerning people in general concerning the government and people 19 not secret or private 13 noun senses bill 68 a proposed law under consideration a piece of paper money or treasury bill 22 a list of things bought and their price 10 concern 64 a business firm worry anxiety 36 drug 57 a medicine used to make medicine a habitforming substance 43 interest 59 money paid for the use of money a share in a company or business 24 readiness to give attention 17 line 37 a wire connecting telephones a cord cable 32 an orderly series 30 verb senses agree 74 to concede after disagreement to share the same opinion 26 close 77 to end to stop operation 23 help 78 to enhance inanimate object to assist human object 22 include 91 to contain in addition to other parts to be a part of human subject 9 partofspeech features of the form p li represent the partofspeech of the word i positions to the left of the ambiguous wordpr represents the pos of the word i positions to the rightin these experiments we used 4 pos features pli pl2 pri and pr2 to record the pos of the words 1 and 2 positions to the left and right of the ambiguous wordeach pos feature can have one of 5 possible values noun verb 
adjective adverb or othercooccurrences features of the form ci are binary cooccurrence featuresthey indicate the presences or absences of a particular content word in the same sentence as the ambiguous wordwe use 3 binary cooccurrence features c1 c2 and c3 to represent the presences or absences of each of the three most frequent content words c1 being the most frequent content word c2 the second most frequent and c3 the thirdonly sentences containing the ambiguous word were used to establish word frequenciesfrequency based features like this one contain little information about low frequency classesfor words with skewed sense distribution it is likely that the most frequent content words will be associated only with the dominate senseas an example consider the 3 most frequent content words occurring in the sentences that contain chief officer executive and presidentchief has a majority class distribution of 86 and not surprisingly these three content words are all indicative of the dominate sense which is quothighest in rankquotthe set of content words used in formulating the cooccurrence features are shown in figure 4note that million and company occur frequentlythese are not likely to be indicative of a particular sense but more reflect the general nature of the wall street journal corpusunrestricted collocations features of the form ul and uri indicate the word occurring in the position i places to the left or right respectively of the ambiguous wordall features of this form have 21 possible valuesnineteen correspond to the 19 most frequent words that occur in that fixed position in all of the sentences that contain the particular ambiguous wordthere is also a value that indicates when the position i to the left or right is occupied by a word that is not among the 19 most frequent and a value indicating that the position i to the left or right falls outside of the sentence boundaryin these experiments we use 4 unrestricted collocation features ul2 uli uri and ur2as an example the values of these features for concern are as follows content collocations features of the form cli and cr1 indicate the content word occurring in the position 1 place to the left or right respectively of the ambiguous wordthe values of these features are defined much like the unrestricted collocations above except that these are restricted to the 19 most frequent content words that occur only one position to the left or right of the ambiguous wordto contrast this set of features with the unrestricted collocations consider concern againthe values of the features representing the 19 most frequent content words 1 position to the left and right are as follows feature sets a b and c the 3 feature sets used in these experiments are designated a b and c and are formulated as follows the dimensionality is the number of possible combinations of feature values and thus the size of the feature spacethese values vary since the number of possible values for m varies with the partofspeech of the ambiguous wordthe lower number is associated with adjectives and the higher with verbsto get a feeling for the adequacy of these feature sets we performed supervised learning experiments with the interest data using the naive bayes modelwe disambiguated 3 senses using a 101 trainingto test ratiothe average accuracies for each feature set over 100 random trials were as follows a 809 b 877 and c 827the window size the number of values for the pos features and the number of words considered in the collocation features are kept 
deliberately small in order to control the dimensionality of the problemin future work we will expand all of the above types of features and employ techniques to reduce dimensionality along the lines suggested in and figure 5 shows the average accuracy and standard deviation of disambiguation over 25 random trials for each combination of word feature set and learning algorithmthose cases where the average accuracy of one algorithm for a particular feature set is significantly higher than another algorithm as judged by the ttest are shown in bold facefor each word the most accurate overall experiment and those that are not significantly less accurate are underlinedalso included in figure 5 is the percentage of each sample that is composed of the majority sensethis is the accuracy that can be obtained by a majority classifier a simple classifier that assigns each ambiguous word to the most frequent sense in a samplehowever bear in mind that in unsupervised experiments the distribution of senses is not generally knownperhaps the most striking aspect of these results is that across all experiments only the nouns are disambiguated with accuracy greater than that of the majority classifierthis is at least partially explained by the fact that as a class the nouns have the most uniform distribution of sensesthis point will be elaborated on in section 61while the choice of feature set impacts accuracy overall it is only to a small degreewe return to this point in section 62the final result to be discussed in section 63 is that the differences in the accuracy of these three algorithms are statistically significant both on average and for individual wordsextremely skewed distributions pose a challenging learning problem since the sample contains precious little information regarding minority classesthis makes it difficult to learn their distributions without prior knowledgefor unsupervised approaches this problem is exacerbated by the difficultly in distinguishing the characteristics of the minority classes from noisein this study the accuracy of the unsupervised algorithms was less than that of the majority classifier in every case where the percentage of the majority sense exceeded 68however in the cases where the performance of these algorithms was less than that of the majority classifier they were often still providing high accuracy disambiguation clearly the distribution of classes is not the only factor affecting disambiguation accuracy compare the performance of these algorithms on bill and public which have roughly the same class distributionsit is difficult to quantify the effect of the distribution of classes on a learning algorithm particularly when using naturally occurring datain previous unsupervised experiments with interest using a modified version of feature set a we were able to achieve an increase of 36 percentage points over the accuracy of the majority classifier when the 3 classes were evenly distributed in the sample here our best performance using a larger sample with a natural distribution of senses is only an increase of 20 percentage points over the accuracy of the majority classifierbecause skewed distributions are common in lexical work they are an important consideration in formulating disambiguation experimentsin future work we will investigate procedures for feature selection that are more sensitive to minority classesreliance on frequency based features as used in this work means that the more skewed the sample is the more likely it is that the features will be 
indicative of only the majority classdespite varying the feature sets the relative accuracy of the three algorithms remains rather consistentfor 6 of the 13 words there was a single algorithm that was always significantly more accurate than the other two across all featuresthe them algorithm was most accurate for last and line with all three feature setsmcquitty method was significantly more accurate for chief common public and help regardless of the feature setdespite this consistency there were some observable trends associated with changes in feature setfor example mcquitty method was significantly more accurate overall in combination with feature set c while the them algorithm was more accurate with feature set a and the accuracy of ward method was the least favorable with feature set bfor the nouns there was no significant difference between feature sets a and b when using the them algorithmfor the verbs there was no significant difference between the three feature sets when using mcquitty methodthe adjectives were significantly more accurate when using mcquitty method and feature set c one possible explanation for the consistency of results as feature sets varied is that perhaps the features most indicative of word senses are included in all the sets due to the selection methods and the commonality of feature typesthese common features may be sufficient for the level of disambiguation achieved herethis explanation seems more plausible for the them algorithm where features are weighted but less so for mcquitty and ward which use a representation that does not allow feature weightingbased on the average accuracy over partofspeech categories the them algorithm performs with the highest accuracy for nouns while mcquitty method performs most accurately for verbs and adjectivesthis is true regardless of the feature set employedthe standard deviations give an indication of the effect of ties on the clustering algorithms and the effect of the random initialization on the the them algorithmin few cases is the standard deviation very smallfor the clustering algorithms a high standard deviation indicates that ties are having some effect on the cluster analysisthis is undesirable and may point to a need to expand the feature set in order to reduce tiesfor the them algorithm a high standard deviation means that the algorithm is not settling on any particular maximaresults may become more consistent if the number of parameters that must be estimated was reducedfigures 6 7 and 8 show the confusion matrices associated with the disambiguation of concern inleresi and help using feature sets a b and c respectivelya confusion matrix shows the number of cases where the sense discovered by the algorithm agrees with the manually assigned sense along the main diagonal disagreements are shown in the rest of the matrixin general these matrices reveal that both the them algorithm and ward method are more biased toward balanced distributions of senses than is mcquitty methodthis may explain the better performance of mcquitty method in disambiguating those words with the most skewed sense distributions the adjectives and adverbsit is possible to adjust the them algorithm away from this tendency towards discovering balanced distributions by providing prior knowledge of the expected sense distributionthis will be explored in future workbruce 1997ahowever all of these methods require that manually sense tagged text be available to train the algorithmfor most domains such text is not available and is expensive to 
createit seems more reasonable to assume that such text will not bootstrapping approaches require a small amount of disambiguated text in order to initialize the unsupervised learning algorithman early example of such an approach is described in a supervised learning algorithm is trained with a small amount of manually sense tagged text and applied to a held out test setthose examples in the test set that are most confidently disambiguated are added to the training samplea more recent bootstrapping approach is described in this algorithm requires a small number of training examples to serve as a seedthere are a variety of options discussed for automatically selecting seeds one is to identify collocations that uniquely distinguish between sensesfor plant the collocations manufacturing plant and living plant make such a distinctionbased on 106 examples of manufacturing plant and 82 examples of living plant this algorithm is able to distinguish between two senses of plant for 7350 examples with 97 percent accuracyexperiments with 11 other words using collocation seeds result in an average accuracy of 96 percentwhile does not discuss distinguishing more than 2 senses of a word there is no immediate reason to doubt that the quotone sense per collocationquot rule would still hold for a larger number of sensesin future work we will evaluate using the quotone sense per collocationquot rule to seed our various methodsthis may help in dealing with very skewed distributions of senses since we currently select collocations based simply on frequencyclustering has most often been applied in natural language processing as a method for inducing syntactic or semantically related groupings of words an early application of clustering to wordsense disambiguation is described in there words are represented in terms of the cooccurrence statistics of four letter sequencesthis representation uses 97 features to characterize a word where each feature is a linear combination of letter fourgrams formulated by a singular value decomposition of a 5000 by 5000 matrix of letter fourgram cooccurrence frequenciesthe weight associated with each feature reflects all usages of the word in the samplea context vector is formed for each occurrence of an ambiguous word by summing the vectors of the contextual words the set of context vectors for the word to be disambiguated are then clustered and the clusters are manually sense taggedthe features used in this work are complex and difficult to interpret and it is not clear that this complexity is required compares his method to and shows that for four words the former performs significantly better in distinguishing between two sensesother clustering approaches to wordsense disambiguation have been based on measures of semantic distance defined with respect to a semantic network such as wordnetmeasures of semantic distance are based on the path length between concepts in a network and are used to group semantically similar concepts provides an information theoretic definition of semantic distance based on wordnet apply another clustering approach to wordsense disambiguation they use cooccurrence data gathered from the machinereadable version of ldoce to define neighborhoods of related wordsconceptually the neighborhood of a word is a type of equivalence classit is composed of all other words that cooccur with the designated word a significant number of times in the ldoce sense definitionsthese neighborhoods are used to increase the number of words in the ldoce sense definitions 
while still maintaining some measure of lexical cohesionthe quotexpandedquot sense definitions are then compared to the context of an ambiguous word and the sensedefinition with the greatest number of word overlaps with the context is selected as correct propose that neighborhoods be subject dependentthey suggest that a word should potentially have different neighborhoods corresponding to the different ldoce subject codesubjectspecific neighborhoods are composed of words having at least one sense marked with that subject codethe only other application of the them algorithm to wordsense disambiguation is described in there the them algorithm is used as part of a supervised learning algorithm to distinguish city names from people namesa narrow window of context one or two words to either side was found to perform better than wider windowsthe results presented are preliminary but show an accuracy percentage in the midnineties when applied to dixon a name found to be quite ambiguousit should be noted that the them algorithm relates to a large body of work in speech processingthe baumwelch forwardbackward algorithm is a specialized form of the them algorithm that assumes the underlying parametric model is a hidden markov modelthe baumwelch forward backward algorithm has been used extensively in speech recognition supervised learning approaches to wordsense disambiguation fall victim to the knowledge acquisition bottleneckthe creation of sense tagged text sufficient to serve as a training sample is expensive and time consumingthis bottleneck is eliminated through the use of unsupervised learning approaches which distinguish the sense of a word based only on features that can be automatically identifiedin this study we evaluated the performance of three unsupervised learning algorithms on the disambiguation of 13 words in naturally occurring textthe algorithms are mcquitty similarity analysis ward minimumvariance method and the them algorithmour findings show that each of these algorithms is negatively impacted by highly skewed sense distributionsour methods and feature sets were found to be most successful in disambiguating nouns rather than adjectives or verbsoverall the most successful of our procedures was mcquitty similarity analysis in combination with a high dimensional feature setin future work we will investigate modifications of these algorithms and feature set selection that are more effective on highly skewed sense distributions
W97-0322
Distinguishing word senses in untagged text. This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high-dimensional feature set. We propose a similarity-based discrimination approach that computes similarity between each pair of instances of the target word.
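For the two agglomerative methods named in this summary, off-the-shelf hierarchical clustering can reproduce the overall workflow. The sketch below is an approximation, not the paper's code: it uses scipy's "ward" linkage for Ward's minimum-variance method and the "weighted" (WPGMA) linkage, which is the method commonly identified with McQuitty's similarity analysis, and it maps discovered clusters onto gold senses with a maximum-agreement assignment, which is one reasonable reading of the mapping step described above. The random binary features in the usage example are placeholders.

```python
# Sketch: cluster instance feature vectors, then map clusters to senses
# so that agreement with the sense-tagged evaluation data is maximized.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.optimize import linear_sum_assignment

def cluster_and_score(features, gold_senses, n_senses, method="ward"):
    """features: (N, D) array; gold_senses: (N,) int labels used only for scoring."""
    Z = linkage(features, method=method)               # "ward" or "weighted" (WPGMA/McQuitty)
    clusters = fcluster(Z, t=n_senses, criterion="maxclust") - 1
    # Contingency table, then the cluster-to-sense mapping that maximizes agreement.
    table = np.zeros((n_senses, n_senses), dtype=int)
    for c, s in zip(clusters, gold_senses):
        table[c, s] += 1
    rows, cols = linear_sum_assignment(-table)          # maximize matched counts
    mapping = dict(zip(rows, cols))
    mapped = np.array([mapping[c] for c in clusters])
    accuracy = (mapped == gold_senses).mean()
    return mapped, accuracy

# Hypothetical usage with binary-coded contextual features for one target word.
rng = np.random.default_rng(1)
feats = rng.integers(0, 2, size=(40, 12)).astype(float)
gold = rng.integers(0, 2, size=40)
_, acc_ward = cluster_and_score(feats, gold, n_senses=2, method="ward")
_, acc_mcq = cluster_and_score(feats, gold, n_senses=2, method="weighted")
```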
From discourse structures to text summaries. We describe experiments that show that the concepts of rhetorical analysis and nuclearity can be used effectively for determining the most important units in a text. We show how these concepts can be implemented, and we discuss results that we obtained with a discourse-based summarization program. 1 Motivation. The evaluation of automatic summarizers has always been a thorny problem: most papers on summarization describe the approach that they use and give some "convincing" samples of the output. In very few cases is the direct output of a summarization program compared with a human-made summary or evaluated with the help of human subjects; usually, the results are modest. Unfortunately, evaluating the results of a particular implementation does not enable one to determine what part of the failure is due to the implementation itself and what part to its underlying assumptions. The position that we take in this paper is that, in order to build high-quality summarization programs, one needs to evaluate not only a representative set of automatically generated outputs but also the adequacy of the assumptions that these programs use. That way, one is able to distinguish the problems that pertain to a particular implementation from those that pertain to the underlying theoretical framework, and to explore new ways to improve each. With few exceptions, automatic approaches to summarization have primarily addressed possible ways to determine the most important parts of a text (see for an excellent overview). Determining the salient parts is considered to be achievable because one or more of the following assumptions hold: important sentences in a text contain words that are used frequently; important sentences contain words that are used in the title and section headings; important sentences are located at the beginning or end of paragraphs; important sentences are located at positions in a text that are genre dependent, and these positions can be determined automatically through training techniques; important sentences use bonus words such as "greatest" and "significant" or indicator phrases such as "the main aim of this paper" and "the purpose of this article", while non-important sentences use stigma words such as "hardly" and "impossible"; important sentences and concepts are the highest connected entities in elaborate semantic structures; and important and non-important sentences are derivable from a discourse representation of the text. In determining the words that occur most frequently in a text, or the sentences that use words that occur in the headings of sections, computers are accurate tools. However, in determining the concepts that are semantically related or the discourse structure of a text, computers are no longer so accurate; rather, they are highly dependent on the coverage of the linguistic resources that they use and the quality of the algorithms that they implement. Although it is plausible that elaborate cohesion- and coherence-based structures can be used effectively in summarization, we believe that before building summarization programs, we should determine the extent to which these assumptions hold. In this paper, we describe experiments that show that the concepts of rhetorical analysis and nuclearity can be used effectively for determining the most important units in a text. We show how these concepts were implemented, and we discuss results that we obtained with a discourse-based summarization program. 2 From discourse trees to summaries: an empirical view. Researchers in computational linguistics have long speculated that the nuclei that pertain to a rhetorical structure tree (RS-tree) constitute an adequate summarization of the text for which that RS-tree was built. However, to our knowledge, there was no experiment to confirm how valid this speculation really is. In what follows, we describe an experiment that shows that there exists a strong correlation between the nuclei of the RS-tree of a text and what readers perceive to be
the most important units in a text we know from the results reported in the psychological literature on summarization that there exists a certain degree of disagreement between readers with respect to the importance that they assign to various textual units and that the disagreement is dependent on the quality of the text and the comprehension and summarization skills of the readers in an attempt to produce an adequate reference set of data we selected for our experiment five texts from scientific american that we considered to be wellwritten the texts ranged in size from 161 to 725 words we used square brackets to enclose the minimal textual units of each text overall the five texts were broken into 160 textual units with the shortest text being broken into 18 textual units and the longest into 70 the shortest text is given in below sun than earth 21 and slim atmospheric blanket3 mars experiences frigid weather conditions 4 surface temperatures typically average about 60 degrees celsius at the equator5 and can dip to 123 degrees c near the poles only the midday sun at tropical latitudes is warm enough to thaw ice on occasion but any hquid water formed in this way would evaporate almost nistantlyg because of the low atmospheric pressure 9 although the atmosphere holds a small amount of water19 and waterice clouds sometimes develop11 most martian weather involves blowing dust or carbon dioxide 12 each winter for example a blizzard of frozen carbon dioxide rages over one polei3 and a few meters of this dryice snow accumulatell as previously frozen carbon dioxide evaporates from the opposite polar cap is yet even on the summer pole16 where the sun remains in the sky all day 1ong171 strategy and asked 13 independent judges to rate each textual unit according to its importance to a potential summary the judges used a threepoint scale and assigned a score of 2 to the units that they believed to be very important and should appear in a concise summary i to those they considered moderately important which should appear in a long summary and 0 to those they considered unimportant which should not appear in any summary the judges were instructed that there were no nght or wrong answers and no upper or lower bounds with respect to the number of textual units that they should select as being important or moderately important the judges were all graduate students in computer science we assumed that they had developed adequate comprehension and summanzauon skills on their own so no training session was carried out table 1 presents the scores that were assigned by each judge to the units in text the same texts were also given to two computational linguists with solid knowledge of rhetorical structure theory the analysts were asked to build one rstree for each text we took then the rstrees built by the analysts and used our formalization of rst to associate with each node in a tree its salient units the salient units were computed recursively associating with each leaf in an rstree the leaf itself and to each internal node the salient units of the nucleus or nuclei of the rhetorical relation corresponding to that node we then computed for each textual unit a score depending on the depth in the tree where it occurred as a salient unit the textual units that were salient units of the top nodes in a tree had a higher score than those that were salient units of the nodes found at the bottom of a tree essentially from a rhetorical structure tree we derived an importance score for each textual unit the 
importance scores ranged from 0 to n where n was the depth of the rstreet table 1 presents the scores that were derived from the rstrees that were built by each analyst for text overall agreement among judgeswe measured the ability of judges to agree with one another using the notion of percent agreement that was defined by gale and used extensively in discourse segmentation studies percent agreement reflects the ratio of observed agreements with the majority opinion to possible agreements with the majority opinion the percent agreements computed for each of the five texts and each level of importance are given in table 2 the agreements among judges for our experiment seem to follow the same pattern as those described by other researchers in summarization that is the judges are quite consistent with respect to what they perceive as being very important and unimportant but less consistent with respect to what they perceive as being less important in contrast with the agreement observed among judges the percentage agreements computed for 1000 importance assignments that were randomly generated for the same texts followed a normal distribution with p 47 31o 004 these results suggest that the agreement among judges is significant agreement among judges with respect to the importance of each textual unitwe considered a textual unit to be labeled consistently if a simple majority of the judges assigned the same score to that unit overection 32 gives an example of how the importance scores were computed all the judges labeled consistently 140 of the 160 textual units in contrast a set of 1000 randomly generated importance scores showed agreement on average for only 50 of the 160 textual units p1ng0 05 the judges consistently labeled 36 of the units as very important 8 as less important and 96 as unimportant they were inconsistent with respect to 20 textual units for example for text the judges consistently labeled units 4 and 12 as very important units 5 and 6 as less iraportant units 1 2 3 7 8 9 10 11 13 14 15 17 as unimportant and were inconsistent in labeling unit 18 if we compute percent agreement figures only for the textual units for which at least 7 judges agreed we get 69 for the units considered very important 63 for those considered less important and 77 for those considered unimportant the overall percent agreement in this case is 75 statistical significanceit has often been emphasized that agreement figures of the kinds computed above could be misleading since the quottruequot set of important textual units cannot be independently known we cannot compute how valid the importance assignments of the judges were moreover although the agreement figures that would occur by chance offer a strong indication that our data are reliable they do not provide a precise measurement of reliability to compute a reliability figure we followed the same methodology as passonneau and litman and hearst and applied the cochran q summary statistics to our data cochran test assumes that a set of judges make binary decisions with respect to a dataset the null hypothesis is that the number of judges that take the same decision is randomly distributed since cochran test is appropriate only for binary judgments and since our main goal was to determine a reliability figure for the agreement among judges with respect to what they believe to be important we evaluated two versions of the data that reflected only one importance level in the first version we considered as being important the judgments with a sarre of 
2 and unimportant the judgments with a score of 0 and 1 in the second version we considered as being important the judgments with a score of 2 and 1 and unimportant the judgments with a score of 0 essentially we mapped the judgment matrices of each of the five texts into matrices whose elements ranged over only two values 0 and 1 after these modifications were made we computed for each version and each text the cochran statistics q which approximates the belong to this reference set agreement between analystsonce we determined the set of textual units that the judges believed to be important we needed to determine the agreement between the analysts who built the discourse trees for the five texts because we did not know the distribution of the importance scores derived from the discourse trees we computed the correlation between the analysts by applying spearman correlation coefficient on the scores associated to each textual unit we interpreted these scores as ranks on a scale that measures the importance of the units in a text the spearman rank correlation coefficient is an alternative to the usual correlation coefficient it is based on the ranks of the data and not on the data itself so is resistant to outliers the null hypothesis tested by the spearman coefficient is that two variables are independent of each other against the alternative hypothesis that the rank of a variable is correlated with the rank of another variable the value of the statistics ranges from 1 indicating that high ranks of one variable occur with low ranks of the other variable through 0 indicating no correlation between the vanables to 1 indicating that high ranks of one variable occur with high ranks of the other callable the spearman correlation coefficient between the ranks assigned for each textual unit on the bases of the rstrees built by the two analysts was very high 0793 at the p 0 0001 level of significance the differences between the two analysts caine mainly from their interpretations of two of the texts the rstrees of one analyst mirrored the paragraph structure of the texts while the rstrees of the other mirrored a logical organization of the text which that analyst believed to be important agreement between the analysts and the judges with respect to the most important textual unitsin order to determine whether there exists any correspondence between what readers believe to be important and the nuclei of the rstrees we selected from each of the five texts the set of textual units that were labeled as quotvery importantquot by a majority of the judges for example for text we selected units 4 and 12 ic 11 of the units overall the judges selected 36 units as being very important which is approximately 22 of the units in a text the percentages of important units for the five texts were 113635 17 and 22 respectively we took the maximal scores computed for each textual unit from the rstrees built by each analyst and selected a percentage of units that matched the percentage of important units selected by the judges in the cases in which there were ties we selected a percentage of units that was closest to the one computed for the judges for example we selected units 4 and 12 which represented the most important 11 of units as induced from the rstree btult by the first analyst however we selected only unit 4 which represented 6 of the most important units as induced from the rstree built by the second analyst the reason for selecting only unit 4 for the second analyst was that units 1011 and 12 have the 
same score 4 if we had selected units 10 11 and 12 as well we would have ended up selecting 22 of the units in text which is farther from 11 than 6 hence we determined for each text the set of important units as labeled by judges and as derived from the rstrees of those texts we calculated for each text the recall and precision of the important units derived from the rstrees with respect to the units labeled important by the judges the overall recall and precision was the same for both analysts 56 recall and 66 precision in contrast the average recall and precision for the same percentages of units selected randomly 1000 times from the same five texts were both 25 7 a 0 059 in summarizing text it is often useful to consider not only clauses but full sentences to account for this we considered to be important all the textual units that pertained to a sentence that was characterized by at least one important textual unit for example we labeled as important textual units 1 to 4 in text because they make up a full sentence and because unit 4 was labeled as important for the adjusted data we determined again the percentages of important units for the five texts and we recalculated the mall and precision for both analysts the recall was 69 and 66 and the precision 82 and 75 respectively in contrast the average recall and precision for the same percentages of units selected randomly 1000 times from the same five texts were 38 4 a 0 048 these results confirm that there exists a strong correlation between the nuclei of the rstrees that pertain to a text and what readers perceive as being important in that text given the values of recall and precision that we obtained it is plausible that an adequate computational treatment of discourse theories would provide most of what is needed for selecting accurately the important units in a text however the results also suggest that rst by itself is not enough if one wants to strive for perfection the above results not only provide strong evidence that discourse theories can be used effectively for text summarization but also enable one to derive strategies that an automatic summarizer aught follow for example the spearman correlation coefficient between the judges and the first analyst the one who did not follow the paragraph structure was lower than the one between the judges and the second analyst it follows that most human judges are inclined to use the paragraph breaks as valuable sources of information when they interpret discourse if the atm of a summarization program is to mimic human behavior it seems adequate for the program to take advantage of the paragraph structure of the texts that it analyzes currently the rank assignment for each textual unit in an rstree is done entirely on the basis of the maximal depth in the tree where that unit is salient our data seem to support the fact that there exists a correlation also between the types of relations that are used to connect various textual units and the importance of those units in a text we plan to design other experiments that can provide clearcut evidence on the nature of this correlationour summarization program relies on a rhetorical parser that builds rstrees for unrestricted texts the mathematical foundations of the rhetorical parsing algonthm rely on a firseorder formalization of valid text structures the assumptions of the formalization are the following 1 the elementary units of complex text structures are nonoverlapping spans of text 2 rhetorical coherence and cohesive relations hold 
between textual units of various sizes 3 relations can be partitioned into two classes paratactic and hypotactic paratactic relations are those that hold between spans of equal importance hypotactic relations are those that hold between a span that is essential for the writer purpose i e a nucleus and a span that increases the understanding of the nucleus but is not essential for the writer purpose i a a satellite 4 the abstract structure of most texts is a binary treelike structure 5 if a relation holds between two textual spans of the tree structure of a text that relation also holds between the most important units of the constituent subspans the most important units of a textual span are determined recursively they correspond to the most important units of the unmediate subspans when the relation that holds between these subspans is paratactic and to the most important units of the nucleus subspan when the relation that holds between the immediate subspans is hypotacuc the rhetoncal parsing algorithm which is outlined in figure 1 is based on a comprehensive corpus analysis of more than 450 discourse markers and 7900 text fragments for details when given a text the rhetorical parser determines first the discourse markers and the elementary units that make up that text the parser uses then the information derived from the corpus analysis in order to hypothesize rhetoncal relations among the elementary units in the end the parser applies a constraintsatisfaction procedure to detemune the text structures that are valid if more than one valid structure is found the parser chooses one that is the quotbestquot according to a given metric the details of the algorithms that input a text t i determine the set d of all discourse markers in t and the set ut of elementary textual units in t 2 hypothesize a set of relations r between the elements of ut 3 determine the set valtrees of all valid rstrees of t that can be built using relations from r 4 determine the quotbestquot rstree in valtrees on the basis of a metric that assigns higher weights to the trees that are more skewed to the right are used by the rethoncal parser are discussed at length in when the rhetorical parser takes text as input it produces the rstree in figure 2 the convention that we use is that nuclei are surrounded by solid boxes and satellites by dotted boxes the links between a node and a subordinate nucleus or nuclei are represented by solid arrows and the links between a node and a subordinate satellite by dotted imes the nodes with only one satellite denote occurrences of parenthetical information for example textual unit 2 is labeled as parenthetical to the textual unit that results from juxtaposing 1 and 3 the numbers associated with each leaf correspond to the numencal labels in text the numbers associated with each internal node correspond to the salient units of that node and are explicitly represented in the rstree by inspecting the rstree in figure 2 one can nonce that the trees that are built by the program do not have the same granularity as the trees constructed by the analysts for example the program treats units 1314 and 15 as one elementary unit however as we argue in the corpus analysis on which our parser is built supports the observation that in most cases the global structure of the rstree is not affected by the inability of the rhetorical parser to uncover all clauses in a text most of the clauses that are not uncovered are nuclei of mir relations the summarization program takes the rstree produced by the 
rhetorical parser and selects the textual units that are most salient in that text if the aim of the program is to produce just a very short summary only the salient units associated with the internal nodes found closer to the root are selected the longer the summary one wants to generate the farther the selected salient units will be from the root in fact one can see that the rstrees built by the rhetoncal parser induce a partial order on the importance of the textual units for text the most important unit is 4 the textual units that are salient in the nodes found one level below represent the next level of importance the next level contains units 56 16 and 18 and so on to evaluate our program we associated with each textual unit in the rstrees built by the rhetorical parser a score in the same way we did for the rstrees built by the analysts for example the rstree in figure 2 has a depth of 6 because unit 4 is salient for the root it gets a score of 6 units 56 are salient for an internal node found two levels below the root therefore their score is 4 unit 9 is salient for a leaf found five levels below the root therefore its score is 1 table 1 presents the scores associated by our summarization program to each unit in text we used the importance scores assigned by our program to compute statistics similar to those discussed in the previous section when the program selected only the textual units with the highest scores in percentages that were equal to those of the judges the recall was 53 and the precision was 50 when the program selected the full sentences that were associated with the most unportant units in percentages that were equal to those of the judges the recall was 66 and the precision 68 the lower recall and precision scores associated with clauses seem to be caused primarily by the difference in granularity with respect to the way the texts were broken into subunits the program does not recover all minimal textual units and as a consequence its assignment of importance scores is coarser when full sentences are considered the judges and the program work at the same level of granularity and as a consequence the summarization results improve significantlywe are not aware of any rstbased summarization program for english however ono et al discuss a summarization program for japanese whose minimal textual units are sentences due to the differences between english and japanese it was impossible for us to compare ono summarizer with ours fundamental differences concerning the assumptions that underlie one work and ours are discussed at length in we were able to obtain only one other program that summarizes english text the one included in the microsoft office97 package we run the microsoft summanzation program on the five texts from scientific american and selected the same percentages of textual units as those considered important by the judges when we selected percentages of text that corresponded only to the clauses considered important by the judges the microsoft program recalled 28 of the units with a precision of 26 when we selected percentages of text that corresponded to sentences considered important by the judges the microsoft program recalled 41 of the units with a precision of 39 al microsoft figures are only slightly above those that correspond to the baseline algontlims that select important units randomly it follows that our program outperforms significantly the one found in the office97 package we are not aware of any other summarization program that can build 
summaries with granularity as fine as a clause we described the first experiment that shows that the concepts of rhetoncal analysis and nucleanty can be used effectively for summarizing text the experiment suggests that discoursebased methods can account for detemmimg the most important units in a text with a recall and precision as high as 70 we showed how the concepts of rhetorical analysis and nucleanty can be treated algorithmically and we compared recall and precision figures of a summarization program that implements these concepts with recall and precision figures that pertain to a baseline algorithm and to a commercial system the microsoft office97 summarizer the discoursebased summanzation program that we propose outperforms both the baseline and the commercial summarizer however since its results do not match yet the recall and precision figures that pertain to the manual discourse analyses it is likely that improvements of the rhetorical parser algorithm will result in better performance of subsequent implemetationsthe invaluable help he gave me during every stage of this work and to marilyn mantel david mitchell kevin schlueter and melanie baliko for their advice on expenmental design and statistics i am also grateful to marzena makuta for her help with the rst analyses and to my colleagues and friends who volunteered to act as judges in the experiments described here this reasearch was supported by the natural sciences and engineenng research council of canada
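The salience-promotion and depth-based scoring scheme described in this paper can be sketched as follows. The RS-tree encoding (a node with children tagged as nucleus or satellite), the class names, and the toy tree are assumptions for illustration, not the rhetorical parser's own data structures. Each unit's score is the tree depth minus the shallowest level at which the unit appears as a salient unit, so units promoted to the root score highest, matching the example scores discussed above.

```python
# Sketch: promote salient units up an RS-tree, then score units by how close
# to the root they remain salient.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class RSNode:
    unit: int | None = None                  # set for leaves (elementary textual units)
    children: list[tuple[RSNode, str]] = field(default_factory=list)  # (child, "N" or "S")
    salient: set[int] = field(default_factory=set)

def promote(node: RSNode) -> set[int]:
    """A leaf promotes itself; an internal node promotes the salient units of its nucleus/nuclei."""
    if node.unit is not None:
        node.salient = {node.unit}
    else:
        node.salient = set()
        for child, role in node.children:
            child_salient = promote(child)    # recurse into satellites too, so they get scored
            if role == "N":
                node.salient |= child_salient
    return node.salient

def importance_scores(root: RSNode) -> dict[int, int]:
    """Score each unit as tree depth minus the shallowest depth at which it is salient."""
    promote(root)
    best_depth, max_depth = {}, 0
    stack = [(root, 0)]
    while stack:
        node, d = stack.pop()
        max_depth = max(max_depth, d)
        for u in node.salient:
            best_depth[u] = min(best_depth.get(u, d), d)
        stack.extend((child, d + 1) for child, _ in node.children)
    return {u: max_depth - d for u, d in best_depth.items()}

# Tiny hypothetical tree: unit 2 elaborates on unit 1; unit 3 is a satellite of the root.
leaf1, leaf2, leaf3 = RSNode(unit=1), RSNode(unit=2), RSNode(unit=3)
inner = RSNode(children=[(leaf1, "N"), (leaf2, "S")])
root = RSNode(children=[(inner, "N"), (leaf3, "S")])
print(importance_scores(root))   # -> {1: 2, 3: 1, 2: 0}: unit 1 is most important
```

A summary of a given length can then be produced by selecting units (or the full sentences containing them) in decreasing order of score.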
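A companion sketch of the evaluation arithmetic used above: percent agreement of judges with the majority opinion, recall and precision of the units selected from an RS-tree against the units a majority of judges marked as very important, and a Spearman rank correlation between two sets of unit scores. The tie handling, the seven-of-thirteen majority threshold, and the random toy data are simplifications and assumptions, not the paper's exact procedure.

```python
# Sketch of percent agreement, recall/precision against judge labels, and Spearman correlation.
import numpy as np
from scipy.stats import spearmanr

def percent_agreement(scores):
    """scores: (n_judges, n_units) matrix of 0/1/2 importance labels."""
    agree = possible = 0
    for unit in scores.T:
        _, counts = np.unique(unit, return_counts=True)
        agree += counts.max()            # judgments matching the majority opinion
        possible += len(unit)
    return agree / possible

def recall_precision(selected, important):
    """selected, important: sets of unit ids."""
    tp = len(selected & important)
    recall = tp / len(important) if important else 0.0
    precision = tp / len(selected) if selected else 0.0
    return recall, precision

# Hypothetical data: 13 judges, 6 units, plus RS-tree-derived scores for the same units.
rng = np.random.default_rng(0)
judge_scores = rng.integers(0, 3, size=(13, 6))
majority_important = {u for u in range(6)
                      if np.bincount(judge_scores[:, u], minlength=3)[2] > 13 // 2}
tree_scores = rng.integers(0, 7, size=6)            # importance derived from an RS-tree
k = max(len(majority_important), 1)
tree_selected = set(np.argsort(-tree_scores)[:k])   # select the same number of units
print(percent_agreement(judge_scores))
print(recall_precision(tree_selected, majority_important))
rho, p = spearmanr(tree_scores, rng.integers(0, 7, size=6))   # analyst-vs-analyst style check
```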
W97-0713
From discourse structures to text summaries. We describe experiments that show that the concepts of rhetorical analysis and nuclearity can be used effectively for determining the most important units in a text. We show how these concepts can be implemented, and we discuss results that we obtained with a discourse-based summarization program.
germanet a lexicalsemantic net for german we present the lexicalsemantic net for german quotgermanetquot which integrates conceptual ontological information with lexical semantics within and across word classes it is compatible with the princeton wordnet but integrates principlebased modifications on the constructional and organizational level as well as on the level of lexical and conceptual relations germanet includes a new treatment of regular polysemy artificial concepts and of particle verbs it furthermore encodes crossclassification and basic syntactic information constituting an interesting tool in exploring the interacof syntax and development of such a large scale resource is particularly important as german up to now lacks basic online tools for the semantic exploration of very large corpora germanet is a broadcoverage lexicalsemantic net for german which currently contains some 16000 words and aims at modeling at least the base vocabulary of germanit can be thought of as an online ontology in which meanings associated with words are grouped according to their semantic relatednessthe basic framework of germanet is similar to the princeton wordnet guaranteeing maximal compatibilitynevertheless some principlebased modifications have been appliedgermanet is built from scratch which means that it is neither a translation of the english wordnet nor is it based on a single dictionary or thesaurusthe development of a german wordnet has the advantage that the applications developed for english using wordnet as a resource can be used for german with only minor modificationsthis affects for example information extraction automatic sense disambiguation and intelligent document retrievalfurthermore germanet can serve as a training source for statistical methods in natural language processing and it makes future integration of german in multilingual resources such as eurowordnet possiblethis paper gives an overview of the resource situation followed by sections on the coverage of the net and the basic relations used for linkage of lexical and conceptual itemsthe main part of the paper is concerned with the construction principles of germanet and particular features of each of the word classesin english a variety of largescale online linguistic resources are availablethe application of these resources is essential for various nlp tasks in reducing time effort and error rate as well as guaranteeing a broader and more domainindependent coveragethe resources are typically put to use for the creation of consistent and large lexical databases for parsing and machine translation as well as for the treatment of lexical syntactic and semantic ambiguityfurthermore linguistic resources are becoming increasingly important as training and evaluation material for statistical methodsin german however not many largescale monolingual resources are publically available which can aid the building of a semantic netthe particular resource situation for german makes it necessary to rely to a large extent on manual labour for the creation process of a wordnet based on monolingual general and specialist dictionaries and literature as well as comparisons with the english wordnethowever we take a strongly corpusbased approach by determining the base vocabulary modeled in germanet by lemmatized frequency lists from text corporalthis list is further tuned by using other available sources such as the celex german databaseclustering methods which in principle can apply to large corpora without requiring any further 
information in order to give similar words as output proved to be interesting but not helpful for the construction of the core netselectional restrictions of verbs for nouns will however be automatically extracted by clustering methodswe use the princeton wordnet technology for the database format database compilation as well as the princeton wordnet interface applying extensions only where necessarythis results in maximal compatibilitygertnanet shares the basic database division into the four word classes noun adjective verb and adverb with wordnet although adverbs are not implemented in the current working phasefor each of the word classes the semantic space is divided into some 15 semantic fieldsthe purpose of this division is mainly of an organizational nature it allows to split the work into packagesnaturally the semantic fields are closely related to major nodes in the semantic networkhowever they do not have to agree completely with the net toplevel ontology since a lexicographer can always include relations across these fields and the division into fields is normally not shown to the user by the interface softwaregermanet only implements lemmaswe assume that inflected forms are mapped to base forms by an external morphological analyzer in general proper names and abbreviations are not integrated even though the lexicographer may do so for important and frequent casesfrequency counts from text corpora serve as a guideline for the inclusion of lemmasin the current version of the database multiword expressions are only covered occasionaly for proper names and terminological expressions derivates and a large number of high frequent german compounds are coded manually making frequent use 1we have access to a large tagged and lemmatized online corpus of 60000000 words comprising the ec1corpus and the tibinger newskorpus consisting of texts collected in tübingen from electronic newsgroups of crossclassificationan implementation of a more suitable rulebased classification of derivates and the unlimited number of semantically transparent compounds fails due to the lack of algorithms for their sound semantic classificationthe amount of polysemy is kept to a minimunt in germanet an additional sense of a word is only introduced if it conflicts with the coordinates of other senses of the word in the networkwhen in doubt germanet refers to the degree of polysemy given in standard monolingual print dictionariesadditionally germanet makes use of systematic crossclassificationtwo basic types of relations can be distinguished lexical relations which hold between different lexical realizations of concepts and conceptual relations which hold between different concepts in all their particular realizationssynonymy and antonymy are bidirectional lexical relations holding for all word classesall other relations are conceptual relationsan example for synonymy are torkeln and taumeln which both express the concept of the same particular lurching motionan example for antonymy are the adjectives kalt and warm these two relations are implemented and interpreted in gernianet as in wordnetthe relation pertains to relates denominal adjectives with their nominal base deverbal nominalizations with their verbal base and deadjectival nominalizations with their respective adjectival base this pointer is semantic and not morphological in nature because different morphological realizations can be used to denote derivations from different meanings of the same lemma while konventzonal is related to konvention the relation 
of hyponymy holds for all word classes and is implemented in germanet as in wordnet so for example rotkehlchen is a hyponym of vogel meronymy the partwhole relation holds only for nouns and is subdivided into three relations in wordnet germanet however currently assumes only one basic meronymy relationan example for meronytny is arm standing in the naeronymy relation to korper for verbs wordnet makes the assumption that the relation of entailment holds in two different situations in cases of temporal inclusion of two events as in schnarchen entailing schlafen in cases without temporal inclusion as in what fellbaum calls backward presupposition holding between grimyen and versuchen however these two cases are quite distinct from each other justifying their separation into two different relations in germanetthe relation of entailment is kept for the case of backward presuppositionfollowing a suggestion made in eurowordnet we distinguish temporal inclusion by its characteristics that the first event is always a subevent of the second and thus the relation is called subevent relationthe because relation in wordnet is restricted to hold between verbswe extend its coverage to account for resultative verbs by connecting the verb to its adjectival resultative statefor example offnen causes offen seleetional restrictions giving information about typical nominal arguments for verbs and adjectives are additionally implementedthey do not exist in wordnet even though their existence is claimed to be important to fully characterize a verbs lexical behavior these selectional properties will be generated automatically by clustering methods once a sensetagged corpus with germanet classes is availableanother additional pointer is created to account for regular polysemy in an elegant and efficient way marking potential regular polysemy at a very high level and thus avoiding duplication of entries and timeconsuming work as opposed to wordnet connectivity between word classes is a strong point of germanetthis is achieved in different ways the crossclass relations of wordnet are used more frequently certain wordnet relations are modified to cross word classes and new crossclass relations are introduced cross class relations are particularly important as the expression of one concept is often not restricted to a single word classadditionally the final version will contain examples for each concept which are to be automatically extracted from the corpussome of the guiding principles of the germanet ontology creation are different from wordnet and therefore now explainedwordnet does contain artificial concepts that is nonlexicalized conceptshowever they are neither marked nor put to systematic use nor even exactly definedin contrast germanet enforces the systematic usage of artificial concepts and especially marks them by a quotr thus they can be cut out on the interface level if the user wishes sowe encode two different sorts of artificial concepts lexical gaps which are of a conceptual nature meaning that they can be expected to be expressed in other languages and proper artificial concepts 2 advantages of artificial concepts are the avoidance of unmotivated cohyponyms and a systematic structuring of the datasee the following examples in figure 1 noble man is a cohyponym to the other three hyponyms of human even though the first three are related to a certain education and noble man refers to a state a person is in from birth onthis intuition is modeled in figure 2 with the additional artificial concept educated 
humanin figure 3 all concepts except for the leaves are proper artificial conceptsthat is one would not expect any language to explicitly verbalize the concept of for example manner of motion verbs which specify the specific instrument usednevertheless such a structuring is important because it captures semantic intuitions every speaker of german has and it groups verbs according to their semantic relatednesscontrary to wordnet gertnanet enforces the use of crossclassification whenever two conflicting hierarchies applythis becomes important for example in the classification of animals where folk and specialized biological hierarchy compete on a large scaleby crossclassifying between these two hierarchies the taxonomy becomes more accessible and integrates different semantic components which are essential to the meaning of the conceptsfor example in figure 4 the concept of a cat is shown to biologically be a vertebrate and a pet in the folk hierarchy whereas a whale is only a vertebrate and not a petthe concept of crossclassification is of great importance in the verbal domain as well where most concepts have several meaning components according to which they could be classifiedhowever relevant information would be lost if only one particular aspect was chosen with respect to hyponymyverbs of sound for example form a distinct semantic class the members of which differ with respect to additional verb classes with which they crossclassify in english as in germanaccording to levin some can be used as verbs of motion accompanied by sound others as verbs of introducing direct speech or verbs expressing the causation of the emission of a sound systematic crossclassification allows to capture this finegrained distinction easily and in a principlebased waywith respect to nouns the treatment of regular polysemy in germsnet deserves special attentiona number of proposals have been made for the representation of regular polysemy in the lexiconit is generally agreed that a pure sense enumeration approach is not sufficientinstead the different senses of a regularly polysemous word need to be treated in a more principlebased manner germanet is facing the problem that lexical entries are integrated in an ontology with strict inheritance rulesthis implies that any notion of regular polysemy must obey the rules of inheritanceit furthermore prohibits joint polysemous entries with dependencies from applying for only one aspect of a polysemous entrya familiar type of regular polysemy is the quotorganization building it occupiesquot polysernygermanet lists synonyms along with each concepttherefore it is not possible to merge such a type of polysemy into one concept and use crossclassification to point to both institution and building as in figure 5this is only possible if all synonyms of both senses and all their dependent nodes in the hierarchy share the same regular polysemy which is hardly ever the caseto allow for regular polysemy germanet introduces a special bidirectional relator which is placed to the top concepts for which the regular polysemy holds in figure 6 the entry bank may have the synonyms depository financial institution banking concern banking company which are not synonyms of bank2 in addition banki may have hyponyms such as credit union agent bank commercial bank full service bank which do not share the regular polysemy of baraki and bank2statistically frequent cases of regular polysemy are manually and explicitly encoded in the netthis is necessary because they often really are two separate 
concepts and each sense may have different synonyms however the polysemy pointer additionally allows the recognition of statistically infrequent uses of a word sense created by regular polysemyso for example the sentence i had crocodile for hutch is very infrequent in that crocodile is not commonly perceived as meat but only as animalnevertheless we know that a regular polysemy exists between meat and animaltherefore we can reconstruct via the regular polysemy pointer that the meat sense is referred to in this particular sentence even though it is not explicitly encodedthus the pointer can be conceived of as an implementation of a simple default via which the net can account for language productivity and regularity in an effective manneradjectives in germanet are modeled in a taxonornical manner making heavy use of the hyponymy relation which is very different from the satellite approach taken in wordnetour approach avoids the rather fuzzy concept of indirect antonyms introduced by wordnetadditionally we do not introduce artificial antonyms as wordnet does the taxonomical classes follow with an additional class for pertainym83syntactic frames and particle verbs deserve special attention in the verbal domainthe frames used in germanet differ from those in wordnet and particle verbs as such are treated in wordnet at alleach verb sense is linked to one or more syntactic frames which are encoded on a lexical rather than on a conceptual levelthe frames used in germanet are based on the complementation codes provided by celex the notation in germanet differs from the celex database in providing a notation for the subject and a complementation code for obligatory reflexive phrasesgermanet provides frames for verb senses rather than for lemmas implying a full disambiguation of the celex complementation codes for germanetsyntactic information in gerrnanet differs from that given in wordnet in several waysit marks expletive subjects and reflexives explicitly encodes case information which is especially important in german distinguishes between different realizations of prepositional and adverbial phrases and marks toinfinitival as well as pure infinitival complements explicitlyparticles pose a particular problem in germanthey are very productive which would lead to an explosion of entries if each particle verb was explicitly encodedsome particles establish a regular semantic pattern which can not be accounted for by a simple enumeration approach whereas others are very irregular and ambiguouswe therefore propose a mixed approach treating irregular particle verbs by enumeration and regular particle verbs in a compositional mannercomposition can be thought of as a default which can be overwritten by explicit entries in the databasewe assume a morphological component such as gertwol to apply before the compositional process startscomposition itself is implemented as follows relying on a separate lexicon for particlesthe particle lexicon is hierarchically structured and lists selectional restrictions with respect to the base verb selectedan example for the hierarchical structure is given in figure 7 where heraus is a hyponym of her and amsadjectives pertaining to a noun from which they derive their meaning selectional restrictions for particles include aktionsart a particular semantic verb field deictic orientation and directional orientation of the base verbthe evaluation of a particle verb takes the following stepsfirst germanet is searched for an explicit entry of the particle verb if no such entry 
exists the verb is morphologically analyzed and its semantics is compositionally determinedfor example the particle verb herauslaufen in figurer is a hyponym to millen as well as to herauscriteria for a compositional treatment are separability productivity and a regular semantics of the particle stiebels stegmann a wordnet for german has been described which compared with the princeton wordnet integrates principlebased modifications and extensions on the constructional and organizational level as well as on the level of lexical and conceptual relationsinnovative features of germanet are a new treatment of regular polysemy and of particle verbs as well as a principlebased encoding of crossclassification and artificial conceptsas compatibility with the princeton wordnet and eurowordnet is a major construction criteria of germanet german can now finally be integrated into multilingual largescale projects based on ontological and conceptual informationthis constitutes an important step towards the design of truly multilingual tools applicable in key areas such as information retrieval and intelligent internet search engines
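The compositional default for particle verbs described just above (look for an explicit entry first, otherwise decompose the verb and combine the particle's contribution with the base verb, subject to the particle's selectional restrictions) can be pictured as a short procedure. The sketch below is a toy Python illustration, not GermaNet's actual data model or code: the lexicon entries, feature names and the hard-coded morphological split are all invented for the example.

```python
# A toy sketch of the compositional default for particle verbs: look for an
# explicit entry first, otherwise split the verb morphologically and combine
# the base verb with the particle, subject to the particle's selectional
# restrictions. The entries, feature names and the hard-coded split function
# are invented for illustration; they are not GermaNet's actual data model.

EXPLICIT_ENTRIES = {
    # irregular particle verbs are simply enumerated and override composition
    "verstehen": {"hypernyms": ["denken"]},
}

BASE_VERBS = {
    "laufen": {"aktionsart": "activity", "field": "motion"},
}

PARTICLE_LEXICON = {
    # hierarchically structured: 'heraus' is a hyponym of 'her'
    "heraus": {"hypernym_particle": "her",
               "selects": {"field": "motion"},          # selectional restriction
               "adds": {"deixis": "toward_speaker", "direction": "out_of"}},
    "her":    {"hypernym_particle": None,
               "selects": {},
               "adds": {"deixis": "toward_speaker"}},
}

def analyse_particle_verb(verb, split_morphology):
    """Return a coarse semantic description for a particle verb, preferring an
    explicit database entry and falling back to composition."""
    if verb in EXPLICIT_ENTRIES:                         # 1. explicit entry wins
        return EXPLICIT_ENTRIES[verb]
    particle, base = split_morphology(verb)              # 2. morphology (e.g. GERTWOL)
    base_sem, part_sem = BASE_VERBS.get(base), PARTICLE_LEXICON.get(particle)
    if base_sem is None or part_sem is None:
        return None
    if any(base_sem.get(k) != v for k, v in part_sem["selects"].items()):
        return None                                      # 3. restriction violated
    # 4. compose: the particle verb is a hyponym of both the base verb and the particle
    return {"hypernyms": [base, particle], **base_sem, **part_sem["adds"]}

# herauslaufen -> hyponym of laufen and heraus (split hard-coded for the demo)
print(analyse_particle_verb("herauslaufen", lambda v: ("heraus", "laufen")))
```

The point of the default is that only irregular particle verbs need explicit entries; regular formations such as herauslaufen are derived on demand, and an explicit entry can always override the composition.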
W97-0802
germanet a lexicalsemantic net for german we present the lexicalsemantic net for german germanet which integrates conceptual ontological information with lexical semantics within and across word classes it is compatible with the princeton wordnet but integrates principlebased modifications on the constructional and organizational level as well as on the level of lexical and conceptual relations germanet includes a new treatment of regular polysemy artificial concepts and of particle verbs it furthermore encodes crossclassification and basic syntactic information constituting an interesting tool in exploring the interaction of syntax and semantics the development of such a large scale resource is particularly important as german up to now lacks basic online tools for the semantic exploration of very large corpora germanet is a large lexical database where words are associated with pos information and semantic sorts which are organized in a finegrained hierarchy
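The cross-classification mentioned in this abstract, and illustrated earlier with the cat/whale example, amounts to treating hyponymy as a directed acyclic graph rather than a tree, so one concept may inherit from both the biological and the folk hierarchy. A minimal sketch, with invented concept names standing in for actual GermaNet synsets:

```python
# A toy illustration of cross-classification: hyponymy is a directed acyclic
# graph, so one synset may sit under both the biological and the folk
# hierarchy (the cat/whale example from the paper). Concept names are
# stand-ins, not actual GermaNet synsets.

HYPERNYMS = {
    "cat":        ["vertebrate", "pet"],   # biological and folk hierarchy
    "whale":      ["vertebrate"],          # biological hierarchy only
    "vertebrate": ["animal"],
    "pet":        ["animal"],
    "animal":     [],
}

def is_a(concept, candidate):
    """True if candidate is reachable from concept via hypernym links."""
    stack, seen = [concept], set()
    while stack:
        node = stack.pop()
        if node == candidate:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(HYPERNYMS.get(node, []))
    return False

assert is_a("cat", "pet") and is_a("cat", "vertebrate")
assert is_a("whale", "vertebrate") and not is_a("whale", "pet")
```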
cogniac high precision coreference with limited knowledge and linguistic resources this paper presents a high precision pronoun resolution system that is capable of greater than 90 precision with 60 and better recall for some pronouns it is suggested that the system is resolving a subset of anaphors that do not require general world knowledge or sophisticated linguistic processing for successful resolution the system does this by being very sensitive to ambiguity and only resolving pronouns when very high confidence rules have been satisfied the system is capable of noticing ambiguity because it requires that there be a unique antecedent within a salience ranking and the salience rankings are not total orders ie two or more antecedents can be equally salient given the nature of the systems rules it is very likely that they are largely domain independent and that they reflect processing strategies used by humans for general language comprehension the system has been evaluated in two distinct experiments which support the overall validity of the approach pronoun resolution is one of the classic computational linguistics problemsit is also widely considered to be inherently an ai complete taskmeaning that resolution of pronouns requires full world knowledge and inferencecogniac is a pronoun resolution engine designed around the assumption that there is a subclass of anaphora that does not require general purpose reasoningthe kinds of information cogniac does require includes sentence detection partofspeech tagging simple noun phrase recognition basic semantic category information like gender number and in one configuration partial parse treeswhat distinguishes cogniac from algorithms that use similar sorts of information is that it will not resolve a pronoun in circumstances of ambiguitycrucially ambiguity is a function of how much knowledge an understander hassince cogniac does not have as rich a representation of world knowledge as humans it finds much more ambiguity in texts than humans do2 a path to high precision pronominal resolution avoid guesswork in ambiguous contexts it is probably safe to say that few referring pronouns are conveyed without the speakerwriter having an antecedent in mindambiguity occurs when the perceiver cannot recover from the context what conveyer has in mindi have found myself uttering pronouns which the hearer has no chance of recovering the antecedent to because they are not attending to the same part of the external environment quothe sure looks familiarquot or in text i am so focused on the context of what i am writing that use a pronoun to refer to a highly salient concept for me but the antecedent may completely evade a reader without my familiarity with the topicof course it is possible to explicitly leave the reader hanging as in quotearl and dave were working together when suddenly he fell into the threshing machinequot humans unlike most coreference algorithms notice such cases of ambiguity and can then ask for clarification or at least grumble about how we cannot climb into the writers head to figure out what they meantbut in that grumble we have articulated the essence of the problemwe do not have sufficient knowledge to satisfy ourselves that an antecedent has been foundpronoun resolution systems have extremely limited knowledge sources they cannot access a fraction of human common sense knowledgeto appreciate this consider the following text with grammatical tags replacing words with pronouns and names left in place the city council verbgroup the 
women np cc they vb nn mariana vbd pp sarah to vb herself pp dt md nn without lexical knowledge a human attempting to resolve the pronouns is in much the knowledge impoverished position of the typical coreference algorithmit is no surprise that texts with so little information provided in them tend to be more ambiguous than the texts in fleshed out formthe conclusion to draw from this example is that the limiting factor in cogniac is knowledge sources not an artificial restriction on domains or kinds of coreferencethis point will be resumed in the discussion section when what the consequences of fuller knowledge sources would be on cogniacfor noun phrase anaphora gathering semantically possible antecedents amounts to running all the noun phrases in a text through various databases for number and gender and perhaps then a classifier that determines whether a noun phrase is a company person or placethis set of candidate antecedents rarely has more than 5 members when some reasonable locality constraints are adhered to and this set almost always contains the actual antecedentthe remainder of the coreference resolution process amounts to picking the right entity from this setfor the kinds of data considered here there is a rarely a need for general world knowledge in assembling the initial set of possible antecedents for pronounsthis does not address the issue of inferred antecedents event reference discourse deixis and many other sorts of referring phenomenon which clearly require the use of world knowledge but are beyond the scope of this workas it happens recognizing the possible antecedents of these pronouns is within the capabilities of current knowledge sourcesbetter knowledge sources could be used to reduce the space of possible antecedentsfor example the well known winograd 19721 alternation the city council refused to give the women a permit because they fearedadvocated violencethere are two semantically possible antecedents to they the city council and the womenthe problem is picking the correct onedependent on verb choice they strongly prefers one antecedent to the othercapturing this generalization requires a sophisticated theory of verb meaning as relates to pronoun resolutionspeaking anecdotally these kinds of resolutions happen quite often in textcogniac recognizes knowledge intensive coreference and does not attempt to resolve such instancesi the named entity task at muc6 used a similar classification task and the best system performance was 96 precision97 recallfortunately not all instances of pronominal anaphora require world knowledge for successful resolutionin lieu of full world knowledge cogniac uses regularities of english usage in an attempt to mimic strategies used by humans when resolving pronounsfor example the syntax of a sentence highly constrains a reflexive pronoun antecedentalso if there is just one possible antecedent in entire the prior discourse then that entity is nearly always the correct antecedentcogniac consists of a set of such observations implemented in perlcogniac has been used with a range of linguistic resources ranging from scenarios where almost no linguistic processing of the text is done at all to partial parse trees being providedat the very least there must be sufficient linguistic resources to recognize pronouns in the text and the space of candidate antecedents must be identifiedfor the first experiment the text has been part of speech tagged and basal noun phrases have been identified with 11 as shown below marianannp motionedvbd forin 
sarahnnp toto seatnb herselfprp onin adt twoseaternn loungenn in addition finite clauses were identified and various regular expressions are used to identify subjects objects and what verbs take as arguments for the purposes of coreference restrictionswith this level of linguistic annotation nearly all the parts of cogniac can be used to resolve pronounsthe core rules of cogniac are given below with their performance on training data provided in addition examples where the rules successfully apply have been provided for most of the rules with the relevant anaphors and antecedents in boldfacethe term possible antecedents refers to the set of entities from the discourse that are compatible with an anaphor gender number and coreference restrictions mariana motioned for sarah to seat herself on a twoseater lounge sentence then pick i as the antecedent 114 correct and 2 incorrectrupert murdock news corp confirmed his interest in buying back the ailing new york postbut analysts said that if he winds up bidding for the paper possessive pronoun and there is a single exact string match i of the possessive in the prior sentence then pick i as the antecedent 4 correct and 1 incorrectafter he was dry joe carefully laid out the damp towel in front of his lockertravis went over to his locker took out a towel and started to dry off5 unique current sentence if there is a single possible antecedent in the readin portion of the current sentence then pick i as the antecedent 21 correct and 1 incorrectlike a large bear he sat motionlessly in the lounge in one of the faded armchairs watching constantinafter a week constantin tired of reading the old novels in the bottom shelf of the bookcasesomewhere among the gray well thumbed pages he had hoped to find a message from one of his predecessorsif the subject of the prior sentence contains a single possible antecedent i and the anaphor is the subject of the current sentence then pick i as the antecedent 11 correct and 0 incorrectbesides if he provoked malek uncertainties were introduced of which there were already far too manyhe noticed the supervisor enter the lounge the method of resolving pronouns within cogniac works as follows pronouns are resolved lefttoright in the textfor each pronoun the rules are applied in the presented orderfor a given rule if an antecedent is found then the appropriate annotations are made to the text and no more rules are tried for that pronoun otherwise the next rule is triedif no rules resolve the pronoun then it is left unresolvedthese rules are individually are high precision rules and collectively they add up to reasonable recallthe precision is 97 and the recall is 60 for 198 pronouns of training datathe naive algorithm hobbs 1976 works by specifying a total order on noun phrases in the prior discourse and comparing each noun phrase against the selectional restrictions of the anaphor and taking the antecedent to be the first one to satisfy themthe specification of the ordering constitutes a traversal order of the syntax tree of the anaphors clause and from there to embedding clauses and prior clausesthe winograd sentences with either verb would yield the following ordering of possible antecedents the city council the women the algorithm would resolve they to the city councilthis is incorrect on one choice of verb but the algorithm does not integrate the verb information into the salience rankingin comparison none of the six rules of cogniac would resolve the pronounrules have been tried that resolved a subject pronoun of a 
nested clause with the subject of the dominating clause but no configuration has been found that yielded sufficient precision2consequently they is not resolvedthe naive algorithm has some interesting propertiesfirst it models relative salience as relative depth in a search spacefor two candidate antecedents a and b if a is encountered before b in the search space then a is more salient than bsecond the relative saliency of all candidate antecedents is totally ordered that is for any two candidate antecedents a and b a is more salient than b xor b is more salient than a2 in experiment 2 discussed below the rule ubject same clause would resolve they to the city council but it was added to the muc6 system without testing and has shown itself to not be a high precision rulecogniac shares several features of the naive algorithm circumstances of many possible antecedents and will not resolve pronouns in such casesthe naive algorithm has no means of noting ambiguity and will resolve a pronoun as long as there is at least one possible antecedentperhaps the most convincing reason to endorse partially ordered salience rankings is that salience distinctions fade as the discourse moves onearl was working with ted the other dayhe fell into the threshing machineearl was working with ted the other dayall of the sudden the cows started making a ruckusthe noise was unbelievablehe fell into the threshing machinein the first example he takes earl as antecedent which is what rule 6 unique subjectsubject pronoun would resolve the pronoun tohowever in the second example the use of he is ambiguousa distinction that existed before is now gonethe naive algorithm would still maintain a salience distinction between earl and ted where cogniac has no rule that makes a salience distinction between subject and object of a sentence which has two intervening sentencesthe closest rule would be unique in discourse rule 1 which does not yield a unique antecedentcogniac has been evaluated in two different contextsthe goal of the first experiment was to establish relative performance of cogniac to hobbs naive algorithma convenient benchmark that allows indirect comparison to other algorithmsthe second experiment reports results on wall street journal datathe chosen domain for comparison with hobbs naive algorithm was narrative texts about two persons of the same gender told from a third person perspectivethe motivation for this data was that we wanted to maximize the ambiguity of resolving pronounsonly singular third person pronouns were consideredthe text was preprocessed with a partofspeech tagger over which basal noun phrases were delimited and finite clauses and their relative nesting were identified by machinethis preprocessing was subjected to hand correction in order to make comparison with hobbs as fair as possible since that was an entirely hand executed algorithm but cogniac was otherwise machine run and scorederrors were not chained ie in lefttoright processing of the text earlier mistakes were corrected before processing the next noun phrasesince the naive algorithm resolves all pronouns two lower precision rules were added to rules 16 for comparisons sakethe rules are the last two rules are lower precision than the first six but perform well enough to merit their inclusion in a resolve all pronouns configurationrule 7 performed reasonably well with 77 precision in training and rule 8 performed with 65 precision in training the first six rules each had a precision of greater than 90 for the training data with the 
exception of rule 4 which had a precision of 80 for 5 resolutionsthe summary performance of the naive algorithm and cogniac for the first 100 or so pronouns in three narrative texts are results for 298 third person pronouns in text about two same gender peoplesince both the naive algorithm and the resolve all pronouns configuration of cogniac are required to resolve all pronouns precision and recall figures are not appropriateinstead correct figures are giventhe high precision version of cogniac is reported with recall and precision measuresthe conclusion to draw from these results is if forced to commit to all anaphors cogniac performs comparably to the naive algorithmlappin and leass 3 rule 7 is based on the primitives of centering theory the cb of an utterance is the highest ranked np from the prior finite clause realized anaphorically in the current finite clauseplease see baldwin 95 for a full discussion of the details of the rule1994 correctly resolved 86 of 360 pronouns in computer manualslapin and leass run hobbs algorithm on the their data and the naive algorithm is correct 82 of the time4 worsethis allows indirect comparison with cogniac with the suggestive conclusion that the resolve all pronouns configuration of cogniac like the naive algorithm is at least in the ballpark of more modern approaches the breakdown of the individual rules is as follows performance of individual rules in experiment 1note the high precision of rules 1 6recall correctactual precision correctguessed far more interesting to consider is the performance of the high precision rules 1 through 6the first four rules perform quite well at 96 precision and 50 recall adding in rules 5 and 6 resolves a total of 190 pronouns correctly with only 16 mistakes a precision of 92 and recall of 64this contrasts strongly with the resolveallpronouns results of 78the last two rules 7 and 8 performed quite badly on the test datadespite their poor performance cogniac still remained comparable to the naive algorithm322 experiment 2 all pronouns in muc6 evaluation cogniac was used as the pronoun component in the university pennsylvania coreference entry5 in the muc6 evaluationpronominal anaphora constitutes 17 of coreference annotations in the evaluation data usedthe remaining instances of anaphora included common noun anaphora and coreferent instances of proper nounsas a result being part of a larger system changes were made to cogniac to make it fit in better with the other components of the overall system in addition to adding rules that were specialized for the new kinds of pronominal anaphorathese changes include 4 this is not to say that rap was not an advancement of the state of the arta significant aspect of that research is that both rap and the naive algorithm were machine executedthe naive algorithm was not machine executed in either the hobbs 76 paper or in the evaluation in this worka total of thirty articles were used in the formal evaluation of which i chose the first fifteen for closer analysisthe remaining fifteen were retained for future evaluationsthe performance of cogniac was as follows the precision is quite a bit worse than that encountered in the narrativethe performance of the individual rules was quite different from the narrative texts as shown in the table below the results for cogniac for all pronouns in the first 15 articles of the muc6 evaluationupon closer examination approximately 75 of the errors were due to factors outside the scope of the cogniac pronominal resolution componentsoftware 
problems accounted for 20 of the incorrect cases another 30 were due to semantic errors like misclassification of a noun phrase into person or company singularplural etcthe remaining errors were due to incorrect noun phrase identification failure to recognize pleonasticit or other cases where there is no instance of an antecedenthowever 25 of the errors were due directly to the rules of cogniac being plain wrongcogniac is both an engineering effort and a different approach to information processing in variable knowledge contextseach point is addressed in turna question raised by a reviewer asked whether there was any use for high precision coreference given that it is not resolving as much coreference as other methodsin the first experiment the high precision version of cogniac correctly resolved 62 of the pronouns as compared to the resolve all pronouns version which resolved 79 of thema 27 loss of overall recallthe answer to this question quite naturally depends on the application coreference is being used insome examples followinformation retrieval is characterized as a process by which a query is used to retrieve relevant documents from a text databasequeries are typically natural language based or boolean expressionsdocuments are retrieved and ranked for relevance using various string matching techniques with query terms in a document and the highest scoring documents are presented to the user firstthe role that coreference resolution might play in information retrieval is that retrieval algorithms that a count the number of matches to a query term in a document or b count the proximity of matches to query terms would benefit by noticing alternative realizations of the terms like he in place george bushin such an application high precision coreference would be more useful than high recall coreference if the information retrieval engine was returning too many irrelevant documents but getting a reasonable number of relevant documentsthe coreference would only help the scores of presumably relevant documents but at the expense of missing some relevant documentsa higher recall lower precision algorithm would potentially add more irrelevant documentsa direct application of the quotambiguity noticingquot ability of cogniac is in checking the coherence of pronoun use in text for children and english as a second language learnersambiguous pronoun use is a substantial problem for beginning writers and language learnerscogniac could scan texts as they are being written and evaluate whether there was sufficient syntactic support from the context to resolve the pronounif not then the user could be notified of a potentially ambiguous useit is not clear that cogniac current levels of performance could support such an application but it is a promising applicationinformation extraction amounts to filling in template like data structures from free texttypically the patterns which are used to fill the templates are hand builtthe latest muc6 evaluation involved management changes at companiesa major problem in information extraction is the fact that the desired information can be spread over many sentences in the text and coreference resolution is essential to relate relevant sentences to the correct individuals companies etcthe muc6 coreference task was developed with the idea that it would aid information extraction technologiesthe consequences for an incorrectly resolved pronoun can be devastating to the final template filling taskone runs the risk of conflating information about one individual with 
anotherhigh precision coreference appears to be a natural candidate for such applicationscogniac effectively circumscribes those cases where coreference can be done with high confidence and those cases that require greater world knowledge but how might cogniac be a part of a more knowledge rich coreference applicationcogniac as a set of seven or so high precision rules would act as an effective filter on what a more knowledge rich application would have to resolvebut the essential component behind cogniac is not the rules themselves but the control structure of behind its coreference resolution algorithmthis control structure could control general inference techniques as wellan interesting way to look at cogniac is as a search procedurethe naive algorithm can be over simplified as depth first search over parse treesdepth first search is also a perfectly reasonable control structure for an inference engine as it is with prologthe search structure of cogniac could be characterized as parallel iterative deepening with solutions being accepted only if a unique solution is found to the depth of the parallel searchbut there is not enough room in this paper to explore the general properties of cogniac search and evaluation strategyanother angle on cogniac role with more robust knowledge sources is to note that the recall limitations of cogniac for the class of pronounsdata considered are due to insufficient filtering mechanisms on candidate antecedentsthere is not a need to expand the space of candidate antecedents with additional knowledge but rather eliminate semantically plausible antecedents with constraints from verb knowledge and other sources of constraints currently not available to the systemhowever there are classes of coreference that require strong knowledge representation to assemble the initial set of candidate antecedentsthis includes the realm of inferred definites quoti went to the house and opened the doorquot and synonymy between definite common nouns as in quotthe tax and the levyhobbs 1976 ultimately rejects the naive algorithm as a standalone solution to the pronoun resolution problemin that rejection he states the naive algorithm does not workanyone can think of examples where it failsin these cases it not only fails it gives no indication that it has failed and offers no help in finding the real antecedenthobbs then articulates a vision of what the appropriate technology is which entails inference over an encoding of world knowledgebut is world knowledge inherent in resolving all pronouns as hobbs skepticism seems to conveyit has not been clear up to this point whether any anaphora can be resolved with high confidence given that there are clear examples which can only be resolved with sophisticated world knowledge eg the winograd city council sentencesbut the results from the first and second experiments demonstrate that it is possible to have respectable recall with very high precision for some kinds of pronominal resolutionhowever good performance does not necessarily falsify hobbs skepticismthe high precision component of cogniac still makes mistakes 89 error for the first experimentit is harder to evaluate the second experimentif it were the case that integration of world knowledge would have prevented those errors then hobbs skepticism still holds since cogniac has only minimized the role of world knowledge not eliminated itin looking at the mistakes made in the second experiment there were no examples that appeared to be beyond the scope of further improving the 
syntactic rules or expanding the basic categorization of noun phrases into person company or placefor the data considered so far there does appear to be a class of anaphors that can be reliably recognized and resolved with nonknowledge intensive techniqueswhether this holds in general remains an open question but it is a central design assumption behind the systema more satisfying answer to hobbs skepticism is contained in the earlier suggestive conjecture that world knowledge facilitates anaphora by eliminating ambiguitythis claim can be advanced to say that world knowledge comes into play in those cases of anaphora that do not fall under the purview of rules 1 through 7 and their refinementsif this is correct then the introduction of better world knowledge sources will help in the recall of the system rather than the precisionultimately the utility of cogniac is a function of how it performsthe high precision rules of cogniac performed very well greater than 90 precision with good recall for the first experimentin the second experiment components other than the rules of cogniac began to degrade the performance of the system undulybut there is promise in the high precision core of cogniac across varied domainscogniac is currently the common noun and pronoun resolution component of the university of pennsylvania coreference resolution software and general nlp software this paper does not address the common noun coreference aspects of the system but there are some interesting parallels with pronominal coreferencesome changes planned include the following sorts of coreference the processing of split antecedents john called marythey went to a moviethis class of coreference is quite challenging because the plural anaphor they must be able to collect a set of antecedents from the prior discoursebut how far should it look back and once it has found two antecedents should it continue to look for moreevent reference is a class of coreference that will also prove to be quite challengingfor example the computer won the matchit was a great triumphthe antecedent to it could be any of the computer the match or the event of winningthe space of ambiguity will certainly grow substantially when events are considered as candidate antecedentscurrently the system uses no verb semantics to try and constrain possible coreferencewhile the winograd sentences are too difficult for current robust lexical semantic systems simpler generalizations about what can fill an argument are possible consider the price of aluminum rose today due to large purchases by alcoa incit claimed that it was not trying to corner the marketsince it is an argument to claimed a verb that requires that its subject be animate we can eliminate the price of aluminum and today from consideration leaving alcoa inc as the sole singular antecedent from the prior sentencework has been done along these lines by dagan 90i would like to thank my advisors ellen prince and aravind joshi for their supportalso the comments of two anonymous reviewers proved quite helpful
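CogNIAC's control structure, as described earlier (pronouns resolved left to right, rules tried in a fixed order, a resolution accepted only when a rule yields exactly one compatible antecedent, otherwise the pronoun is left unresolved), is compact enough to sketch. The rule bodies below are simplified stand-ins for two of the published rules ("unique in discourse" and "unique current sentence"); the data structures and the compatibility test are assumptions made for the example, and the original system was implemented in Perl.

```python
# A minimal sketch of CogNIAC's control structure: pronouns are processed left
# to right, the rules are tried in a fixed order, and a pronoun is resolved
# only if some rule yields exactly one compatible antecedent; otherwise it is
# left unresolved. The rule bodies, dictionaries and compatibility test are
# simplified assumptions for illustration.

def compatible(np, pronoun):
    # gender / number / coreference restrictions would be checked here
    return np["gender"] == pronoun["gender"] and np["number"] == pronoun["number"]

def unique_in_discourse(pronoun, discourse):
    """Simplified rule 1: a single possible antecedent in the prior discourse."""
    cands = [np for np in discourse["prior_nps"] if compatible(np, pronoun)]
    return cands[0] if len(cands) == 1 else None

def unique_current_sentence(pronoun, discourse):
    """Simplified 'unique current sentence' rule: a single possible antecedent
    in the read-in portion of the current sentence."""
    cands = [np for np in discourse["current_read_in_nps"] if compatible(np, pronoun)]
    return cands[0] if len(cands) == 1 else None

RULES = [unique_in_discourse, unique_current_sentence]   # ordered, high precision first

def resolve(pronouns, discourse):
    links = {}
    for pronoun in pronouns:                 # left to right through the text
        for rule in RULES:                   # rules tried in the presented order
            antecedent = rule(pronoun, discourse)
            if antecedent is not None:
                links[pronoun["id"]] = antecedent["id"]
                break                        # no further rules for this pronoun
        # if no rule fired, the pronoun stays unresolved: ambiguity is noticed
    return links

# two equally compatible candidates, so no rule fires and no guess is made
discourse = {"prior_nps": [{"id": "earl", "gender": "masc", "number": "sg"},
                           {"id": "ted",  "gender": "masc", "number": "sg"}],
             "current_read_in_nps": []}
he = {"id": "he-1", "gender": "masc", "number": "sg"}
print(resolve([he], discourse))   # {} -> left unresolved
```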
W97-1306
cogniac high precision coreference with limited knowledge and linguistic resources this paper presents a high precision pronoun resolution system that is capable of greater than 90 precision with 60 and better recall for some pronouns it is suggested that the system is resolving a subset of anaphors that do not require general world knowledge or sophisticated linguistic processing for successful resolution the system does this by being very sensitive to ambiguity and only resolving pronouns when very high confidence rules have been satisfied the system is capable of noticing ambiguity because it requires that there be a unique antecedent within a salience ranking and the salience rankings are not total orders ie two or more antecedents can be equally salient given the nature of the systems rules it is very likely that they are largely domain independent and that they reflect processing strategies used by humans for general language comprehension the system has been evaluated in two distinct experiments which support the overall validity of the approach our method cogniac is a knowledge poor approach to anaphora resolution based on a set of high confidence rules which are successively applied over the pronoun under consideration
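One way to make the point that salience rankings are not total orders concrete is to group candidate antecedents into salience tiers: candidates in the same tier are equally salient, and a pronoun is resolved only when the most salient non-empty tier holds exactly one compatible candidate. The tier formulation and the numbers below are our own illustration of the idea, not the paper's notation.

```python
# A toy rendering of the partial-order point: candidates are grouped into
# salience tiers, candidates in the same tier are equally salient, and a
# pronoun is resolved only when the most salient non-empty tier holds exactly
# one compatible candidate. The tier numbers are invented for illustration.

def resolve_by_tiers(candidates):
    """candidates: list of (np, tier) pairs, lower tier = more salient.
    Returns the unique most salient candidate, or None when the top tier is
    ambiguous -- the case a total order cannot notice."""
    if not candidates:
        return None
    best = min(tier for _, tier in candidates)
    top = [np for np, tier in candidates if tier == best]
    return top[0] if len(top) == 1 else None

# after two intervening sentences earl and ted are equally salient, so the
# pronoun is left unresolved instead of guessed
print(resolve_by_tiers([("earl", 2), ("ted", 2)]))   # None
print(resolve_by_tiers([("earl", 1), ("ted", 2)]))   # 'earl'
```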
indexing with wordnet synsets can improve text retrieval tem experiments in automatic document pro m sanderson 1994 word sense disambiguation information retrieval in of 17th international conference on research and development in information retrieval af smeaton and a quigley 1996 experiments on using semantic distances between words in imcaption retrieval proceedings of the international conference on research and development in ir a smeaton f kelledy and r odonnell 1995 trec4 experiments at dublin city university thresolding posting lists query expansion with and pos tagging of spanish in proceedings of trec4 m voorhees 1994 query relations in of the 17th annual international acmsigir conference on research and development in information retrieval text retrieval deals with the problem of finding all the relevant documents in a text collection for a given user querya largescale semantic database such as wordnet seems to have a great potential for this taskthere are at least two obvious reasons however the general feeling within the information retrieval community is that dealing explicitly with semantic information does not improve significantly the performance of text retrieval systemsthis impression is founded on the results of some experiments measuring the role of word sense disambiguation for text retrieval on one hand and some attempts to exploit the features of wordnet and other lexical databases on the other handin word sense ambiguity is shown to produce only minor effects on retrieval accuracy apparently confirming that querydocument matching strategies already perform an implicit disambiguationsanderson also estimates that if explicit wsd is performed with less than 90 accuracy the results are worse than non disambiguating at allin his experimental setup ambiguity is introduced artificially in the documents substituting randomly chosen pairs of words with artificially ambiguous terms while his results are very interesting it remains unclear in our opinion whether they would be corroborated with real occurrences of ambiguous wordsthere is also other minor weakness in sanderson experimentswhen he quotdisambiguatesquot a term such as springbank to get for instance bank he has done only a partial disambiguation as bank can be used in more than one sense in the text collectionbesides disambiguation many attempts have been done to exploit wordnet for text retrieval purposesmainly two aspects have been addressed the enrichment of queries with semanticallyrelated terms on one hand and the comparison of queries and documents via conceptual distance measures on the otherquery expansion with wordnet has shown to be potentially relevant to enhance recall as it permits matching relevant documents that could not contain any of the query terms however it has produced few successful experimentsfor instance manually expanded 50 queries over a trec1 collection using synonymy and other semantic relations from wordnet 13voorhees found that the expansion was useful with short incomplete queries and rather useless for complete topic statements where other expansion techniques worked betterfor short queries it remained the problem of selecting the expansions automatically doing it badly could degrade retrieval performance rather than enhancing itin a combination of rather sophisticated techniques based on wordnet including automatic disambiguation and measures of semantic relatedness between querydocument concepts resulted in a drop of effectivenessunfortunately the effects of wsd errors could not 
be discerned from the accuracy of the retrieval strategyhowever in retrieval on a small collection of image captions that is on very short documents is reasonably improved using measures of conceptual distance between words based on wordnet 14previously captions and queries had been manually disambiguated against wordnetthe reason for such success is that with very short documents the chance of finding the original terms of the query are much lower than for averagesize documents these results are in agreement with but it remains the question of whether the conceptual distance matching would scale up to longer documents and queriesin addition the experiments in _ only consider nouns while wordnet offers the chance to use all openclass words our essential retrieval strategy in the experiments reported here is to adapt a classical vector model based system using wordnet synsets as indexing space instead of word formsthis approach combines two benefits for retrieval one that terms are fully disambiguated and two that equivalent terms can be identified note that query expansion does not satisfy the first condition as the terms used to expand are words and therefore are in turn ambiguouson the other hand plain word sense disambiguation does not satisfy the second condition as equivalent senses of two different words are not matchedthus indexing by synsets gets maximum matching and minimum spurious matching seeming a good starting point to study text retrieval with wordnetgiven this approach our goal is to test two main issues which are not clearly answered to our knowledge by the experiments mentioned above wsdthis paper reports on our first results answering these questionsthe next section describes the test collection that we have producedthe experiments are described in section 3 and the last section discusses the results obtainedthe bestknown publicly available corpus handtagged with wordnet senses is semcor a subset of the brown corpus of about 100 documents that occupies about 11 mb the collection is rather heterogeneous covering politics sports music cinema philosophy excerpts from fiction novels scientific texts a new bigger version has been made available recently but we have not still adapted it for our collectionwe have adapted semcor in order to build a test collection that we call irsemcor in four manual steps ments with lengths varying between 4 and 50 words and an average of 22 words per summaryeach summary is a human explanation of the text contents not a mere bag of related keywordsthese summaries serve as queries on the text collection and then there is exactly one relevant document per querywe also generated a list of quotstopsensesquot and a list of quotstopsynsetsquot automatically translating a standard list of stop words for englishsuch a test collection offers the chance to measure the adequacy of wordnetbased approaches to ir independently from the disambiguator being used but also offers the chance to measure the role of automatic disambiguation by introducing different rates of quotdisambiguation errorsquot in the collectionthe only disadvantage is the small size of the collection which does not allow finegrained distinctions in the resultshowever it has proved large enough to give meaningful statistics for the experiments reported herealthough designed for our concrete text retrieval testing purposes the resulting database could also be useful for many other tasksfor instance it could be used to evaluate automatic summarization systems and other related taskswe have 
performed a number of experiments using a standard vectormodel based text retrieval system smafrr and three different indexing spaces the original terms in the documents the wordsenses corresponding to the document terms and the wordnet synsets corresponding to the document terms these are all the experiments considered here the filein this case it is a noun belonging to the nouncommunication filewith this collection we can see if plain disambiguation is helpful for retrieval because word senses are distinguished but synonymous word senses are not identifiedquotargument debatelquot this collection represents conceptual indexing as equivalent word senses are represented with a unique identifier4we produced different versions of the synset indexed collection introducing fixed percentages of erroneous synsetsthus we simulated a wordsense disambiguation process with 5 10 20 30 and 60 error ratesthe errors were introduced randomly in the ambiguous words of each documentwith this set of experiments we can measure the sensitivity of the retrieval process to disambiguation errorsin all cases we compared at c and nnn standard weighting schemes and they produced very similar resultsthus we only report here on the results for nnn weighting schemein figure 1 we compare different indexing approaches indexing by synsets indexing by words and indexing by word senses the leftmost point in each curve represents the percentage of documents that were successfully ranked as the most relevant for its summaryquerythe next point represents the documents retrieved as the first or the second most relevant to its summaryquery and so onnote that as there is only one relevant document per query the leftmost point is the most representative of each curvetherefore we have included this results separately in table 1the results are encouraging documents a 29 improvement with respect to smartthis is an excellent result although we should keep in mind that is obtained with manually disambiguated queries and documentsnevertheless it shows that wordnet can greatly enhance text retrieval the problem resides in achieving accurate automatic word sense disambiguation indexing by word senses improves performance when considering up to four documents retrieved for each querysummary although it is worse than indexing by synsetsthis confirms our intuition that synset indexing has advantages over plain word sense disambiguation because it permits matching semantically similar termstaking only the first document retrieved for each summary the disambiguated collection gives a 532 success against a 48 of the plain sn1art query which represents a 11 improvementfor recall levels higher than 085 however the disambiguated collection performs slightly worsethis may seem surprising as word sense disambiguation should only increase our knowledge about queries and documentsbut we should bear in mind that wordnet 15 is not the perfect database for text retrieval and indexing by word senses prevents some matchings that can be useful for retrievalfor instance design is used as a noun repeatedly in one of the documents while its summary uses design as a verbwordnet 15 does not include crosspartofspeech semantic relations so this relation cannot be used with word senses while term indexing simply does not distinguish themother problems of wordnet for text retrieval include too much finegrained sensedistinctions and lack of domain information see for a more detailed discussion on the adequacy of wordnet structure for text retrievalfigure 2 shows the 
sensitivity of the synset indexing system to degradation of disambiguation accuracy from the plot it can be seen that differs from result the main difference is that we are using concepts rather than word sensesbut in addition it must be noted that sanderson setup used artificially created ambiguous pseudo words which are not guaranteed to behave as real ambiguous wordsmoreover what he understands as disambiguating is selecting in the example bank or spring which remain to be ambiguous words themselvesit is too soon to say if stateoftheart wsd techniques can perform with less than 30 errors because each technique is evaluated in fairly different settingssome of the best results on a comparable setting are reported reported in they reach a 587 accuracy on a brown corpus subset and a 752 on a subset of the wall street journal corpusa more careful evaluation of the role of wsd is needed to know if this is good enough for our purposesanyway we have only emulated a wsd algorithm that just picks up one sense and discards the resta more reasonable approach here could be giving different probabilities for each sense of a word and use them to weight synsets in the vectorial representation of documents and queriesin figure 3 we have plot the results of runs with a nondisambiguated version of the queries both for word sense indexing and synset indexing against the manually disambiguated collection the synset run performs approximately as the basic smart runit seems therefore useless to apply conceptual indexing if no disambiguation of the query is feasiblethis is not a major problem in an interactive system that may help the user to disambiguate his query but it must be taken into account if the process is not interactive and the query is too short to do reliable disambiguationwe have experimented with a retrieval approach based on indexing in terms of wordnet synsets instead of word forms trying to address two questions 1 what potential does wordnet offer for text retrieval abstracting from the problem of sense disambiguation and 2 what is the sensitivity of retrieval performance to disambiguation errorsthe answer to the first question is that indexing by synsets can be very helpful for text retrieval our experiments give up to a 29 improvement over a standard smart run indexing with wordswe believe that these results have to be further contrasted but they strongly suggest that wordnet can be more useful to text retrieval than it was previously thoughtthe second question needs further more finegrained experiences to be clearly answeredhowever for our test collection we find that error rates below 30 still produce better results than standard word indexing and that from 30 to 60 error rates it does not behave worse than the standard smart runwe also find that the queries have to be disambiguated to take advantage of the approach otherwise the best possible results with synset indexing does not improve the performance of standard word indexingour first goal now is to improve our retrieval system in many ways studying how to enrich the query with semantically related synsets how to cornpare documents and queries using semantic information beyond the cosine measure and how to obtain weights for synsets according to their position in the wordnet hierarchy among other issuesa second goal is to apply synset indexing in a crosslanguage environment using the euro wordnet multilingual database indexing by synsets offers a neat way of performing languageindependent retrieval by mapping synsets into the 
eurowordnet interlingual index that links monolingual wordnets for all the languages covered by eurowordnetthis research is being supported by the european community project le 4003 and also partially by the spanish government project tic9612430o301we are indebted to renee pohlmann for giving us good pointers at an early stage of this work and to anseimo peilas and david fernandez for their help finishing up the test collection
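The three indexing spaces compared in these experiments (word forms, word senses, synsets) can be contrasted on a toy example in a plain vector-space setting with cosine matching. The sense and synset identifiers below are invented; in the actual experiments they came from the manually disambiguated SemCor annotation, and retrieval was run with SMART rather than this sketch.

```python
# A toy contrast of the three indexing spaces (word forms, word senses,
# synsets) in a plain vector-space setting with cosine matching. The sense and
# synset identifiers are invented stand-ins for the SemCor/WordNet annotation.

import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# each token: (word form, word-sense id, synset id); None = closed-class word
doc   = [("argument", "argument-2", "syn-07183000"),
         ("about",    None,         None),
         ("taxes",    "tax-1",      "syn-13308999")]
query = [("debate",   "debate-1",   "syn-07183000"),   # synonym of argument-2
         ("on",       None,         None),
         ("taxes",    "tax-1",      "syn-13308999")]

def index(tokens, level):
    """Build a term-frequency vector; level 0 = words, 1 = senses, 2 = synsets."""
    return Counter(t[level] for t in tokens if t[level] is not None)

for level, name in [(0, "word forms"), (1, "word senses"), (2, "synsets")]:
    print(name, round(cosine(index(doc, level), index(query, level)), 2))
# only the synset index matches 'argument' against its synonym 'debate'
```

On this toy pair only the synset representation matches argument against its synonym debate, which is exactly the extra matching that conceptual indexing is meant to provide over plain word or word-sense indexing.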
W98-0705
indexing with wordnet synsets can improve text retrieval the classical vector space model for text retrieval is shown to give better results if wordnet synsets are chosen as the indexing space instead of word forms this result is obtained for a manually disambiguated test collection derived from the semcor semantic concordance the sensitivity of retrieval performance to disambiguation errors when indexing documents is also measured finally it is observed that if queries are not disambiguated indexing by synsets performs only as good as standard word indexing we point out some more weaknesses of wordnet for information retrieval purposes in particular the lack of domain information and the fact that sense distinctions are excessively finegrained for the task
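The sensitivity experiment summarized above relied on simulated disambiguation errors: a fixed percentage of the ambiguous words in each document receives a randomly chosen wrong synset. The sketch below approximates that by corrupting each ambiguous token independently with probability error_rate (the original setup fixed exact percentages of 5 to 60 percent); the sense inventory is a made-up stand-in for WordNet.

```python
# A sketch of the simulated word-sense disambiguation used for the robustness
# experiment: ambiguous tokens receive a randomly chosen wrong synset. Each
# ambiguous token is corrupted independently with probability error_rate,
# which approximates the fixed error percentages of the original setup; the
# sense inventory below is a made-up stand-in for WordNet.

import random

SENSE_INVENTORY = {                     # word -> candidate synset ids (toy data)
    "bank":   ["bank-institution", "bank-riverside"],
    "spring": ["spring-season", "spring-coil", "spring-water"],
    "taxes":  ["tax-levy"],             # unambiguous words are never corrupted
}

def corrupt(tagged_tokens, error_rate, rng=random):
    """tagged_tokens: list of (word, gold synset) pairs from the hand-tagged
    collection; returns a copy in which some ambiguous tokens are mis-tagged."""
    out = []
    for word, gold in tagged_tokens:
        wrong = [s for s in SENSE_INVENTORY.get(word, []) if s != gold]
        if wrong and rng.random() < error_rate:
            out.append((word, rng.choice(wrong)))    # simulated WSD error
        else:
            out.append((word, gold))
    return out

doc = [("bank", "bank-institution"), ("spring", "spring-water"), ("taxes", "tax-levy")]
print(corrupt(doc, error_rate=0.30, rng=random.Random(0)))
```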
an empirical approach to conceptual case frame acquisition
figure 5 case frame for active forms of quotambushedquot
figure 6 shows a case frame triggered by active forms of quotblew_upquot this case frame extracts information from an entire sentence into a single structure the subject and the direct object can be extracted together
figure 6 case frame for active forms of quotblew_upquot
figure 7 case frame for active forms of quotdetonatedquot
figure 8 case frame for noun forms of quotattackquot
figure 9 case frame for passive forms of quotkilledquot
table 1 autoslogts results
slot    cor  mis  mlb  dup  spu  rec  prec
perp     25   31   10   18   84   45    31
victim   44   23   16   24   62   66    47
target   31   22   17   23   66   58    39
instr    16   15    7   17   23   52    52
total   116   91   50   82  235   56    41
table 2 case frame results
slot    cor  mis  mlb  dup  spu  rec  prec
perp     26   30    4   17   71   46    36
victim   38   28   24   12   26   58    50
target   28   25    3   29   48   53    53
instr    17   14    2   19    8   55    78
total   109   97   33   77  153   53    50
notes to the evaluation we decided to review the top patterns but continued down the list until there were no more ties the blind test set consisted of relevant texts and 25 irrelevant texts from each of the tst3 and tst4 test sets our rationale for scoring coreferent phrases as duplicates instead of spurious is that the extraction pattern or case frame was instantiated with a reference to the correct answer in other words the pattern did the right thing resolving coreferent phrases to produce the best answer is a problem for subsequent discourse analysis which is not addressed by the work presented here one caveat is that the muc4 answer keys contain some quotoptionalquot answers we scored these as correct if they were extracted but they were never scored as missing which is how the quotoptionalquot items were scored in muc4 note that the number of possible extractions can vary depending on the output of the system we reimplemented autoslogts to use a different sentence analyzer so these results differ slightly from previously reported results
conceptual natural language processing typically involves case frame instantiation to recognize events and role objects in textfor example an nlp system designed for a business domain might use case frames to recognize business activities such as mergers acquisitions or joint venturesthe case frames would contain slots for thematic roles that are associated with each eventfor example case frames for business activities might contain slots for the agents and the objects unfortunately acquiring a good set of case frames for a domain can be a major undertakingcase frames are often lexically indexed so that each case frame is tailored for a specific set of linguistic expressions and their expectationsfor example one case frame might be activated by the phrase quotjoint venturequot and contain slots to recognize the partner companies and objects of the joint venture a different case frame might be activated by the word quotacquisitionquot and contain slots to recognize the agent and the object of the acquisitiondevising the right set of role assignments for a case frame can be surprisingly difficultdetermining the necessary thematic roles for an event is relatively straightforward but anticipating how they will be manifested syntactically can be trickyfor example consider some of the manually defined case frames that were used to recognize terrorist events in the umass muc4 system the attack case frame shows a very common situation where multiple conceptual roles map to the same syntactic rolewhen
quotattackedquot is used as a passive verb the subject may be either a victim or a physical target and the object of the preposition quotbyquot may be the agent or instrumentit is easy for a person to miss one of these possibilities when defining the case frame manuallythe accusation case frame shows that the same conceptual role can be filled by multiple syntactic rolesfor example the person accused of a crime may be the direct object of quotblamedquot or may be the object of the preposition quotonquot the sabotage case frame illustrates that a multitude of prepositional arguments may be necessary for some case framesprepositional arguments are especially difficult for a person to anticipate when defining case frames by handit is virtually impossible for a person to correctly and completely anticipate all of the arguments that are necessary for a large set of case frames for a domainomitting an important argument will result in the failure to recognize role objects in certain syntactic constructionsin practice people often turn to the corpus to look for argument structures that they might have missedfor example the umassmuc4 terrorism case frames were developed by applying an initial set of case frames to hundreds of sample texts and looking for places where the case frames failed to recognize desired informationbut this approach is extremely timeconsuming unless the answers are known in advance which is unrealistic for most applicationsit should be possible however to learn case frame structures automatically from a text corpustoward this end we have been developing a corpusbased approach to conceptual case frame acquisitionour approach builds upon earlier work on corpusbased methods for generating extraction patterns and semantic lexicons our new system constructs conceptual case frames by learning semantic preferences for extraction patterns and merging syntactically compatible patterns into more complex structuresthe resulting case frames can have slots for multiple role objects and each slot has a set of learned selectional restrictions for its role objectthe first section of this paper begins with background about autoslogts a corpusbased system for generating extraction patterns automatically and the extraction patterns that it generatesthe following section presents a new corpusbased algorithm that uses the extraction patterns as a building block for constructing conceptual case frame structureswe then show several examples of case frames that were generated automatically using this methodfinally we present experimental results that compare the performance of the case frames with the extraction patternsour results show that the conceptual case frames produce substantially fewer false hits than the extraction patternsin the past few years several systems have been developed to generate structures for information extraction automaticallyhowever these systems usually need special training resources that are expensive to obtainone of the first such systems was autoslog which generates extraction patterns from annotated textthe patterns produced by autoslog achieved 98 of the performance of handcrafted extraction patterns but autoslog requires a training corpus that is manually tagged with domainspecific annotationsanother early system palka requires domainspecific frames with keyword lists crystal requires an annotated training corpus rapier requires filled templates and liep requires keywords and annotated training examplespalka and crystal also require semantic lexicons while liep 
uses domainspecific concept recognizersautoslogts is a derivative of autoslog that was designed to obviate the need for special training dataautoslogts generates extraction patterns using only a quotpreclassifiedquot training corpus one set of texts that are relevant to the domain and one set of texts that are irrelevantthe texts do not need to be annotated in any wayautoslogts generates the same simple extraction patterns that autoslog generateseach pattern is activated by a keyword in a specific linguistic contextfor example one extraction pattern may be triggered by the word quotmurderedquot in passive verb constructions while a different extraction pattern may be triggered by quotmurderedquot in active verb constructionseach pattern extracts information from a syntactic constituent in the current clause the subject the direct object or a prepositional phraseautoslogts generates extraction patterns by making two passes over the corpusin the first pass autoslogts uses autoslog heuristics in an exhaustive fashion to generate a set of patterns that collectively extract every noun phrase in the corpusin the second pass autoslogts computes statistics to determine which extraction patterns are most strongly correlated with the relevant training textsthe patterns are ranked so that those most strongly associated with the domain appear at the topfigure 1 shows the top 20 extraction patterns produced by autoslogts for the muc4 terrorism domain the ranked list is then presented to a human to decide which patterns should be keptfor example the pattern quot explodedquot should be retained because it is likely to extract relevant information about bombingshowever the pattern quot saidquot should be discarded because it is not likely to extract information about terrorism and will probably extract a lot of irrelevant informationthe human reviewer assigns a conceptual role to each accepted pattern to characterize its extractionsfor example the pattern quot was murderedquot would be assigned the extraction patterns learned by autoslogts have two serious limitationsfirst each pattern extracts only one item which causes the output to be artificially fragmentedfor example the sentence quotguerrillas kidnapped the mayor in bogotaquot produces three extractions each in a separate structurethis fragmented representation causes unnecessary work for subsequent components that need to piece the information back togethersecond the patterns do not include semantic constraints so they produce many spurious extractionstheoretically conceptual case frames should overcome both of these limitationsmultislot case frames will allow several role objects associated with the same event to be instantiated as part of the same structurethis produces a more coherent representation which is more natural for subsequent event or discourse processingfurthermore if each slot has selectional restrictions associated with its legal role objects then the case frames should produce fewer false hits in the next section we describe a corpusbased algorithm that constructs conceptual case frames empirically by learning semantic preferences for each extraction pattern and using these preferences to assign conceptual roles automaticallyextraction patterns with compatible syntactic constraints are then isemantic constraints could be associated with the conceptual roles assigned by the human reviewer but our goal is to assign both the conceptual roles and selectional restrictions automatically merged to produce multislot case frames with 
selectional restrictionsthe conceptual case frames should be more reliable at identifying relevant information and the case frames can instantiate multiple role objects in a single structure to simplify subsequent discourse processingthe algorithm for building conceptual case frames begins with extraction patterns and a semantic lexicon for the domainthe semantic lexicon is a dictionary of words that belong to relevant semantic categorieswe used autoslogts to generate the extraction patterns and a corpusbased algorithm to generate the semantic lexicon2 the corpusbased algorithm that we used to build the semantic lexicon requires five quotseed wordsquot as input for each semantic category and produces a ranked list of words that are statistically associated with each categoryfirst the algorithm looks for all sentences in which a seed word is used as the head noun of a noun phrasefor each such occurrence of a seed word the algorithm collects a small context window around the seed wordthe context window consists of the closest noun to the left of the seed word and the closest noun to its rightthe context windows for all seed words that belong to the same category are then combined and each word is assigned a category scorethe category score is the conditional probability that the word appears in a category contextthe words are ranked by this score and the top five are dynamically added to the seed word listthis bootstrapping process dynamically grows the seed word list so that each iteration produces a larger category contextafter several iterations the final list of ranked words usually contains many words that belong to the category especially near the topthe ranked list is presented to a user who scans down the list and removes any words that do not belong to the categoryfor more details of this algorithm see a flowchart for the case frame generation process appears in figure 2autoslogts produces a ranked list of extraction patterns and our semantic lexicon generator produces a ranked list of words for each categorygenerating these lists is fully automatic but a human must review them to decide which extraction patterns and category words to keepthis is the only part of the process that involves human interactionnext the extraction patterns are applied to the texts to generate a semantic profile for each patternthe semantic profile shows the semantic categories that were extracted by each pattern based on the head noun of each extractionfigure 3 shows the semantic profile for the pattern quotattack on quot pfreq is the number of times that the extraction pattern fired sfreq is the number of times that the pattern extracted the given semantic category and prob is the estimated probability of the pattern extracting the given semantic category note that many extractions will not be labeled with any semantic category if the head noun is unknown figure 3 shows that attacks are often carried out on buildings civilians dates government officials locations military people and vehiclesit seems obvious that attacks will occur on people and on physical targets but a person might not realize that attacks will also occur on dates and on locations this example shows how the corpusbased approach can identify semantic preferences that a person might not anticipatealso note that the semantic profile shows no instances of attacks on terrorists or weapons which makes sense in this domainthe semantic profile is used to select semantic preferences that are strong enough to become selectional restrictionswe use 
the following formula to identify strong semantic preferences the first test selects semantic categories that are extracted with high frequency under the assumption that this reflects a real association with the categorythe second case selects semantic categories that represent a relatively high percentage of the extractions even though the frequency might be low in our experiments we chose f13 f22 and p01we used fairly lenient criteria because patterns can often extract several types of objects that belong to different semantic categories and many extractions contain unknown wordsalso remember that the semantic lexicon is reliable because it was reviewed by a person so it is usually meaningful when a pattern extracts a semantic category even oncethe thresholds are needed only to eliminate noise which can be caused by misparsed sentences or polysemous wordsthe semantic preferences are used to assign conceptual roles to each extraction patternat this point one additional piece of input is needed a list of conceptual roles and associated semantic categories for the domainthe conceptual roles identify the types of information that need to be recognizedfigure 4 shows the conceptual roles used for the terrorism domaineach extraction pattern is expanded to include a set of conceptual roles based on its semantic preferencesthese conceptual roles are assigned automatically based on a pattern semantic profilethis process eliminates the need for a human to assign roles to the extraction patterns by hand as had been necessary when using autoslog or autoslogts by themselvesfor example the pattern quotmachinegunned quot had strong semantic preferences for building civilian location and vehicle so it was expanded to have three conceptual roles with four selectional restrictionsthe expanded extraction pattern for quotmachinegunned quot is only semantic categories that were associated with a pattern are included as selectional restrictionsfor example the govofficial category also represents possible terrorism victims but it was not strongly associated with the patternour rationale is that an individual pattern may have a strong preference for only a subset of the categories that can be associated with a rolefor example the pattern quot was ambushedquot showed a preference for vehicle extractions but not building extractions which makes sense because it is hard to imagine ambushing a buildingincluding only vehicle as its selectional restriction for targets might help eliminate incorrect building extractionsone could argue that this pattern is not likely to find building extractions anyway so the selectional restriction will not matter but the selectional restriction might help filter out incorrect extractions due to misparses or metaphor ultimately it is an empirical question whether it is better to include all of the semantic categories associated with a conceptual role or notfinally we merge the expanded extraction patterns into multislot case framesall extraction patterns that share the same trigger word and compatible syntactic constraints are merged into a single structurefor example we would merge all patterns triggered by a specific verb in its passive voicefor example the patterns quot was kidnappedquot quotwas kidnapped by quot and quotwas kidnapped in quot would be merged into a single case framesimilarly we would merge all patterns triggered by a specific verb in its active voicefor example we would merge patterns for the active form of quotdestroyedquot that extract the subject of 
quotdestroyedquot its direct object and any prepositional phrases that are associated with itwe also merge syntactically compatible patterns that are triggered by the same noun or by the same infinitive verb structure when we merge extraction patterns into a case frame all of the slots are simply unioned togetherin this section we show several examples of case frames that were generated automatically by our systemfigure 5 shows a simple case frame triggered by active forms of the verb quotambushedquotthe subject is extracted as a perpetrator and has a selectional restriction of terroristthe direct object is extracted as a target and has a selectional restriction of vehiclenote that the case frame does not contain a victim slot even though it is theoretically possible to ambush peopleduring training the quotambushed quot pattern extracted 13 people 11 of whom were recognized as militarypeoplesince our domain roles only list civilians and government officials as legitimate terrorism victims3 a victim slot was not createdthis example shows how the case frames are tailored for the domain empiricallythe case frame in figure 7 illustrates how a semantic category can show up in multiple placesthis case frame will handle phrases like quotthe guerrillas detonated a bombquot as well as quotthe bomb detonatedquotboth constructions are very common in the training corpus so the system added slots for both possibilitiesit would be easy for a human to overlook some of these variations when creating case frames by handthe case frame in figure 8 is activated by the noun quotattackquot and includes slots for a variety of prepositional phrasesthe same preposition can recognize different types of information and the same role can be filled by different prepositions this example again shows the power of corpusbased methods to identify common constructions empiricallyanticipating all of these prepositional arguments would be difficult for a persona disadvantage of this automated method is that inappropriate slots sometimes end up in the case framesfor example figure 9 shows a case frame that is activated by passive forms of the verb quotkilledquotsome of the slots are correct the subject is assigned to the victim slot and objects of the preposition quotbyquot are assigned to the perpetrator and instrument slotshowever the remaining slots do not make sensethe location slot is the result of polysemy many person names are also location names such as quotfloresquotthe date slot was produced by incorrect parses of date expressionsthe perpetrator and victim slots were caused by incorrect role assignmentsthe list of domain roles assumes that terrorists are always perpetrators and civilians are always victims but of course this is not trueterrorists can be killed and civilians can be killersthe previous example illustrates some of the problems that can occur when generating case frames automaticallycurrently we are assuming that each semantic category will be uniquely associated with a conceptual role which may be an unrealistic assumption for some domainsone avenue for future work is to develop more sophisticated methods for mapping semantic preferences to conceptual rolesone could also have a human review the case frames and manually remove inappropriate slotsfor now we chose to avoid additional human interaction and used the case frames exactly as they were generatedthe purpose of the selectional restrictions is to constrain the types of information that can be instantiated by each slotconsequently we hoped that the 
case frames would be more reliably instantiated than the extraction patterns thereby producing fewer false hitsto evaluate the case frames we used the same corpus and evaluation metrics as previous experiments with autoslog and autoslogts so that we can draw comparisons between themfor training we used the 1500 muc4 development texts to generate the extraction patterns and the semantic lexiconautoslogts generated 44013 extraction patterns in its first passafter discarding the patterns that occurred only once the remaining 11517 patterns were applied to the corpus for the second pass and ranked for manual reviewwe reviewed the top 2168 patterns5 and kept 306 extraction patterns for the final dictionarywe built a semantic lexicon for nine categories associated with terrorism building civilian govofficial militarypeople location terrorist date vehicle weaponwe reviewed the top 500 words for each categoryit takes about 30 minutes to review a category assuming that the reviewer is familiar with the domainour final semantic dictionary contained 494 wordsin total the review process required approximately 6 personhours 15 hours to review the extraction patterns plus 45 hours to review the words for 9 semantic categoriesfrom the extraction patterns and semantic lexicon our system generated 137 conceptual case framesone important question is how to deal with unknown words during extractionthis is especially important in the terrorism domain because many of the extracted items are proper names which cannot be expected to be in the semantic lexiconwe allowed unknown words to fill all eligible slots and then used a precedence scheme so that each item was instantiated by only one slotprecedence was based on the order of the roles shown in figure 4this is not a very satisfying solution and one of the weaknesses of our current approachhandling unknown words more intelligently is an important direction for future researchwe compared autoslogts extraction patterns with the case frames using 100 blind texts6 from the muc4 test setthe muc4 answer keys were used to score the outputeach extracted item was scored as either correct mislabeled duplicate or spuriousan item was correct if it matched against the answer keysan item was mislabeled if it matched against the answer keys but was extracted as the wrong type of object an item was a duplicate if it was coreferent with an item in the answer keyscorrect items extracted more than once were scored as duplicates as well as correct but underspecified extractions such as quotkennedyquot instead of quotjohn f kennedyquot7 an item was spurious if it did not appear in the answer keysall items extracted from irrelevant texts were spuriousfinally items in the answer keys that were not extracted were counted as missingcorrect missing equals the total number of items in the answer keys8 table 1 shows the results8 for autoslogts extraction patterns and table 2 shows the results for the case frameswe computed recall as correct i and precision as i the extraction patterns and case frames achieved similar recall results although the case frames missed seven correct extractionshowever the case frames produced substantially fewer false hits producing 82 fewer spurious extractionsnote that perpetrators exhibited by far the lowest precisionthe reason is that the perpetrator slot received highest precedence among competing slots for unknown wordschanging the precedence scheme produces a bubble effect where many incorrect extractions shift to the primary default categorythe case 
frames therefore have the potential for even higher precision if the unknown words are handled betterexpanding the semantic lexicon is one option and additional work may suggest ways to choose slots for unknown words more intelligentlywe have shown that conceptual case frames can be generated automatically using unannotated text as input coupled with a few hours of manual reviewour results for the terrorism domain show that the case frames achieve similar recall levels as the extraction patterns but with substantially fewer false hitsour results are not directly comparable to the muc4 results because the muc4 systems contained additional components such as domainspecific discourse analyzers that resolved coreferent noun phrases merged event descriptions and filtered out irrelevant informationthe work presented here only addresses the initial stage of information extractionhowever in previous work we showed that autoslogts achieved performance comparable to autoslog which performed very well in the muc4 evaluation since the conceptual case frames achieved comparable recall and higher precision than autoslogts extraction patterns our results suggest that the case frames performed well relative to previous work on this domainseveral other systems learn extraction patterns that can also be viewed as conceptual case frames with selectional restrictions and crystal the case frames learned by our system are not necessarily more powerful then those generated by other systemsthe advantage of our approach is that it requires no special training resourcesour technique requires only preclassified training texts and a few hours of manual filtering to build the intermediate dictionariesgiven preclassified texts it is possible to build a dictionary of conceptual case frames for a new domain in one dayanother advantage of our approach is its highly empirical nature a corpus often reveals important patterns in a domain that are not necessarily intuitive to peopleby using corpusbased methods to generate all of the intermediate dictionaries and the final case frame structures the most important words role assignments and semantic preferences are less likely to be missedour empirical approach aims to exploit the text corpus to automatically acquire the syntactic and semantic role assignments that are necessary to achieve good performance in the domain
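The case-frame construction described above (select strong semantic preferences from a pattern's profile, map them to conceptual roles, then merge syntactically compatible patterns) can be summarised in a short sketch. This is a reading of the prose, not the authors' code: the thresholds F1=3, F2=2 and P=0.1 are how I interpret the garbled "f13 f22 and p01", the domain-role table is a fragment of figure 4 as I reconstruct it from the text, and all data structures and names are invented for illustration.

```python
# Sketch of the case-frame construction step, assuming a preference is kept when
#   sfreq >= F1   or   (sfreq >= F2 and prob >= P)
# with prob estimated as sfreq / pfreq.
F1, F2, P = 3, 2, 0.1

# Fragment of the terrorism domain roles (figure 4), as read from the text.
DOMAIN_ROLES = {
    "perpetrator": {"terrorist"},
    "victim": {"civilian", "govofficial"},
    "target": {"building", "vehicle"},
    "instrument": {"weapon"},
    "location": {"location"},
}

def strong_preferences(profile, pfreq):
    """profile maps semantic category -> sfreq for one extraction pattern."""
    return {cat for cat, sfreq in profile.items()
            if sfreq >= F1 or (sfreq >= F2 and sfreq / pfreq >= P)}

def assign_roles(categories):
    """Expand a pattern's preferences into conceptual roles with restrictions."""
    return {role: cats & categories
            for role, cats in DOMAIN_ROLES.items() if cats & categories}

def merge(patterns):
    """Union the slots of patterns that share a trigger word and voice."""
    frames = {}
    for p in patterns:
        frame = frames.setdefault((p["trigger"], p["voice"]), {})
        for syn_slot, roles in p["slots"].items():
            for role, restr in roles.items():
                frame.setdefault((syn_slot, role), set()).update(restr)
    return frames

# Illustrative input: "machine-gunned <dobj>" with the preferences quoted above.
profile = {"civilian": 4, "building": 3, "location": 3, "vehicle": 2, "date": 1}
cats = strong_preferences(profile, pfreq=20)
pattern = {"trigger": "machine-gunned", "voice": "active",
           "slots": {"dobj": assign_roles(cats)}}
print(cats)
print(merge([pattern]))
```

Run as is, the sketch keeps the four categories quoted for quotmachinegunned quot and expands them into three conceptual roles with four selectional restrictions, matching the expansion described in the text; merging patterns such as quot was kidnappedquot, quotwas kidnapped by quot and quotwas kidnapped in quot would simply union their slots under the key (quotkidnappedquot, passive).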
W98-1106
an empirical approach to conceptual case frame acquisitionconceptual natural language processing systems usually rely on case frame instantiation to recognize events and role objects in textbut generating a good set of case frames for a domain is timeconsuming tedious and prone to errors of omissionwe have developed a corpusbased algorithm for acquiring conceptual case frames empirically from unannotated textour algorithm builds on previous research on corpusbased methods for acquiring extraction patterns and semantic lexiconsgiven extraction patterns and a semantic lexicon for a domain our algorithm learns semantic preferences for each extraction pattern and merges the syntactically compatible patterns to produce multislot case frames with selectional restrictionsthe case frames generate more cohesive output and produce fewer false hits than the original extraction patternsour system requires only preclassified training texts and a few hours of manual review to filter the dictionaries demonstrating that conceptual case frames can be acquired from unannotated text without special training resourcesour conceptual case frame acquisition project extraction patterns a domain semantic lexicon and a list of conceptual roles and associated semantic categories for the domain are used to produce multipleslot case frames with selectional restrictions
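The evaluation discussion above reports recall and precision, but the formulas themselves were lost in extraction ("we computed recall as correct i and precision as i"). The snippet below is one reconstruction that reproduces the totals reported in tables 1 and 2; the exact definitions, in particular that duplicates count toward precision while missing items count only against recall, are my inference from the numbers rather than a statement from the text.

```python
def recall(cor, mis):
    # missing = items in the answer keys that were never extracted
    return cor / (cor + mis)

def precision(cor, mlb, dup, spu):
    # duplicates count as correct hits; mislabeled and spurious as errors
    return (cor + dup) / (cor + dup + mlb + spu)

rows = {
    # totals from tables 1 and 2: cor, mis, mlb, dup, spu
    "AutoSlog-TS patterns": (116, 91, 50, 82, 235),
    "conceptual case frames": (109, 97, 33, 77, 153),
}
for name, (cor, mis, mlb, dup, spu) in rows.items():
    print(name,
          f"R={recall(cor, mis):.2f}",                 # 0.56 and 0.53
          f"P={precision(cor, mlb, dup, spu):.2f}")    # 0.41 and 0.50
```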
edgebased bestfirst chart parsing bestfirst probabilistic chart parsing attempts to parse efficiently by working on edges that are judged quotbestquot by some probabilistic figure of merit recent work has used probabilistic contextfree grammars to assign probabilities to constituents and to use these probabilities as the starting point for the fom this paper extends this approach to using a probabilistic fom to judge edges thereby giving a much finergrained control over parsing effort we show how this can be accomplished in a particularly simple way using the common idea of binarizing the pcfg the results obtained are about a facof twenty improvement over the best results that is our parser achieves equivalent results using one twentieth the number of edges furthermore we show that this improvement is obtained with parsing precision and recall levels superior to those achieved by exhaustive parsing finding one parses for a sentence according to a contextfree grammar requires searchfortunately there are well known 0 algorithms for parsing where n is the length of the sentenceunfortunately for large grammars and longish sentences even 0 looks pretty bleakone wellknown 0 parsing method is chart parsingin this approach one maintains an agenda of items remaining to be processed one of which is processed during each iterationas each item is pulled off the agenda it is added to the chart and used to extend and create additional itemsin quotexhaustivequot chart parsing one removes items from the agenda in some relatively simple way and continues to do so until nothing remainsa commonly discussed alternative is to remove the constituents from the agenda according to a figure of merit the idea is that the fom selects quotgoodquot items to be processed leaving the quotbadquot ones the ones that are not in fact part of the correct parse sitting on the agendawhen one has a completed parse or perhaps several possible parses one simply stops parsing leaving items remaining on the agendathe time that would have been spent processing these remaining items is time saved and thus time earnedin our work we have found that exhaustively parsing maximum40word sentences from the penn ii treebank requires an average of about 12 million edges per sentencenumbers like this suggest that any approach that offers the possibility of reducing the work load is well worth pursuing a fact that has been noted by several researchersearly on kay suggested the use of the chart agenda for this purposemore recently the statistical approach to language processing and the use of probabilistic contextfree grammars has suggested using the pcfg probabilities to create a fombobrow and chitrao and grishman introduced bestfirst pcfg parsing the approach taken heresubsequent work has suggested different foms built from pcfg probabilities probably the most extensive comparison of possible metrics for bestfirst pcfg parsing is that of caraballo and charniak they consider a large number of foms and view them as approximations of some quotidealquot fomof these they recommend one as the best of the lotin this paper we basically adopt both their framework and their recommended fomthe next section describes their work in more detail besides cc the work that is most directly comparable to ours is that of goodman and ratnaparki goodman uses an fom that is similar to that of cc but one that should in general be somewhat more accuratehowever both goodman and ratnaparki work assumes that one is doing a beam search of some sort rather than a bestfirst 
search and their fom are unfortunately tied to their frameworks and thus cannot be adopted herewe briefly compare our results to theirs in section 5as noted our paper takes off from that of cc and uses the same fomthe major difference is simply that our parser uses the fom to rank edges rather than simply completed constituents as was done by ccwhat is interesting about our approach is that such a seemingly simple change can produce rather dramatic resultsrather than the thousands of edges required by cc the parser presented here requires hundreds or even if one is willing to pay a small price in accuracy tensin the approach taken in cc only completed edges ie constituents are entered into the agenda incomplete edges are always processed as soon as they are constructedat each iteration the constituent with the highest figure of merit is removed from the agenda added to the chart and used to extend current partially completed constituentsthus we characterize their work as constituentbased bestfirst chart parsingcc take as an quotidealquot fom the quantity tohere nj is a constituent of type i that spans the constituents from j up to but not including k and tom are the n partsofspeech of the sentencenote that cc simplify parsing by assuming that the input is a sequence of tags not wordswe make the same assumption in this paperthus taking p as an fom says that one should work on the constituent that is most likely to be correct given the tags of the sentenceas p can only be computed precisely after a full parse of the sentence cc derive several approximations in each case starting from the well known equation for p in terms of the inside and outside probabilities 3 and where fi and a are defined as follows informally this can be obtained by approximating the outside probability a in equation 1 with a bitag estimateof the five terms in equation 4 two can be directly estimated from training data the quotboundary statisticsquot p and p the tag sequence probabilitiy in the denominator is approximated using a bitag approximation the basic algorithm then is quite simpleone uses the standard chartparsing algorithm except at each iteration one takes from the agenda the constituent that maximizes the fom described in equation 4there are however two minor complexities that need to be notedthe first relates to the inside probability 0cc approximate it with the sum of the probabilities of all the parses for nlk found at that point in the parsethis in turn requires a somewhat complicated scheme to avoid repeatedly reevaluating equation 4 whenever a new parse is foundin this paper we adopt a slightly simpler methodwe approximate 3 by the most probable parse for nik rather than the sum of all the parses j we justify this on the grounds that our parser eventually returns the most probable parse so it seems reasonable to base our metric on its valuethis also simplifies updating 13 when new parses are found for nkour algorithm compares the probability of the new parse to the best already found for ni 4if the old one is higher nothing need be doneif the new one is higher it is simply added to the agendathe second complexity has to do with the fact that in equation 4 the probability of the tags t3k are approximated using two different distributions once in the numerator where we use the pcfg probabilities and once in the denominator where we use the bitag probabilitiesone fact noted by cc but not discussed in their paper is that typically the bitag model gives higher probabilities for a tag sequence than does the 
pcfg distributionfor any single tag t3 the difference is not much but as we use equation 4 to compute our fom for larger constituents the numerator becomes smaller and smaller with respect to the denominator effectively favoring smaller constituentsto avoid this one needs to normalize the two distributions to produce more similar resultswe have empirically measured the normalization factor and found that the bitag distribution produces probabilities that are approximately 13 times those produced by the pcfg distribution on a perword basiswe correct for this by making the pcfg probability of a known tag ri 1this has the effect of multiplying the inside probability 3 by rikjin section 4 we show how the behavior of our algorithm changes for is between 10 and 24informally our algorithm differs from the one presented in cc primarily in that we rank all edges incomplete as well as complete with respect to the foma straightforward way to extend cc in this fashion is to transform the grammar so that all productions are either unary or binaryonce this has been done there is no need for incomplete edges at all in bottomup parsing and parsing can be performed using the cky algorithm suitably extended to handle unary productionsone way to convert a pcfg into this form is leftfactoring leftfactoring replaces each production a 4 p where p is the production probability and 101 n 2 with the following set of binary productions in these productions a is the ith element of 3 and 13ij is the subsequence a of 0 but treated as a new single nonterminal in the leftfactored grammar for example the production vp 4 v np np pp 07 leftfactors to the following productions vp 4 np np pp 07 np np np pp 10 np 4 v np 10 it is not difficult to show that the leftfactored grammar defines the same probability distribution over strings as the original grammar and to devise a tree transformation that maps each parse tree of the original grammar into a unique parse tree of the leftfactored grammar of the same probabilityin fact the assumption that all productions are at most binary is not extraordinary since tabular parsers that construct complete parse forests in worstcase 0 time explicitly or implicitly convert their grammars into binary branching form sikkel and nijholt describe in detail the close relationship between the cky algorithm the earley algorithm and a bottomup variant of the earley algorithmthe key observation is that the new nonterminals 01i in a cky parse using a leftfactored grammar correspond to the set of nonempty incomplete edges a 4 01z13z1n in the bottomup variant of the earley algorithm where a 131 is a production of the original grammarspecifically the fundamental rule of chart parsing which combines an incomplete edge a a bo with a complete edge b 7 to yield the edge a a b 0 corresponds to the leftfactored productions ab a b if 3 is nonempty or a a b if i3 is emptythus in general a single new nonterminal in a cky parse using the leftfactored grammar abbreviates several incomplete edges in the earley algorithmfor our experiment we used a treebank grammar induced from sections 221 of the penn wall street journal text with section 22 reserved for testingall sentences of length greater than 40 were ignored for testing purposes as done in both cc and goodman we applied the binarization technique described above to the grammarwe chose to measure the amount of work done by the parser in terms of the average number of edges popped off the agenda before finding a parsethis method has the advantage of being 
platform independent as well as providing a measure of quotperfectionquothere perfection is the minimum number of edges we would need to pop off the agenda in order to create the correct parsefor the binarized grammar where each popped edge is a completed constituent this number is simply the number of terminals plus nonterminals in the sentence on average 475our algorithm includes some measures to reduce the number of items on the agenda and thus the number of popped edgeseach time we add a constituent to the chart we combine it with the constituents on either side of it potentially creating several new edgesfor each of these new edges we check to see if a matching constituent already exists in either the agenda or the chartif there is no match we simply add the new edge to the agendaif there is a match but the old parse of nild is better than the new one we discard the new parsefinally if we have found a better parse of n4 we add the new edge to the agenda removing the old one if it has not already been poppedwe tested the parser on section section 22 of the wsj text with various normalization constants 77 working on each sentence only until we reached the first full parsefor each sentence we recorded the number of popped edges needed to reach the first parse and the precision and recall of that parsethe average number of popped edges to first parse as a function of q is shown in figure 1 and the average precision and recall are shown in figure 2the number of popped edges decreases as ij increases from 10 to 17 then begins to increase againsee section 5 for discussion of these resultsthe precision and recall also decrease as 77 increasesnote that because we used a binarized grammer for parsing the trees produced by the parser contain binarized labels rather than the labels in the treebankin order to calculate precision and recall we quotdebinarizedquot the parser output and then calculated the figures as usualthese results suggest two further questions is the higher accuracy with lower 77 due in part to the higher number of edges poppedif so can we gain accuracy with higher i by letting the parser continue past the first parse to answer these questions we ran the parser again this time allowing it to continue parsing until it had popped 20 times as many edges as needed to reach the first parsethe results of this experiment are shown in figure 3 where we plot 2 as a function of edgesnote that regardless of 7 the accuracy of the parse increases given extra time but that all of the increase is achieved with only 15 to 2 times as many edges as needed for the first parsefor ij between 10 and 12 the highest accuracy is almost the same about 752 but this value is reached with an average of slightly under 400 edges when 77 12 compared to about 650 when 7 10to better understand the experimental results it first behooves us to compare them to those achieved previouslygoodman results are hard to compare against ours because his parser returns more than a singe best parse and because he measures processing time not edgeshowever he does give edgessecond for one of his 2000 4000 6000 8000 10000 parsers and this plus his parsing times suggests that for him edgessentence will measure in the tens of thousands a far cry from our hundredsratnaparki beam search parsing procedure produces higher accuracy results than our pcfg model and achieves this with a beam width of 20unfortunately his paper does not give statistics which can be directly compared with oursthe work by cc is easier to comparein figure 4 we 
reproduce cc results on the percentage of sentences parsed as a function of number of edges usedwe performed the same experiment and our results are included there as wellthis figure makes dramatic the order of magnitude improvement provided by our new scheme but it is not too easy to read numbers off of itsuch numbers are provided in table 1our figures were obtained using ri 12as can be seen our parser requires about one twentieth the number of edges required by ccindeed the low average number of edges to first parse is probably the most striking thing about our resultseven allowing for the fact that considerably more edges must be pushed than are popped the total number of edges required to first parse is quite smallsince the average number of edges required to construct just the test corpus trees is 475 our parsing system considers as few as 3 times as many edges as are required to actually produce the output treealmost as interesting if i is below 14 the precision and recall scores of the first parse are better than those obtained by running the parser to exhaustion even though the probability of the first parses our algorithm returns cannot be higher than that found by the exhaustive versionfurthermore as seen in figure 3 running our parser past the first parse by a small amount produces still more accurate parsesat 150 of the minimum number of edges and i 12 the precisionrecall figures are about 2 above those for the maximum likelihood parsewe have two theories of these phenomonait may be that the fom metric used to select constituents forces our parser to concentrate on edges which are plausible given their surrounding preterminals information which is ignored by the exhaustive maximum likelihood parseralternatively it may be that because our fom causes our parser to prefer edges with a high inside times outside probability it is in fact partially mimicking goodman labelled recall parsing algorithm which does not return the highest probability parse but attempts to maximize labelled bracket recall with the test setfinally it is interesting to note that the minimum number of edges per parse is reached when 77 165 which is considerably larger than the theoretical estimate of 13 given earliernotice that one effect of increasing r is to raise the fom for longer constituentsit may be that on average a partial parse is completed fastest if larger constituents receive more attention since they are more likely to lead quickly to a complete analysis which would be one consequence of the larger than expected n this last hypothesis is also consistent with the observation that average precision and recall sharply falls off when 77 is increased beyond its theoretically optimal value since then the parser is presumably focusing on relatively larger constituents and ignoring other strictly more plausible smaller onesit is worth noting that while we have presented the use of edgebased bestfirst chart parsing in the service of a rather pure form of pcfg parsing there is no particular reason to assume that the technique is so limited in its domain of applicabilityone can imagine the same techniques coupled with more informative probability distributions such as lexicalized pcfgs or even grammars not based upon literal rules but probability distributions that describe how rules are built up from smaller components clearly further research is warrantedbe this as it may the takehome lesson from this paper is simple combining an edgebased agenda with the figure of merit from cc to the best of our 
knowledge this is currently the most effecient parsing technique for pcfg grammars induced from large treebanksas such we strongly recommend this technique to others interested in pcfg parsing
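The binarization step in the parsing paper above (left-factoring every production of length greater than two so that bottom-up CKY-style parsing needs no incomplete edges) is mechanical enough to sketch directly. The code below is one plausible implementation of the transformation as described; the bracketed naming convention for the new prefix nonterminals is mine, and the example is the VP -> V NP NP PP (0.7) production whose factored form came through garbled in the extracted text.

```python
def left_factor(productions):
    """Left-factor a PCFG: each A -> B1 .. Bn (p) with n > 2 becomes
       A -> [B1..Bn-1] Bn (p) plus unit-probability rules that build the
       prefix nonterminals [B1..Bi] -> [B1..Bi-1] Bi."""
    def prefix(symbols):
        return "<" + "_".join(symbols) + ">"   # invented naming scheme

    out = []
    for lhs, rhs, p in productions:
        if len(rhs) <= 2:                      # unary and binary rules unchanged
            out.append((lhs, rhs, p))
            continue
        out.append((lhs, [prefix(rhs[:-1]), rhs[-1]], p))
        for i in range(len(rhs) - 1, 2, -1):
            out.append((prefix(rhs[:i]), [prefix(rhs[:i - 1]), rhs[i - 1]], 1.0))
        out.append((prefix(rhs[:2]), [rhs[0], rhs[1]], 1.0))
    return out

grammar = [("VP", ["V", "NP", "NP", "PP"], 0.7)]
for lhs, rhs, p in left_factor(grammar):
    print(f"{lhs} -> {' '.join(rhs)}  ({p})")
```

This prints VP -> <V_NP_NP> PP (0.7), <V_NP_NP> -> <V_NP> NP (1.0) and <V_NP> -> V NP (1.0). In the factored grammar the prefix symbols play the role of the Earley-style incomplete edges, which is why ranking edges reduces to ranking ordinary constituents of the binarized grammar, and the text notes that the transform leaves the probability of each parse unchanged.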
W98-1115
edgebased bestfirst chart parsingbestfirst probabilistic chart parsing attempts to parse efficiently by working on edges that are judged best by some probabilistic figure of merit recent work has used probabilistic contextfree grammars to assign probabilities to constituents and to use these probabilities as the starting point for the fomthis paper extends this approach to using a probabilistic fom to judge edges thereby giving a much finergrained control over parsing effortwe show how this can be accomplished in a particularly simple way using the common idea of binarizing the pcfgthe results obtained are about a factor of twenty improvement over the best prior results that is our parser achieves equivalent results using one twentieth the number of edgesfurthermore we show that this improvement is obtained with parsing precision and recall levels superior to those achieved by exhaustive parsingwe introduce overparsing as a technique to improve parse accuracy by continuing parsing after the first complete parse tree is found
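The figure-of-merit arithmetic in the parsing paper above came through with its symbols damaged: the normalization constant appears variously as "77", "ij" and "ri", and "13" and "12" are presumably 1.3 and 1.2. The sketch below is my reconstruction of the edge ranking the prose describes, with boundary statistics and an inside probability in the numerator, a bitag approximation of the tag-sequence probability in the denominator, and a per-word normalization constant eta applied to the inside probability. Treat the exact functional form as an assumption based on that prose; the probability tables here are placeholders invented only so the function runs.

```python
# Assumed form of the figure of merit for a constituent N^i spanning tags
# t_j .. t_{k-1}:
#
#   FOM = p(N^i starts | t_{j-1}) * eta**(k-j) * beta(N^i_{j,k}) * p(t_k | N^i ends)
#         ---------------------------------------------------------------------
#                 prod_{l=j..k} p(t_l | t_{l-1})        (bitag approximation)
#
# eta corrects for the bitag model assigning higher probability to a tag string
# than the PCFG does; the text suggests values around 1.0-1.2 give the best
# accuracy and about 1.65 the fewest edges.

def fom(label, j, k, tags, inside, start_stats, end_stats, bitag, eta=1.2):
    num = start_stats[(label, tags[j - 1])] * (eta ** (k - j)) * inside
    num *= end_stats[(tags[k], label)]
    den = 1.0
    for l in range(j, k + 1):
        den *= bitag[(tags[l], tags[l - 1])]
    return num / den

# Placeholder statistics, invented for the example.
tags = ["<s>", "DT", "NN", "VBD", "</s>"]
start_stats = {("NP", "<s>"): 0.3}   # p(an NP starts here | preceding tag)
end_stats = {("VBD", "NP"): 0.4}     # p(following tag | an NP ends here)
bitag = {("DT", "<s>"): 0.4, ("NN", "DT"): 0.5, ("VBD", "NN"): 0.2}

print(fom("NP", 1, 3, tags, inside=0.02,
          start_stats=start_stats, end_stats=end_stats, bitag=bitag))
```

In a best-first parser this score decides which edge to pop from the agenda next; the inside probability beta is maintained incrementally as better analyses of the span are found, as the text describes.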
exploiting diverse knowledge sources via maximum entropy in named entity recognition this paper describes a novel statistical namedentity recognition system built around a maximum entity framework by working within the framework of maximum entropy theory and utilizing a flexible objectbased architecture the system is able to make use of an extraordinarily diverse range of knowledge sources in making its tagging decisions these knowledge sources include capitalization features lexical features features indicating the current section of text and dictionaries of single or multiword terms the purely statistical system contains no handgenerated patterns and achieves a result comparable with the best statistical systems however when combined with other handcoded systems the system achieves scores that exceed the highest comparable scores thusfar published named entity recognition is one of the simplest of the common message understanding tasksthe objective is to identify and categorize all members of certain categories of quotproper namesquot from a given corpusthe specific test bed which will be the subject of this paper is that of the seventh message understanding conference in which the task was to identify quotnamesquot falling into one of seven categories person organization location date time percentage and monetary amountthis paper describes a new system called quotmaximum entropy named entityquot or quotmenequot by working within the framework of maximum entropy theory and utilizing a flexible objectbased architecture the system is able to make use of an extraordinarily diverse range of knowledge sources in making its tagging decisionthese knowledge sources include capitalization features lexical features and features indicating the current section of textit makes use of a broad array of dictionaries of useful single or multiword terms such as first names company names and corporate suffixes and automatically handles cases where words are in more than one dictionaryour dictionaries required no manual editing and were either downloaded from the web or were simply quotobviousquot lists entered by handthis system built from offtheshelf knowledge sources contained no handgenerated patterns and achieved a result which is comparable with that of the best statistical systemsfurther experiments showed that when combined with handcoded systems from nyu the university of manitoba and isoquest inc mene was able to generate scores which exceeded the highest scores thusfar reported by any system on a muc evaluationgiven appropriate training data we believe that this system is highly portable to other domains and languages and have already achieved good results on uppercase englishwe also feel that there are plenty of avenues to explore in enhancing the system performance on englishlanguage newspaper textgiven a tokenization of a test corpus and a set of n tags which define the name categories of the task at hand the problem of named entity recognition can be reduced to the problem of assigning one of 4n 1 tags to each tokenfor any particular tag x from the set of n tags we could be in one of 4 states x_start x_continue x_end and x_uniquein addition a token could be tagged as quototherquot to indicate that it is not part of a named entityfor instance we would tag the phrase jerry lee lewis flew to paris as person_start person_continue person_end other other location_uniquethis approach is essentially the same as the 29 tags of muc7 form the space of quotfuturesquot for a maximum entropy formulation of 
our ne problema maximum entropy solution to this or any other similar problem allows the computation of p for any f from the space of possible futures f for every h from the space of possible histories h a quothistoryquot in maximum entropy is all of the conditioning data which enables you to make a decision among the space of futuresin the named entity problem we could reformulate this in terms of finding the probability of f associated with the token at index tin the test corpus as p p f itest corpus relative to token t the computation of p in me is dependent on a set of quotfeaturesquot which hopefully are helpful in making a prediction about the futurelike most current me modeling efforts in computational linguistics we restrict ourselves to features which are binary functions of the history and futurefor instance one of our features is here quotcurrenttokencapitalizedquot is a binary function which returns true if the quotcurrent tokenquot of the history h has an initial capitalized lettergiven a set of features and some training data the maximum entropy estimation process produces a model in which every feature gi has associated with it a parameter aithis allows us to compute the conditional probability as follows the maximum entropy estimation technique guarantees that for every feature gi the expected value of gi according to the me model will equal the empirical expectation of gi in the training corpusin other words here p is an empirical probability and pme is the probability assigned by the me modelmore complete discussions of me as applied to computational linguistics including a description of the me estimation procedure can be found in and the following are some additional references which are useful as introductions and examples of applications as many authors have remarked though the most useful thing about maximum entropy modeling is that it allows the modeler to concentrate on finding the features that characterize the problem while letting the me estimation routine worry about assigning the relative weights to the featuresmene consists of a set of c and pen l modules which forms a wrapper around a publicly available me toolkit which computes the values of the a parameters of equation 2 from a pair of training files created by menemene flexibility is due to its objectbased treatment of the three essential components of a maximum entropy system histories futures and features history objects in mene act as containers for a list of quothistory viewsquotthe history view classes each represent a different type of information about the history objectwhen the features attempt to determine whether or not they fire on a given history they request an appropriate history view object from the history object and then query the history view object to determine whether their firing conditions are satisfiednote that these history views generally hold information about a limited window around the current tokenif the current token is denoted as wo then our model only holds information about tokens w1 for all history views except the lexical onesfor these views the window is w2 w future objects on the other hand are trivial in that their only piece of data is an integer indicating which of the 29 members of the future space they representfeatures are implemented as binary valued functions which query the history and future objects to determine whether or not they quotfirequotin the following sections we will look at each of mene feature classes in turnwhile all of mene features have 
binaryvalued output the quotbinaryquot features are features whose associated historyview can be considered to be either on or off for a given tokenexamples are quotthe token begins with a capitalized letterquot or quotthe token is a fourdigit numberquotequation i gives an example of a binary featurethe 11 binary historyviews used by mene binary features are very similar to those used in bbn nymbleidentifinder system with two exceptions b when the space on which feature b activates must be a subset of the space for feature a it can be shown that the me model will yield the same results whether a and b are included as features or if and b are featuresconsequently mene allows all features to fire in overlapping casesfor instance in mene the initial cap features activate on the histories quotclintonquot quotibmquot and quotvalujetquot while in nymble the feature would only be active on quotclintonquot because the quotallcapquot feature would take precedence on quotibmquot and an quotinitialandinternalcapquot feature would take precedence on quotvalujetquotto create a lexical history view the tokens at w2 w2 are compared with a vocabulary and their vocabulary indices are recordedfor a given training corpus we define the vocabulary to be all tokens with a count of three or morewords not found in the vocabulary are assigned a distinguished quotunknownquot indexlexical feature example a more subtle feature picked up by mene preceding word is quottoquot and future is quotlocation_uniquequotgiven the domain of the muc7 training data quottoquot is a weak indicator but a real onethis is an example of a feature which mene can make use of but which the constructor of a handcoded system would probably regard as too risky to incorporatethis feature in conjunction with other weak features can allow mene to pick up names that other systems might missas discussed later these features are automatically acquired and the system can attain a very high level of performance using these features alonethis is encouraging since these lexical features are not dependent on any external knowledge source or linguistic intuition and thus are completely portable to new domainsthe new york times articles which constituted the muc7 test and training corpora were composed of six distinct sections including quotdatequot quotpreamblequot and quottextquotsection features activate according to which of these sections the current token is inexample feature if sectionview 9 the feature predicting quototherquot is much weaker in most casesit is only about 26 times as strong as quotorganization_startquot and quotorganization_endquot for instancemultiword dictionaries are a key element of meneeach entry in a mene dictionary consists of a term which is one or more tokens longdictionaries can be casesensitive or not on a dictionarybydictionary basisa preprocessing step summarizes the information in the dictionary on a tokenbytoken basis by assigning to every token one of the following five tags for each dictionary start continue end unique otherie if quotbritish airwaysquot was in our dictionary a dictionary feature would see the phrase quoton british airways flight 962quot as quotother start end other otherquottable 1 lists the dictionaries used by mene in the muc7 evaluationbelow is an example of a dictionary feature if firstnamedictionarynote that similar to the case of overlapping binary features we do not have to worry about words appearing in the dictionary which are commonly used in another senseie we can leave dangerouslooking 
names like quotaprilquot in the firstname dictionary because whenever the firstname feature fires on quotaprilquot the lexical and datedictionary features for quotaprilquot will also fire and assuming that the use of april as quotdatequot exceeded the use of april as person_start or person_unique we can expect that the lexical feature will have a high enough a value to outweigh the firstnamedictionary featurethis was confirmed in our test runs no instance of quotaprilquot was tagged as a name including one case quotthe death of ron brown in april in a similar plane crash quot which could be thought of as somewhat tricky because the month was not followed by a specific datenote that the system is not foolproof if a quotdangerousquot dictionary word appeared in only one dictionary and did not appear often enough in the training corpus to be included in the vocabulary but did appear in the test corpus we would probably mistag itfor nyu official entry in the muc7 evaluation mene took in the output of an enhanced version of the more traditional handcoded quotproteusquot namedentity tagger which we entered in muc6in addition subsequent to the evaluation the university of manitoba and isoquest inc shared with us the outputs of their systems on our training corpora as well as on various test corporathe output sent to us was the standard muc7 output so our collaborators did not have to do any special processing for usthese systems were incorporated into mene as simply three more history views by the following 2 step process the result of all this is that the quotfuturesquot produced by the three external systems become three quotexternal system historiesquot for menehere is an it is important to note that mene has features which predict a different future than the future predicted by the external systemthis can be seen as the process by which mene learns the errors which the external system is likely to makean example of this is that on the evaluation system the feature which predicted person_unique given a tag of person_unique by proteus had only a 76 higher weight than the feature which predicted person_start given person_uniquein other words proteus had a tendency to chop off multiword names at the first wordmene learned this and made it easy to override proteus in this wayin fact an analysis of the differences between the proteus output and the mene proteus output turned up a significant number of instances in which mene extended or contracted name boundaries in this waygiven proper training data mene can pinpoint and selectively correct the weaknesses of a handcoded systemmene currently has no direct ability to learn compound features or quotpatternsquotthe quothistoryquot side of a lexical feature activates based on only a single word for instancea sort of patternlike ability comes into the system from multiple features firing at onceie to predict that quotyorkquot in the name quotnew yorkquot is the end of a location we will have two features firing one predicts location_end when token_i is quotnewquotthe other predicts location_end when tokeno is quotyorkquotnevertheless it is possible that compound features would behave differently from two simultaneously firing quotatomicquot featureswe integrated this into the model in an ad hoc manner for the external system features where we constructed features which essentially query the external system history and the section history simultaneously to determine whether they fireie a particular feature might fire if proteus predicts person_start the 
current section is quotmain body of textquot and the future is quotperson_startquotthis allows mene to assign a lower a to a proteus prediction in the preamble vs a prediction in the main body of textproteus like many handcoded systems is more accurate in the main body of the text than in headlinetype materialwe found that this compound feature gave the system slightly higher performance than we got when we just used section features and external system features separatelyit seems reasonable that adding an ability to handle fully general compound features would improve system performance based on this limited experimentin addition to allowing us to predict futures based on multiword patterns it would also let us use other promising combinations of features such as distinguishing between capitalization in a headline vs in the main body of the textunfortunately this experiment will have to wait until we deploy a more sophisticated method of feature selection as discussed in the next sectionfeatures are chosen by a very simple methodall possible features from the classes we want included in our model are put into a quotfeature poolquotfor instance if we want lexical features in our model which activate on a range of token_2 token2 our vocabulary has a size of v and we have 29 futures we will add 29 lexical features to the poolthe v 1 term comes from the fact that we include all words in the vocabulary plus the unknown wordfrom this pool we then select all features which fire at least three times on the training corpusnote that this algorithm is entirely free of human interventiononce the modeler has selected the classes of features mene will both select all the relevant features and train the features to have the proper weightingswe deviate from this basic algorithm in three ways 1we exclude features which activate on some sort of quotdefaultquot value of a history viewmany history views have some sort of default value which they display for the vast majority of tokensfor instance a firstnamedictionary history view would say that the current token is not a name in over 99 of the casesrather than adding features which activate both when the token in question is and when it is not a first name we only include features which activate when the token is a first namea feature which activated when a token was not a first name while theoretically not harmful would have practical disadvantagesfirst of all the feature would probably be redundant because if the frequency of a future given a firstnamedictionary hit is constrained then the future frequency given a nonhit is also implicitly constrainedsecondly since this feature would fire on nearly every token it would slow down runtime performancefinally while maximum entropy models are designed to handle feature overlap a very high degree of overlap requires more iterations of the maximum entropy estimation routine and can lead to numerical difficulties like the previous heuristic this is based on the idea that features predicting named entities are more useful than features predicting the defaultnote that this method of feature selection would probably break down if we tried to incorporate general compound features into our model as described in the previous sectionthe model currently has about 24000 features when trained on 350 articles of textif we even considered all pairs of features as potential compound features the 0 compound features which we could build from our atomic features would undoubtedly yield an unacceptable slowdown in the model 
performanceclearly a more sophisticated feature selection routine such as the ones in or would be required in this caseafter having trained the features of an me model and assigned the proper weight to each of the features decoding a new piece of text is a fairly simple process the viterbi search is necessary because simply taking the highestprobability future assigned to each token would result in incompatible assignmentsfor instance an assignment of personstart location_end to two consecutive tokens would be invalidthe viterbi search finds the highest probability path in which there are no two tokens in which the second one cannot follow the first as defined by a table of all such invalid transitions mene maximum entropy training algorithm gives it reasonable performance with moderatesized training corpora or few information sources while allowing it to really shine when more training data and information sources are addedtable 2 shows mene performance on the muc7 quotdry runquot corpus which consisted of 25 articles mostly on the topic of aviation disastersall systems shown were trained on 350 articles on the same domain note the smooth progression of the scores as more data is added to the systemalso note that when combined under mene the three weakest systems mene proteus and manitoba outperform the strongest single system isoquestfinally the top score of 9712 from combining all three systems is a very strong resulton a different set of data the muc7 formal run data the accuracy of the two human taggers who were preparing the answer key was tested and it was discovered that one of them had an fmeasure of 9695 and the other of 9760 although we do not have human performance measures on the dry run test set it seems that we have attained a result which is at least competitive with that of a humanwe also did a series of runs to examine how the systems performed with different amounts of training datathese experiments are summarized in table 3note the 9738 allsystems result which we achieved by adding 75 articles from the formalrun test corpus to the basic 350article training datain addition to being an outstanding performance figure this number shows mene responsiveness to good training materiala few other conclusions can be drawn from this datafirst of all mene needs at least 20 articles of tagged training data to get acceptable performance on its ownsecondly there is a minimum amount of training data which is needed for mene to improve an external systemfor proteus and the manitoba system this number seems to be about 80 articles because they show a degradation of performance at 40since the isoquest system was stronger to start with mene required 150 articles to show an improvementnote the anomaly in comparing the 250 and 350 article columnsproteus shows only a very small gain and isoquest shows a deteriorationthese last 100 articles added to the system were tagged by us at nyu and we would humbly guess that we tagged them less carefully than the rest of the data which was tagged by bbn and science applications international corporation mene has also been run against alluppercase dataon this we achieved an fmeasure of 8819 for the meneonly system and 9138 for the mene proteus systemthe latter figure matches the best currently published result on withindomain allcaps dataon the other hand we scored lower on allcaps than bbn identifinder in the muc7 formal evaluation for reasons which are probably similar to the ones discussed in section 9 in the comparison of our mixed case performances 
we have put very little effort into optimizing mene on this type of corpus and believe that there is room for improvement herein another experiment we stripped out all features other than the lexical features and still achieved an fmeasure of 8813since these features do not rely on any external knowledge sources and are automatically generated this result is a strong indicator of men e portabilitythe muc7 formal evaluation involved a shift in topic which was not communicated to the participants beforehandthe training data focused on airline disasters while the test data was on missile and rocket launchesmene fared much more poorly on this data than it did on the withindomain data quoted above achieving an fmeasure of only 8880 for the mene proteus system and 8422 for the meneonly systemwhile 8880 was still the fourth highest score out of the twelve participants in the evaluation we feel that it is necessary to view this number as a crossdomain portability result rather than as an indicator of how the system can do on unseen data within its training domainve believe that if the system had been allowed to train on missilerocket launch articles its performance on these articles would have been much bettermore mene test results and discussion of the formal run can be found in me has been successfully applied to many other tasks in computational linguisticssome recent work for which there are solid comparable benchmarks is the work of adwait ratnaparkhi at the university of pennsylvaniahe has achieved stateofthe art results by applying me to parsing partofspeech tagging and sentenceboundary detection other recent work has applied me to language modeling machine translation and reference resolution me was first applied to named entity recognition at the muc7 conference by and note that partofspeech tagging is in many ways a very similar task to that of namedentity recognitionratnaparkhi tagger is similar to mene in that his features look at the surrounding twoword lexical context but his system makes less use of dictionarieson the other hand his system looks at word suffixes and prefixes in the case of unknown words which is something we have not tried with mene and looks at its own output by looking at its previous two tags when making its decisionwe do this implicitly through our requirement that the futures we output be consistent but we found that an attempt to do this more directly by building a consistency feature directly into the model had no effect on our resultsat the muc7 conference there were two other interesting systems using statistical techniques from the language technology groupuniversity of edinborough and bbn comparisons with the ltg system are difficult since it was a hybrid model in which the text was passed through a fivestage process only three of which involved maximum entropy and over half of the system recall came from the two nonstatistical phasesthe ltg system demonstrated superior performance on the formal run relative to the meneproteus hybrid system but it is not clear whether their advantage came from superior handcoded rules or superior statistical techniques because their system is not as easily broken down into separate components as is meneproteusit is also possible that tighter system integration between the statistical and handcoded components was responsible for some of ltg relative advantage but note that meneproteus appears to have an advantage over ltg in terms of portabilitywe are currently experimenting with porting mene to japanese for instance and 
expect that it could be combined with a preexisting japanese handcoded system but it is not clear that this could be done with the ltg systemnevertheless one of our avenues for future research is to look at tighter multisystem integration methods which will not compromise men e essential portabilitytable 4 gives a comparison of bbn hmmbased identifinder and nyu mene and meneproteus systems on different training and test setswe are not sure why meneproteus was hurt more badly by the evaluationtime switch from aviation disaster articles to missuerocket launch articles but suspect that it may have been due to identifinder greater quantity and quality of training databbn used 790000 words of training data to our 321000the quality advantage may have come from selecting sentences from a larger corpus for their annotators to tag which were chosen so as to increase the variety of training datawhen meneonly and identifinder are compared training on the same number of articles and testing on withindomain data identifinder still has an edgewe speculate that this is due to the dynamic updating of identifinder vocabulary during decoding when person or organization names are recognized which gives the system a sort of longdistance reference resolution which is lacking in menein additionbbn hmmbased system implictly predicts named entities based on consecutive pairs of words rather than based on single words as is done in mene because each type of name has its own bigram language modelin the decoding process the viterbi algorithm chooses the sequence of names which yields the highest joint probability of names words and features associated with each wordin comparing the maximum entropy and hmmbased approaches to named entity recognition we are hopeful that me will turn out to be the better method in the endwe think it is possible that some of identifinder current advantage can be neutralized by simply adding the justmentioned features to meneon the other hand we have a harder time seeing how some of mene strengths can be integrated into an hmmbased systemit is not clear for instance how a wide variety of dictionaries could be added to identifinder or whether the system could be combined with a handcoded system as was done with our system and the one from ltgmene is a very new and we feel still immature systemwork started in october 1997 and the system described above was not in place until midfebruary1998we believe that we can push the score of the meneonly system higher by incorporating longrange referenceresolution on mene outputwe are also missing a large number of acronyms which could be picked up by dynamically building them from entities which mene had tagged elsewhere and then pulling that data in as a new class of featurethe other key element missing from the current system is a set of general compound features which as discussed above would require the use of a more sophisticated feature selection algorithmall three of these elements are present in systems such as isquest and their absence from mene probably explains much of the reason why the meneonly system failed to perform at the stateoftheartwe intend to add all of these elements to mene in the near future to test this hypothesisnevertheless we believe that we have already demonstrated some very useful resultsmene is highly portable as we have already demonstrated with our result on uppercase english text and even in its current state its results are already comparable to that of the only other purely statistical english ne system which we 
are aware of. As shown with our result on running MENE with only the lexical features that it learns from the training corpus, porting MENE can be done with very little effort if appropriate training data is provided; it is not even necessary to provide it with dictionaries to generate an acceptable result. We are working on a port to Japanese NE to further demonstrate MENE's flexibility. However, we believe that the results on combining MENE with other systems are some of the most intriguing. We would hypothesize that, given sufficient training data, any hand-coded system would benefit from having its output passed to MENE as a final step. MENE also opens up new avenues for collaboration, whereby different organizations could focus on different aspects of the problem of NE recognition, with the maximum entropy system acting as an arbitrator. MENE also offers the prospect of achieving very high performance with very little effort. Since MENE starts out with a fairly high base score just on its own, we speculate that a MENE user could construct a hand-coded system which focused only on MENE's weaknesses, while skipping the areas in which MENE is already strong. Finally, one can imagine a user acquiring licenses to several different NE systems, generating some training data, and then combining it all under a MENE-like system. We have shown that this approach can yield performance which is competitive with that of a human tagger. We would like to thank Troy Straszheim for writing the Viterbi search routine used in this work.
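To make the maximum entropy computation described above concrete, the following minimal Python sketch shows how a trained model of this kind turns binary feature functions g_i(h, f) and their weights alpha_i into a conditional distribution over futures, in the spirit of Equation 2. The single feature, its weight, and the reduced future set are illustrative assumptions rather than MENE's actual feature set; real weights would come from the ME estimation routine.

```python
# Illustrative sketch (not MENE's code) of the conditional ME computation:
# p(f | h) is proportional to the product of alpha_i over every binary
# feature g_i(h, f) that fires, normalized over the space of futures.

FUTURES = ["person_start", "person_continue", "person_end",
           "person_unique", "other"]          # reduced future set for the demo

def current_token_capitalized(history, future):
    # fires when the current token starts with a capital letter and the
    # model is considering the future person_start (cf. Equation 1)
    return history["w0"][0].isupper() and future == "person_start"

FEATURES = [current_token_capitalized]
ALPHAS = [3.0]                                # hypothetical trained weight

def conditional_probs(history):
    """Return p(f | h) for every future f in FUTURES."""
    raw = {}
    for f in FUTURES:
        product = 1.0
        for g, alpha in zip(FEATURES, ALPHAS):
            if g(history, f):
                product *= alpha
        raw[f] = product
    z = sum(raw.values())                     # normalizing constant
    return {f: v / z for f, v in raw.items()}

print(conditional_probs({"w0": "Clinton"}))
# person_start gets 3/7 of the probability mass; the other futures get 1/7 each.
```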
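The dictionary preprocessing step described earlier — summarizing each multi-word dictionary on a token-by-token basis with the tags start/continue/end/unique/other — might be sketched as follows. The longest-match-first policy and the toy dictionary are assumptions for illustration; the paper does not specify how overlapping entries are resolved.

```python
def dictionary_view(tokens, dictionary, case_sensitive=False):
    """Tag every token as start/continue/end/unique/other with respect to a
    (possibly multi-word) dictionary, longest match first.  For example, with
    the dictionary {"british airways"}, the phrase "on british airways flight
    962" becomes ['other', 'start', 'end', 'other', 'other']."""
    norm = (lambda s: s) if case_sensitive else str.lower
    entries = {tuple(norm(e).split()) for e in dictionary}
    max_len = max((len(e) for e in entries), default=1)
    words = [norm(t) for t in tokens]
    tags = ["other"] * len(tokens)
    i = 0
    while i < len(tokens):
        for length in range(min(max_len, len(tokens) - i), 0, -1):
            if tuple(words[i:i + length]) in entries:
                if length == 1:
                    tags[i] = "unique"
                else:
                    tags[i] = "start"
                    for j in range(i + 1, i + length - 1):
                        tags[j] = "continue"
                    tags[i + length - 1] = "end"
                i += length
                break
        else:               # no dictionary entry matched at position i
            i += 1
    return tags

print(dictionary_view("on british airways flight 962".split(),
                      {"british airways"}))
```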
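The decoding step described earlier is a Viterbi search over the per-token future distributions that rules out incompatible adjacent tags. The sketch below is a rough illustration, not the actual MENE decoder, and the invalid-transition set shown is only a tiny illustrative subset of the full table.

```python
import math

def viterbi_decode(per_token_probs, invalid_transitions):
    """Return the highest-probability tag sequence in which no adjacent pair
    of tags appears in `invalid_transitions`.  `per_token_probs` is a list of
    {tag: p(tag | history)} dictionaries, one per token."""
    # best[tag] = (log-probability of the best path ending in tag, that path)
    best = {t: (math.log(max(p, 1e-12)), [t])
            for t, p in per_token_probs[0].items()}
    for probs in per_token_probs[1:]:
        new_best = {}
        for tag, p in probs.items():
            options = [(score + math.log(max(p, 1e-12)), path + [tag])
                       for prev, (score, path) in best.items()
                       if (prev, tag) not in invalid_transitions]
            if options:
                new_best[tag] = max(options)
        best = new_best
    return max(best.values())[1]

# Tiny illustrative subset of the invalid-transition table: a started name
# must be continued or ended, so person_start cannot be followed by "other".
INVALID = {("person_start", "other"), ("person_start", "location_unique")}

probs = [{"person_start": 0.6, "other": 0.4},
         {"person_end": 0.5, "other": 0.5}]
print(viterbi_decode(probs, INVALID))   # -> ['person_start', 'person_end']
```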
W98-1118
Exploiting Diverse Knowledge Sources via Maximum Entropy in Named Entity Recognition. This paper describes a novel statistical named-entity recognition system built around a maximum entropy framework. By working within the framework of maximum entropy theory and utilizing a flexible object-based architecture, the system is able to make use of an extraordinarily diverse range of knowledge sources in making its tagging decisions. These knowledge sources include capitalization features, lexical features, features indicating the current section of text, and dictionaries of single- or multi-word terms. The purely statistical system contains no hand-generated patterns and achieves a result comparable with that of the best statistical systems. However, when combined with other hand-coded systems, the system achieves scores that exceed the highest comparable scores published thus far.
a statistical approach to anaphora resolution this paper presents an algorithm for identifying pronominal anaphora and two experiments based upon this algorithm we incorporate multiple anaphora resolution factors into a statistical framework specifically the distance between the pronoun and the proposed antecedent gendernumberanimaticity of the proposed antecedent governing head information and noun phrase repetition we combine into a single probability that enables to identify the referent our first experiment shows the relative contribution of each source of information and demonstrates a success rate 829 for all sources combined experiment investigates a method for unsupervised learning of gendernumberanimaticity information we present some experiments illustrating the accuracy of the method and note that with this information added our pronoun resolution method achieves 842 accuracy we present a statistical method for determining pronoun anaphorathis program differs from earlier work in its almost complete lack of handcrafting relying instead on a very small corpus of penn wall street journal treebank text that has been marked with coreference informationthe first sections of this paper describe this program the probabilistic model behind it its implementation and its performancethe second half of the paper describes a method for using the aforementioned program to learn automatically the typical gender of english words information that is itself used in the pronoun resolution programin particular the scheme infers the gender of a referent from the gender of the pronouns that refer to it and selects referents using the pronoun anaphora programwe present some typical results as well as the more rigorous results of a blind evaluation of its outputthere are many factors both syntactic and semantic upon which a pronoun resolution system relies does a detailed study on factors in anaphora resolutionwe first discuss the training features we use and then derive the probability equations from themthe first piece of useful information we consider is the distance between the pronoun and the candidate antecedentobviously the greater the distance the lower the probabilitysecondly we look at the syntactic situation in which the pronoun finds itselfthe most well studied constraints are those involving reflexive pronounsone classical approach to resolving pronouns in text that takes some syntactic factors into consideration is that of hobbs this algorithm searches the parse tree in a lefttoright breadthfirst fashion that obeys the major reflexive pronoun constraints while giving a preference to antecedents that are closer to the pronounin resolving intersentential pronouns the algorithm searches the previous sentence again in lefttoright breadthfirst orderthis implements the observed preference for subject position antecedentsnext the actual words in a proposed nounphrase antecedent give us information regarding the gender number and animaticity of the proposed referentfor example marie giraud carries historical significance as one of the last women to be executed in franceshe became an abortionist because it enabled her to here it is helpful to recognize that quotmariequot is probably female and thus is unlikely to be referred to by quothequot or quotitquotgiven the words in the proposed antecedent we want to find the probability that it is the referent of the pronoun in questionwe collect these probabilities on the training data which are marked with reference linksthe words in the antecedent 
sometimes also let us test for number agreementgenerally a singular pronoun cannot refer to a plural noun phrase so that in resolving such a pronoun any plural candidates should be ruled outhowever a singular noun phrase can be the referent of a plural pronoun as illustrated by the following example quoti think if i tell via corn i need more time they will take cosby across the streetquot says the general manager of a network affiliateit is also useful to note the interaction between the head constituent of the pronoun p and the antecedentfor example a japanese company might make television picture tubes in japan assemble the tv sets in malaysia and export them to indonesiahere we would compare the degree to which each possible candidate antecedent could serve as the direct object of quotexportquotthese probabilities give us a way to implement selectional restrictiona canonical example of selectional restriction is that of the verb quoteatquot which selects food as its direct objectin the case of quotexportquot the restriction is not as clearcutnevertheless it can still give us guidance on which candidates are more probable than othersthe last factor we consider is referents mention countnoun phrases that are _mentioned repeatedly are preferredthe training corpus is marked with the number of times a referent has been mentioned up to that point in the storyhere we are concerned with the probability that a proposed antecedent is correct given that it has been repeated a certain number of timesin effect we use this probability information to identify the topic of the segment with the belief that the topic is more likely to be referred to by a pronounthe idea is similar to that used in the centering approach where a continued topic is the highestranked candidate for pronominalizationgiven the above possible sources of information we arrive at the following equation where f denotes a function from pronouns to their antecedents where a is a random variable denoting the referent of the pronoun p and a is a proposed antecedentin the conditioning events h is the head constituent above piir is the list of candidate antecedents to be considered t is the type of phrase of the proposed antecedent 1 is the type of the head constituent sp describes the syntactic structure in which p appears cr specifies the distance of each antecedent from p and m is the number of times the referent is mentionednote that w d and m are vector quantities in which each entry corresponds to a possible antecedentwhen viewed in this way a can be regarded as an index into these vectors that specifies which value is relevant to the particular choice of antecedentthis equation is decomposed into pieces that correspond to all the above factors but are more statistically manageablethe decomposition makes use of bayes theorem and is based on certain independence assumptions discussed belowequation is simply an application of bayes rulethe denominator is eliminated in the usual fashion resulting in equation selectively applying the chain rule results in equations and in equation the term p is the same for every antecedent and is thus removedequation follows when we break the last component of into two probability distributionsin equation we make the following independence assumptions p then we combine so and dc into one variable dh hobbs distance since the hobbs algorithm takes both the syntax and distance into accountsince_ 147 is a vector we need to normalize p to obtain the probability of each element in the vectorit is reasonable 
to assume that the antecedents in w are independent of each other in other words p p thus where now we arrive at the final equation for computing the probability of each proposed antecedent we obtain p by running the hobbs algorithm on the training datasince the training corpus is tagged with reference information the probability p is easily obtainedin building a statistical parser for the penn treebank various statistics have been collected p p if i a then we have p p p p to get the probability for each candidate we divide the above product by two of which are p and pto avoid the sparsedata problem the heads h are clustered according to how they behave in pthe probability of wa is then computed on the basis of h cluster cour corpus also contains referents repetition information from which we can directly compute pthe four components in equation can be estimated in a reasonable fashionthe system computes this product and returns the antecedent wo for a pronoun p that maximizes this probabilitymore formally we want the program to return our antecedent function f wherewe use a small portion of the penn wall street journal treebank as our training corpusfrom this data we collect the three statistics detailed in the following subsectionsthe hobbs algorithm makes a few assumptions about the syntactic trees upon which it operates that are not satisfied by the treebank trees that form the substrate for our algorithmmost notably the hobbs algorithm depends on the existence of an n parsetree node that is absent from the penn treebank treeswe have implemented a slightly modified version of hobbs algorithm for the treebank parse treeswe also transform our trees under certain conditions to meet hobbs assumptions as much as possiblewe have not however been able to duplicate exactly the syntactic structures assumed by hobbsonce we have the trees in the proper form we run hobbs algorithm repeatedly for each pronoun until it has proposed n candidatesthe ith candidate is regarded as occurring at quothobbs distancequot dh ithen the probability p is simply we use i x i to denote the number of times x is observed in our training setafter we have identified the correct antecedents it is a simple counting procedure to compute p where wo is in the correct antecedent for the pronoun p when there are multiple relevant words in the antecedent we apply the likelihood test designed by dunning on all the words in the candidate npgiven our limited data the dunning test tells which word is the most informative call it w and we then use p the referents range from being mentioned only once to begin mentioned 120 times in the training examplesinstead of computing the probability for each one of them we group them into quotbucketsquot so that ma is the bucket for the number of times that a is mentionedwe also observe that the position of a pronoun in a story influences the mention count of its referentin other words the nearer the end of the story a pronoun occurs the more probable it is that its referent has been mentioned several timeswe measure position by the sentence number jthe method to compute this probability is to reduce the notational loadafter collecting the statistics on the training examples we run the program on the test datafor any pronoun we collect n candidate antecedents proposed by hobbs algorithmit is quite possible that a word appears in the test data that the program never saw in the training data and fow which it hence has no p probabilityin this case i wain the antecedent for p i we simply use the 
prior probability of the pronoun pfrom the parser project mentioned earlier we obtain the probability pthe four probabilities are multiplied togetherthe procedure is repeated for each proposed np in w and the one with the highest combined probability is selected as the antecedentthe algorithm has two modulesone collects the statistics on the training corpus required by equation and the other uses these probabilities to resolve pronouns in the test corpusour data consists of 93931 words and contains 2477 pronouns 1371 of which are singular the corpus is manually tagged with reference indices and referents repetition numbersthe result presented here is the accuracy of the program in finding antecedents for he she and it and their various forms the cases where quotitquot is merely a dummy subject in a cleft sentence or has conventional unspecified referents are excluded from computing the precision we performed a tenway crossvalidation where we reserved 10 of the corpus for testing and used the remaining 90 for trainingour preliminary results are shown in the last line of table 1we are also interested in finding the relative importance of each probability in pronoun resolutionto this end we ran the program quotincrementallyquot each time incorporating one more probabilitythe results are shown in table 1 the last column of table 1 contains the pvalues for testing the statistical significance of each improvementdue to relatively large differences between treebank parse trees and hobbs trees our hobbs implementation does not yield as high an accuracy as it would have if we had had perfect hobbs tree representationssince the hobbs algorithm serves as the base of our scheme we expect the accuracy to be much higher with more accurately transformed treeswe also note that the very simple model that ignores syntax and takes the last mentioned nounphrase as the referent performs quite a bit worse about 43 correctthis indicates that syntax does play a very important role in anaphora resolutionwe see a significant improvement after the word knowledge is added to the programthe p probability gives the system information about gender and animaticitythe contribution of this factor is quite significant as can be seen from table 1the impact of this probability can be seen more clearly from another experiment in which we tested the program on the training datahere the program can be thought of having quotperfectquot genderanimaticity knowledgewe obtained a success rate of 893although this success rate overstates the effect it is a clear indication that knowledge of a referent gender and animaticity is essential to anaphora resolutionwe hoped that the knowledge about the governing constituent would like gender and animaticity make a large contributionto our surprise the improvement is only about 22this is partly because selection restrictions are not clearcut in many casesalso some head verbs are too general to restrict the selection of any npexamples are quotisquot and quothasquot which appear frequently in wall street journal these verbs are not quotselectivequot enough and the associated probability is not strong enough to rule out erroneous candidatessparse data also causes a problem in this statisticconsequently we observe a relatively small enhancement to the systemthe mention information gives the system some idea of the story focusthe more frequently an entity is repeated the more likely it is to be the topic of the story and thus to be a candidate for pronominalizationour results show that this is 
indeed the casereferences by pronouns are closely related to the topic or the center of the discoursenp repetition is one simple way of approximately identifying the topicthe more accurately the topic of a segment can be identified the higher the success rate we expect an anaphora resolution system can achievethe importance of gender information as revealed in the previous experiments caused us to consider automatic methods for estimating the probability that nouns occurring in a large corpus of english text deonote inanimate masculine or feminine thingsthe method described here is based on simply counting cooccurrences of pronouns and noun phrases and thus can employ any method of analysis of the text stream that results in referentpronoun pairs for another application in which no explicit indicators are available in the streamwe present two very simple methods for finding referentpronoun pairs and also give an application of a salience statistic that can indicate how confident we should be about the predictions the method makesfollowing this we show the results of applying this method to the 21millionword 1987 wall street journal corpus using two different pronoun reference strategies of varying sophistication and evaluate their performance using honorifics as reliable gender indicatorsthe method is a very simple mechanism for harvesting the kind of gender information present in discourse fragments like quotkim sleptshe slept for a long timequot even if kim gender was unknown before seeing the first sentence after the second sentence it is knownthe probability that a referent is in a particular gender class is just the relative frequency with which that referent is referred to by a pronoun p that is part of that gender classthat is the probability of a referent ref being in gender class gc is p i refs to refwith p e gci i el refs to ref with p e gci in this work we have considered only three gender classes masculine feminine and inanimate which are indicated by their typical pronouns he she and ithowever a variety of pronouns indicate the same class plural pronouns like quottheyquot and quotusquot reveal no gender information about their referent and consequently are not useful although this might be a way to learn pluralization in an unsupervised mannerin order to gather statistics on the gender of referents in a corpus there must be some way of identifying the referentsin attempting to bootstrap lexical information about referents gender we consider two strategies both completely blind to any kind of semanticsone of the most naive pronoun reference strategies is the quotprevious nounquot heuristicon the intuition pronouns closely follow their referents this heuristic simply keeps track of the last noun seen and submits that noun as the referent of any pronouns followingthis strategy is certainly simpleminded but as noted earlier it achieves an accuracy of 43in the present system a statistical parser is used simply as a taggerthis apparent parser overkill is a control to ensure that the partofspeech tags assigned to words are the same when we use the previous noun heuristic and the hobbs algorithm to which we wish to compare the previous noun methodin fact the only partofspeech tags necessary are those indicating nouns and pronounsobviously a much superior strategy would be to apply the anaphoraresolution strategy from previous sections to finding putative referentshowever we chose to use only the hobbs distance portion thereofwe do not use the quotmentionquot probabilities p as they 
are not given in the unmarked textnor do we use the genderanimiticity information gathered from the much smaller handmarked text both because we were interested in seeing what unsupervised learning could accomplish and because we were concerned with inheriting strong biases from the limited handmarked datathus our second method of finding the pronounnoun cooccurrences is simply to parse the text and then assume that the nounphrase at hobbs distance one is the antecedentgiven a pronoun resolution method and a corpus the result is a set of pronounreferent pairsby collating by referent and abstracting away to the gender classes of pronouns rather than individual pronouns we have the relative frequencies with which a given referent is referred to by pronouns of each gender classwe will say that the gender class for which this relative frequency is the highest is the gender class to which the referent most probably belongshowever any syntaxonly pronoun resolution strategy will be wrong some of the time these methods know nothing about discourse boundaries intentions or realworld knowledgewe would like to know therefore whether the pattern of pronoun references that we observe for a given referent is the result of our supposed quothypothesis about pronoun referencequot that is the pronoun reference strategy we have provisionally adopted in order to gather statistics or whether the result of some other unidentified processthis decision is made by ranking the referents by loglikelihood ratio termed salience for each referentthe likelihood ratio is adapted from dunning and uses the raw frequencies of each pronoun class in the corpus as the null hypothesis pr as well as pr from equation 9hp1 ref gc making the unrealistic simplifying assumption that references of one gender class are completely independent of references for another classes the likelihood function in this case is just the product over all classes of the probabilities of each class of reference to the power of the number of observations of this classwe ran the program on 21 million words of wall street journal textone can judge the program informally by simply examining the results and determining if the program gender decisions are correct figure 1 shows the 43 noun phrases with the highest salience figures an examination of these show that all but three are correctas a measure of the utility of these results we also ran our pronounanaphora program with these statistics addedthis achieved an accuracy rate of 842this is only a small improvement over what was achieved without the datawe believe however that there are ways to improve the accuracy of the learning method and thus increase its influence on pronoun anaphora resolutionfinally we attempted a fully automatic direct test of the accuracy of both pronoun methods for gender determinationto that end we devised a more objective test useful only for scoring the subset of referents that are names of peoplein particular we assume that any nounphrase with the honorifics quotmrsquot or quotmsquot may be confidently assigned to gender classes he she and she respectivelythus we compute precision as follows precision r attrib as he a mr e r i r attrib as she a mrs or ms e r i mr mrs or ms e r here r varies over referent types not tokensthe precision score computed over all phrases containing any of the target honorifics are 660 for the lastnoun method and 703 for the hobbs methodthere are several things to note about these resultsfirst as one might expect given the already noted superior 
performance of the hobbs scheme over lastnoun hobbs also performs better at determining gendersecondly at first glancethe 703 accuracy of the hobbs method is disappointing only slightly superior to the 653 accuracy of hobbs at finding correct referentsit might have been hoped that the statistics would make things considerably more accuratein fact the statistics do make things considerably more accuratefigure 2 shows average accuracy as a function of number of references for a given referentit can be seen that there is a significant improvement with increased referent countthe reason that the average over all referents is so low is that the counts on referents obey zipf law so that the mode of the distribution on counts is onethus the 703 overall accuracy is a mix of relatively high accuracy for referents with counts greater than one and relatively low accuracy for referents with counts of exactly onethe literature on pronoun anaphora is too extensive to summarize so we concentrate here on corpusbased anaphora researchaone and bennett present an approach to an automatically trainable anaphora resolution systemthey use japanese newspaper articles tagged with discourse information as training examples for a machinelearning algorithm which is the c45 derisiontree algorithm by quinlan they train their decision tree using pairs together with a set of feature vectorsamong the 66 features are lexical syntactic semantic and positional featurestheir machine learningbased resolver is trained using decision trees with 1971 anaphoras and they report an average success rate of 748mitkov describes an approach that uses a set of factors as constraints and preferencesthe constraints rule out implausible candidates and the preferences emphasize the selection of the most likely antecedentthe system is not entirely quotstatisticalquot in that it consists of various types of rulebased knowledge syntactic semantic domain discourse and heuristica statistical approach is present in the discourse module only where it is used to determine the probability that a noun phrase is the center of a sentencethe system also contains domain knowledge including the domain concepts specific list of subjects and verbs and topic headingsthe evaluation was conducted on 133 paragraphs of annotated computer science textthe results show an accuracy of 83 for the 512 occurrences of itlappin and leass report on a approach that relies on salience measures derived from syntactic structure and a dynamic model of attentional statethe system employs various constraints for nppronoun noncoreference within a sentenceit also uses person number and gender features for ruling out anaphoric dependence of a pronoun on an npthe algorithm has a sophisticated mechanism for assigning values to several salience parameters and for computing global salience valuesa blind test was conducted on manual text containing 360 pronoun occurrences the algorithm successfully identified the antecedent of the pronoun in 86 of these pronoun occurrencesthe addition of a module that contributes statistically measured lexical preferences to the range of factors the algorithm considers improved the performance by 2we have presented a statistical method for pronominal anaphora that achieves an accuracy of 842the main advantage of the method is its essential simplicityexcept for implementing the hobbs referentordering algorithm all other system knowledge is imbedded in tables giving the various component probabilities used in the probability modelwe believe that this 
simplicity of method will translate into comparative simplicity as we improve the method. Since the research described herein, we have thought of other influences on anaphora resolution and their statistical correlates; we hope to include some of them in future work. Also, as indicated by the work on unsupervised learning of gender information, there is a growing arsenal of learning techniques to be applied to statistical problems. Consider again the three high-salience words to which our unsupervised learning program assigned incorrect gender: "husband", "wife", and "years". We suspect that, had our pronoun-assignment method been able to use the topic information used in the complete method, these might well have been decided correctly. That is, we suspect that "husband", for example, was decided incorrectly because the topic of the article was the woman: there was a mention of her "husband", but the article kept on talking about the woman and used the pronoun "she". While our simple program got confused, a program using better statistics might not have. This too is a topic for future research. The authors would like to thank Mark Johnson and other members of the Brown NLP group for many useful ideas, and NSF and ONR for support.
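As a rough illustration of the resolution step described in this paper, the sketch below scores each candidate proposed by the Hobbs algorithm with the product of the four component probabilities (Hobbs distance, the word-based gender/animaticity term, the governing-head term, and the mention-count term) and returns the argmax. The table and key names are hypothetical, the conditioning of each table is simplified relative to the paper's equations, and the uniform fallback constant stands in for the paper's back-off to the pronoun's prior probability for unseen words.

```python
def resolve_pronoun(candidates, p_hobbs, p_gender, p_head, p_mention,
                    fallback=1e-6):
    """Return the candidate antecedent maximizing the product of the four
    component probabilities.  `candidates` is the list proposed by running
    the Hobbs algorithm repeatedly, in Hobbs order (rank 1 = first proposed);
    each probability table is assumed to have been estimated from the
    annotated training corpus.  The uniform `fallback` for unseen events is a
    simplification of the paper's back-off scheme."""
    best, best_score = None, -1.0
    for rank, cand in enumerate(candidates, start=1):
        score = (p_hobbs.get(rank, fallback)                        # P(d_H = rank)
                 * p_gender.get((cand["pronoun"], cand["head_word"]), fallback)
                 * p_head.get((cand["head_word"], cand["governing_head"]), fallback)
                 * p_mention.get((cand["mention_bucket"], cand["sentence_no"]), fallback))
        if score > best_score:
            best, best_score = cand, score
    return best
```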
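The unsupervised gender-learning method described above reduces to counting, for each putative referent, how often pronouns of each gender class refer to it, and then ranking referents by a log-likelihood "salience" score against the corpus-wide pronoun-class distribution, in the spirit of Dunning's test. A minimal sketch, assuming the pronoun–referent pairs have already been produced by the last-noun or Hobbs-distance-one heuristic; the pronoun-to-class mapping and the exact form of the salience statistic are simplifications.

```python
import math
from collections import Counter, defaultdict

# Simplified pronoun-to-gender-class mapping (an assumption for this sketch).
GENDER_CLASS = {"he": "masc", "him": "masc", "his": "masc",
                "she": "fem", "her": "fem", "hers": "fem",
                "it": "inan", "its": "inan"}

def gender_model(pronoun_referent_pairs):
    """For each referent, estimate p(gender class | referent) as the relative
    frequency of the pronoun classes that refer to it, and attach a
    log-likelihood 'salience' score against the corpus-wide class
    distribution (the null hypothesis).  Returns
    {referent: (best_class, its_probability, salience)}."""
    per_ref = defaultdict(Counter)
    overall = Counter()
    for pronoun, referent in pronoun_referent_pairs:
        gc = GENDER_CLASS.get(pronoun.lower())
        if gc is not None:                 # plural pronouns etc. are skipped
            per_ref[referent][gc] += 1
            overall[gc] += 1
    total = sum(overall.values())
    null = {gc: c / total for gc, c in overall.items()}
    results = {}
    for referent, counts in per_ref.items():
        n = sum(counts.values())
        probs = {gc: c / n for gc, c in counts.items()}
        # log-likelihood ratio of the per-referent model against the null model
        salience = sum(c * (math.log(probs[gc]) - math.log(null[gc]))
                       for gc, c in counts.items())
        best = max(probs, key=probs.get)
        results[referent] = (best, probs[best], salience)
    return results

pairs = [("she", "marie giraud"), ("her", "marie giraud"), ("it", "the company")]
print(gender_model(pairs))
```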
W98-1119
A Statistical Approach to Anaphora Resolution. This paper presents an algorithm for identifying pronominal anaphora and two experiments based upon this algorithm. We incorporate multiple anaphora resolution factors into a statistical framework: specifically, the distance between the pronoun and the proposed antecedent, the gender/number/animaticity of the proposed antecedent, governing head information, and noun phrase repetition. We combine them into a single probability that enables us to identify the referent. Our first experiment shows the relative contribution of each source of information and demonstrates a success rate of 82.9% for all sources combined. The second experiment investigates a method for unsupervised learning of gender/number/animaticity information. We present some experiments illustrating the accuracy of the method and note that, with this information added, our pronoun resolution method achieves 84.2% accuracy. We add annotation of the antecedents of definite pronouns to the Treebank. We implement a Hobbs distance feature, which encodes the rank assigned to a candidate antecedent for a pronoun by Hobbs' seminal syntax-based pronoun resolution algorithm. We count the number of times a discourse entity has already been mentioned in the discourse. Our probabilistic approach combines three factors: the result of the Hobbs algorithm, a mention count dependent on the position of the sentence in the article, and the probability of the antecedent occurring in the local context of the pronoun. We describe a supervised probabilistic pronoun resolution algorithm based on complete syntactic information.
experiments using stochastic search for text planning marcu has characterised an important and difficult problem in text planning given a set of facts to convey and a set of rhetorical relations that can be used to link them together how can one arrange this material so as to yield the best possible text we describe experiments with a number of heuristic search methods for this task this paper presents some initial experiments using stochastic search methods for aspects of text planningthe work was motivated by the needs of the ilex system for generating descriptions of museum artefacts mellish et al 98we present results on examples semiautomatically generated from datastructures that exist within ilexforming a set of facts about a piece of jewellery into a structure that yields a coherent text is a nontrivial problemrhetorical structure theory mann and thompson 87 claims that a text is coherent just in case it can be analysed hierarchically in terms of relations between text spansmuch work in nlg makes the assumption that constructing something like an rs tree is a necessary step in the planning of a textthis work takes as its starting point marcu marcu 97 excellent formalisation of rst and the problem of building legal rst trees and for the purposes of this paper the phrase quottext planningquot will generally denote the task characterised by himin this task one is given a set of facts all of which should be included in a text and a set of relations between facts some of which can be included in the textthe task is to produce a legal rs tree using the facts and some relations following the original work on rst and assumptions that have been commonly made in subsequent work we will assume that there is a fixed set of possible relations each relation has a nucleus and a satellite each relation may be indicated by a distinctive quotcue phrasequot with the nucleus and satellite being realised in some fashion around iteach relation has applicability conditions which can be tested between two atomic factsfor two complex text spans a relation holds exactly when that relation holds between the nuclei of those spansrelations can thus hold between text spans of arbitrary sizefigure 1 shows an example of the form of the input that is used for the experiments reported hereeach primitive quotfactquot is represented in terms of a subject verb and complement the quotsubjectquot is assumed to be the entity that the fact is quotaboutquotthe approaches reported here have not yet been linked to a realisation component and so the entities fact rel fact rel fact mentions fact mentions fact f1f2 are represented simply by canned phrases for readability relations are represented in terms of the relation name the nucleus and satellite facts and a list of precondition facts which need to have been assimilated before the relation can be used this example uses the definition of quotelaborationquot that we will be using consistently namely that one fact can elaborate another if they have an entity in common there seem to be three main approaches to controlling the search for a good rs tree one is to restrict what relations can appear in the nucleus and satellite of others this is a step towards creating quotschemasquot for larger pieces of textit can therefore be expected that it will produce very good results in restricted domains where limited text patterns are used but that it will be hard to extend it to freer text typesthe second idea is to use information about goals to limit possibilitiesthis is an element of 
hovy work but is more apparent in the planning work of moore and paris moore and paris 93this second approach will work well if there are strong goals in the domain which really can influence textual decisionsthis is not always the casefor instance in our ilex domain mellish et al 98 the system goal is something very general like quotsay interesting things about item x subject to length and coherence constraintsquotthe third approach most obviously exemplified by marcu 97 is to use some form of explicit search through possible trees guided by heuristics about tree qualitymarcu first of all attempts to find the best ordering of the factsfor every relation that could be indicated constraints are generated saying what the order of the two facts involved should be and that the facts should be adjacentthe constraints are weighted according to attributes of rhetorical relations that have been determined empiricallya standard constraint satisfaction algorithm is used to find the linear sequence such that the total weight of the satisfied constraints is maximalonce the sequence of facts is known a general algorithm marcu 96 is used to construct all possible rs trees based on those factsit is not clear how the best such tree is selected though clearly the adjacency and order constraintscould in principle be reapplied in some way to select a treewe are interested in further developing the ideas of marcu but seek to address the following problems and having weighted constraints seems to make matters worseenumerating all rs trees that can be built on a given sequence of facts also has combinatorical problemsmarcu approach may not be much better than one that builds all possible treesyet if there are enough relations to link any pair of facts the number of trees whose top nucleus are a specified fact grows from 336 to 5040 to 95040 as the number of facts grows from 5 to 6 to 7in our examples we have more like 2030 facts2as marcu points out the constraints on linear order only indirectly reflect requirements on the tree though in fact we will use the idea of planning via a linear sequence later we would like to experiment using measures of quality that are applied directly to the treeswe also have a number of factors that we would like to take account of in the evaluation building a good rs tree is a search problemstochastic search methods are a form of heuristic search that use the following generic algorithm use these to generate one or more new random variationsadd these to the set possibly removing less preferred items in order to keep the size constantexamples of stochastic search approaches are stochastic hillclimbing simulated annealing and evolutionary algorithmsthe approaches differ according to factors like the size of the population of possible solutions that is maintained the operations for generating new possibilities and any special mechanisms for avoiding local maximathey are similar to one another in that they are heuristic and they are quotanytimequotthat is such an algorithm can be stopped at any point and it will be able to yield at that point a result which is the best it has found so farthis is important for nlg applications where interface considerations mean that texts have to be produced within a limited timea key requirement for the use of any stochastic search approach is the ability to assess the quality of a possible solutionthus we are forced to confront directly the task of evaluating rst treeswe assign a candidate tree a score which is the sum of scores for particular 
features the tree may havea positive score here indicates a good feature and a negative one indicates a bad onewe cannot make any claims to have the best way of evaluating rs treesthe problem is far too complex and our knowledge of the issues involved so meagre that only a token gesture can be made at this pointwe offer the following evaluation scheme merely so that the basis of our experiments is clear and because we believe that some of the ideas are starting in the right directionhere are the features that we score for topic and interestingness we assume that the entity that the text is quotaboutquot is specified with the inputit is highly desirable that the quottop nucleusquot of the text be about this entityalso we prefer texts that use interesting relationswe score as follows 4 for each fact that will come textually between a satellite and its nucleus constraints on information ordering our relations have preconditions which are facts that should be conveyed before themwe score as follows 20 for an unsatisfied precondition for a relation focus movement we do not have a complex model of focus development through the text though development of such a model would be worthwhileas mckeown and others have done we prefer certain transitions over othersif consecutive facts mention the same entities or verb the prospects for aggregation are greater and this is usually desirablewe score as follows 9 for a fact not mentioning any previously mentioned entity 3 for a fact not mentioning any entity in the previous fact but whose subject is a previously mentioned entity 3 for a fact retaining the subject of the last fact as its subject 3 for a fact using the same verb as the previous one object introduction when an entity is first introduced as the subject of a fact it is usual for that to be a very general statement about the entitypreferring this introduces a mild schemalike influence to the systemwe score as follows 3 for the first fact with a given entity as subject having verb quotisquotusing the above evaluation metric for rs trees we have experimented with a range of stochastic search methodsspace does not permit us to discuss more than one initial experiment in this sectionin the next section we describe a couple of methods based on genetic algorithms which proved more productivethe subtree swapping approach produces new trees by swapping random subtrees in a candidate solutionit works as follows when two subtrees are swapped over in an rs tree some of the relations indicated in the tree no longer apply these are quotrepairedquot by in each case selecting the quotbestquot valid relation that really relates the top nuclei we investigated variations on this algorithm including having initial random balanced trees and focussing the subtree swapping on subtrees that contributed to bad scores but the above algorithm was the one that seemed most successfulfigure 2 shows an example text generated by subtree swappingnote that we have taken liberties in editing by hand the surface text for clarity coreference has been indicated by subscriptsthe ordering of the material and the use of rhetorical relations are the only things which are determined by the algorithmresults for subtree swapping are shown together with later results in figure 5 the most obvious feature of these results is the huge variability of the results which suggests that there are many local maxima in the search spacelooking at the texts produced we can see a number of problemsif there is only one way smoothly to include a fact in 
the text the chance of finding it by random subtree swapping is very lowthe same goes for fixing other local problems in the textthe introduction of quotthe previous jewelquot is an example of thisthis entity can only be introduced elegantly through the fact that it like the current item is encrusted with jewelsthe text is also still suffering from material getting between a satellite and its nucleusfor instance there is a relation between quotit is encrusted with jewelsquot and quotit has silver links encrusted asymmetricallyquot but this is weakened by the presence of quotand is an organic style jewelquot in the middlethe trouble is that subtree swapping needs incrementally to acquire all good features not present in whichever initial tree develops into the best solutionit can only acquire these features quotaccidentallyquot and the chances of stumbling on them are smalldifferent initial trees will contain different good fragments and it seems desirable to be able to combine the good parts of different solutionsthis motivates using some sort of crossover operation that can combine elements of two solutions into a new one goldberg 89but it is not immediately clear how crossover could work on two rs treesin particular two chosen trees will rarely have nontrivial subtrees with equal fringestheir way of breaking up the material may be so different that it is hard to imagine how one could combine elements of boththis jewel i is 720 cm longthe previous jewel has little diamonds scattered around its edges and has an encrusted bezeliti is encrusted with jewels iti features diamonds encrusted on a natural she willas a way of making a crossover operation conceivable our first step has been to reduce the planning problem to that of planning the sequential order of the facts we have done this by making certain restrictions on the rs trees that we are prepared to buildin particular we make the following assumptions with these assumptions an rs tree is characterised by the sequence of facts at its leavesindeed we have an algorithm that almost deterministically builds a tree from a sequence of facts according to these principlesthe algorithm for building a tree from a sequence essentially makes a tree that can be processed by a reader with minimal shortterm memorythe tree will be rightbranching and if the reader just remembers the last fact at any point then they can follow the connection between the text so far and the next fact2 interestingly marcu uses quotright skewquot to b disambiguate between alternative trees produced in rhetorical parsinghere we are setting it as a much harder constraintthe only 21n fact there is local leftbranching for relations whose satellite is presented firstsuch relations are often presented using embedded clauses in a way that signals the deviation from rightbranching clearly to the reader exception is quotjointquot relations which can join together texts of any size but since there is no real the first two assumptions above make fundamental use of the order in which facts will appear in the textfor simplicity we assume that every relation has a fixed order of nucleus and satellite the approach is controversial in that it takes into account realisation order in the criterion for a legal treeit is likely that the above assumptions will not apply equally well to all types of textstill they mean that the planning problem can be reduced to that of planning a sequencethe next experiments were an attempt to evaluate this ideathe genetic algorithm we used takes the following 
form notice that although the algorithm manipulates sequences the evaluation is one that operates on treesmutation is a unary operation which given one sequence generates a new onecrossover is binary in that it generates new solution based on two existing onesthe choice of mutation and crossover operations depends on how the sequences are internally represented and should facilitate the exchange of useful subparts of solutionstwo different representations have been tried so farthe relevant features are summarised in figure 3the ordinal representation michalewicz 92 assumes that there is an initial canonical sequence of facts a given sequence is represented by a sequence of numbers where the ith element indicates the position of the ith element of the sequence in that canonical sequence with all previous elements deletedso the ith element is always a number between 1 and n 1 i where n is the length of the sequencemutation is implemented by a change of a random element to a random legal valuecrossover is implemented by twopoint crossover the material between two random points of the sequences is swapped over yielding two new sequencesthe ordinal representation has been used extensively for tasks such as the travelling salesman problem and it has the advantage that the crossover operation is particularly simplein many ways this is a more obvious encoding though the operations are chosen to reflect the intuition that order and adjacency information should generally be maintained from old solution figure 4 shows an example text produced using the path encoding operations the same remarks about hand editing apply as beforefigure 5 summarises the results for subtree swapping and the two genetic algorithms on a set of examplesthese results summarise the mean and standard deviations of the scores of the system run 10 timesthe system was tried with a limit of 2000 and 4000 iterations around the main loop of the algorithmthese took about 2 and 4 minutes respectivelywith each example problem we have specified the number of facts the number of elaboration relations and the number of nonelaboration relationsnote that there is not a very clear basis for comparison between this jeweli is made from diamonds and yellow metalsiti was made by flockinger who was an english designerflockinger lived in london which is a citythis jeweli was made in londoniti is a necklaceiti is made from oxidized white metal pearls and opals hi is set with jewelsthis jeweli is encrusted with jewels it has silver links encrusted asymetrically with pearls and diamondsthis jeweli was made in 1976 hi is an organic style jewel and is 720 cm longiti draws on natural themes for inspiration iti uses natural pearlsindeed organic style jewels usually draw on natural themes for inspirationorganic style jewels usually have a coarse texture are usually made up of asymmetrical shapes and are usually encrusted with jewelsthe previous jeweli is encrusted with jewels it features diamonds encrusted on a natural she willit has little diamonds scattered around its edges and an encrusted bezel algorithms since each algorithm performs different operations during an quotiterationquotnevertheless since iterations take roughly the same amount of time one can get a rough idea of the relative performancethe example text is now in a single paragraph with a clear link from each sentence to the previous onesfrom the numerical results one can see that there is much less variability than beforethis is mainly because the rigid treebuilding constraints prevent 
really bad trees being built and so the worst results are less badthe results are also significantly better than for subtree swapping with the edgesensitive representation clearly winningit is necessary to be careful in evaluating these results which are only as good as the evaluation functionthis is certainly flawed in major waysthe texts are of a specific type there are only three of them and we have not used all rhetorical relationssome independent evaluation by human readers is imperative at this pointthe texts are especially limited by the fact that there is no account taken of the possibilities for aggregation embedding etc in the trees that are producednevertheless the approach looks promising enough that it is a real candidate to be used with the ilex systemfuture work needs to look at improving the characterisation of good trees and if possible introducing more natural crossovermutation operationsfuture work could also consider extending the scope of the algorithm to deal with aspects of content determination as well as structuringthe ilex project is supported by epsrc grant grk53321we acknowledge the valuable assistence of the national museums of scotland and the useful advice of andrew tuson
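The heuristic tree evaluation described earlier (penalties for material separating a satellite from its nucleus and for unsatisfied preconditions, plus focus-movement and object-introduction preferences) can be sketched roughly as follows. This is a minimal illustration, not the ILEX implementation: the Fact structure, the sign conventions (penalties negative, preferences positive) and the way intervening material is counted are assumptions layered on the weights quoted in the text.

    from dataclasses import dataclass, field

    @dataclass
    class Fact:
        subject: str
        verb: str
        complement: str
        entities: set = field(default_factory=set)

    def score_fact_sequence(facts, preconditions_of, intervening_pairs):
        """Heuristic score for the fact sequence realised by a candidate RS tree.

        preconditions_of  -- maps a fact to the facts that should precede it
        intervening_pairs -- how many satellite/nucleus pairs in the tree are
                             separated by other textual material (read off the tree)
        """
        score = -4 * intervening_pairs          # material between satellite and nucleus
        mentioned = set()
        for i, fact in enumerate(facts):
            for pre in preconditions_of.get(fact, []):
                if pre not in facts[:i]:
                    score -= 20                 # unsatisfied precondition
            if i > 0:
                prev = facts[i - 1]
                if not (fact.entities & mentioned):
                    score -= 9                  # no previously mentioned entity at all
                elif not (fact.entities & prev.entities) and fact.subject in mentioned:
                    score -= 3                  # only a loose link back via the subject
                if fact.subject == prev.subject:
                    score += 3                  # subject retained, aggregation likely
                if fact.verb == prev.verb:
                    score += 3                  # verb retained
            if fact.verb == "is" and fact.subject not in {f.subject for f in facts[:i]}:
                score += 3                      # entity introduced by a general "is" fact
            mentioned |= fact.entities | {fact.subject}
        return score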
W98-1411
experiments using stochastic search for text planningmarcu has characterised an important and difficult problem in text planning given a set of facts to convey and a set of rhetorical relations that can be used to link them together how can one arrange this material so as to yield the best possible textwe describe experiments with a number of heuristic search methods for this taskwe investigate the problem of determining a discourse tree for a set of elementary speech acts which are partially constrained by rhetorical relationswe advocate genetic algorithms as an alternative to exhaustively searching for the optimal ordering of descriptions of museum artefacts
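The ordinal sequence encoding and the two-point crossover mentioned above can be sketched as follows. This is a schematic reconstruction under stated assumptions: the population size, steady-state replacement and the fitness callback are illustrative choices rather than the paper's settings, and fitness would be something like the tree score applied to the tree built deterministically from the decoded sequence.

    import random

    def encode(sequence, canonical):
        """Ordinal code: position of each item among the not-yet-used canonical items."""
        remaining = list(canonical)
        code = []
        for item in sequence:
            idx = remaining.index(item)
            code.append(idx + 1)                 # 1-based, legal range shrinks with position
            remaining.pop(idx)
        return code

    def decode(code, canonical):
        remaining = list(canonical)
        return [remaining.pop(i - 1) for i in code]

    def mutate(code):
        code = list(code)
        n = len(code)
        i = random.randrange(n)
        code[i] = random.randint(1, n - i)       # any legal value at that position
        return code

    def two_point_crossover(a, b):
        i, j = sorted(random.sample(range(len(a)), 2))
        return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

    def evolve(canonical, fitness, pop_size=40, iterations=2000):
        pop = [encode(random.sample(canonical, len(canonical)), canonical)
               for _ in range(pop_size)]
        for _ in range(iterations):
            parents = random.sample(pop, 2)
            children = list(two_point_crossover(*parents))
            children.append(mutate(random.choice(pop)))
            pop.extend(children)
            pop.sort(key=lambda c: fitness(decode(c, canonical)), reverse=True)
            pop = pop[:pop_size]                 # keep the best candidates
        return decode(pop[0], canonical)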
wordnet 2 a morphologically and semantically enhanced resource wordnet has already been recognized as a valuable resource in the human language technology and knowledge processing communities its applicability has been cited in more than 200 papers and systems have been implemented using wordnet a wordnet bibliography is maintained at the university of pennsylvania in europe wordnet is being used to develop a multilingual database with basic semantic relations between words for several european languages capabilities wordnet was conceived as a machinereadable dictionary following psycholinguistic principles unlike standard alphabetical dictionaries which organize vocabularies using morphological similarities wordnet structures lexical information in terms of word meanings wordnet maps word forms into word senses using the syntactic category as a parameter although it covers only four parts of speech nouns verbs adjectives and adverbs it encompasses a large majority of english words words of the same syntactic category that can be used to express the same meaning are grouped into a single synonym set called synset words with multiple meanings belong to multiple synsets an important part of the 99 643 synsets encoded in wordnet 1.6 contain word collocations thus representing complex nominals and relations are missing since glosses were written manually sometimes there is a lack of uniformity and consistency in the definitions the key idea in our project is to put to work the rich source of information contained in glosses that now can be used only by humans to read the definition of synsets for example wordnet 1.6 lists the concept cat true cat with the gloss currently from a concept like this only a few other concepts could be reached in extended wordnet the concept cat true cat will be related to 215 other concepts this level of information is rich enough to presume that the extended wordnet will work well as a knowledge base for commonsense reasoning machine readable dictionaries have long been recognized as valuable resources in computational linguistics in their paper ide and veronis projected a rather pessimistic outlook for the utility of mrds as knowledge sources a view that has impeded the enthusiasm of some researchers others make a strong argument in favor of using mrds and share their positive experience with using some dictionaries the mindnet project at microsoft aims at fully automating the development of a very large lexical knowledge base using two mrds the longman dictionary of contemporary english and the american heritage third edition many technical aspects of this project are rooted in the works of vanderwende and richardson there are several differences between gloss disambiguation and text disambiguation a major difference is that in our project we know the meaning of each gloss namely the synset to which a gloss applies second the glosses contain a definition comments and one or more examples we address the word sense disambiguation problem by using three complementary methods heuristics conceptual density and statistics on large corpora the first two methods rely entirely on the information contained in wordnet while the third one uses other corpora specifically the sources of knowledge available to us are lexical information that includes part of speech position of words and lexical relations collocations and
syntactic patterns synset to which a gloss belongs hypernyms of the synset and their glosses synsets of polysemous words and their glosses hypernyms of synsets of polysemous words and their glosses and so on method 1 classes of heuristics for word sense disambiguation a suitable technique for disambiguating dictionaries is to rely on heuristics able to cope with different sources of information work in this area was done by ravin in a similar project at ibm and others we present now some of the heuristics used by us a way of explaining a concept is to specialize a more general concept it is likely that an explanation begins with a phrase whose head is one of its hypernyms and the features are expressed either as attributes in the same phrase or as phrases attached to the first phrase example the gloss of synset intrusion is it is likely that the syntactic parallelism of two words translates into semantic parallelism and the words may have a common hypernym or one is a hypernym of the other for adjectives the hypernymy is replaced by the similarity relation other heuristics in this class check whether or not two polysemous words belong to the same synset or one is a hypernym of the other or if they belong to the same hierarchy example the gloss of interaction is in glosses comments and examples are meant to provide supplemental information it is possible to find the specialization or typical relation linking the comment to the preceding head phrase in one of the synsets of the head phrase example the gloss of the synset scuff scuffing is examples in wordnet provide collocational information of the words in synsets the intrinsic semantic tag of the word from the synset which is used in the example can occur in the same lexical relation in some other gloss carrying the semantic tag with it example synset penetration has the gloss nouns representing actions are nominalizations of some verbs if a verbal collocation contains a noun and is also a synonym of some morphologically related verb then it is likely to be the nominalization source the verb from the gloss of a synonym describing an action if not the source of the nominalization is likely to belong to the same hierarchy as the true nominalization source since they must share some properties example the synset escape flight with the gloss a lexical relation using a word w both in the gloss of a synset s and in some other gloss signals a property of w associated with s in other cases when two relations w-wj and w-wk are found in two glosses of wordnet and there are senses of wj and wk that have a common hypernym it is likely that the correlation between w and the common hypernym is projected in both collocations example the gloss of the synset underground railroad is method 2 conceptual density method we have implemented a wsd system for free text that disambiguates multiple words simultaneously the method is based on measuring the number of common nouns shared by the verb and noun hierarchies and thus gets around the lack of connections problem as an example consider a verb noun pair of words denote with vi and nj the senses of the verb and the noun in wordnet for each possible pair vi nj the conceptual density is computed as follows 1 extract all the glosses from the subhierarchy of vi and determine the nouns from these glosses this constitutes the nouncontext of verb vi each such noun is stored together with a weight w that indicates the level in the subhierarchy of the verb concept in whose gloss the noun was found 2 determine the glosses of the noun
subhierarchy of nj and determine the nouns in them 3 compute the conceptual density cij of the common concepts between the nouns obtained at 1 and the nouns obtained at 2 using the metric cij = (Σk wk) / log(descendantsj) where wk are the weights of the common nouns and descendantsj measures the size of the noun subhierarchy 4 cij ranks each pair vi nj for all i and j variants of this method work for other parts of speech pairs such as nounnoun nounverb verbverb verbnoun adjectivenoun and verbadverb this is a powerful method that works surprisingly well even for free text we have tested the method on semcor the part of the brown corpus tagged with wordnet senses with this technique it is possible to rank the senses and to keep not only the first ranked sense but the second or third ranked senses especially when the ranking is sufficiently close and there is another way to check the validity of the disambiguation as a last resort we can use a statistical approach to disambiguate those words that can not be done with any of the methods described so far consider a collocating wordword pair w1 w2 in which we consider that w1 has been disambiguated already the disambiguation of w2 proceeds as follows a similarity list is formed for each sense of w2 and queries pairing w1 with the words in each list are issued we have searched the internet using the altavista search engine the number of hits for each similarity list measures the relatedness of w1 with each sense of w2 and thus provides a ranking of the senses the following procedure was used to disambiguate 12762 words from 1000 randomly selected glosses step 1 identify and separate the monosemous words that have only one sense in wordnet step 2 apply method 1 heuristics to the remaining 6294 polysemous words method 1 provides correct disambiguation for 5475 words thus an accuracy of 87 out of the remaining 13 of the words 3 were disambiguated erroneously and 10 could not be done with the heuristics used the correct sense for each word was determined manually by a team of three students we have found a few synsets such as commemorate remember that have no links to any other synsets ie no hypernyms and no hyponyms step 3 apply method 2 conceptual density to the 6294 polysemous words starting fresh step 4 apply method 3 statistics to the 6294 words using altavista on the internet step 5 the results obtained with method 1 and method 2 are combined that is take all the words that were disambiguated and in the case of conflict give priority to method 1 step 6 the results from step 5 are combined with the results given by method 3 and in the case of conflict give priority to results obtained in step 5 table 1 indicates the accuracy obtained at each step an overall accuracy of 94 was achieved our goal is to improve the technique to be able to disambiguate all words automatically these results must be seen against the background average rate of 59.39 correct sense assignment achieved when the first wordnet sense is assigned to each polysemous word this is considered the baseline performance level for wordsense disambiguation programs and is consistent with our own measurements our extension of wordnet intends to serve as a lexicosemantic resource for a variety of nlp applications many of them requiring pragmatic and commonsense knowledge it is beneficial to transform the conceptual glosses into logical formulae approach to implement logical form transformations traditional lexicographic principles determine the discrimination of any conceptual definitions into a genus and the differentia our lfts implement the same distinction by always placing the genus predicate on the first position of the lft and the rest of the lft viewed as the definition differentia
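The combination scheme in steps 1 to 6 above amounts to a simple priority merge of the three methods; a compact sketch is given below. The three method_* callables are placeholders standing in for the heuristics, conceptual density and AltaVista-statistics procedures described in the text, each returning a word-to-sense mapping for the words it could handle.

    def disambiguate_gloss_words(words, senses_of, method_heuristics,
                                 method_conceptual_density, method_statistics):
        tags = {}
        polysemous = []
        for w in words:
            senses = senses_of(w)
            if len(senses) == 1:                         # step 1: monosemous words
                tags[w] = senses[0]
            else:
                polysemous.append(w)

        h = method_heuristics(polysemous)                # step 2
        cd = method_conceptual_density(polysemous)       # step 3 (starting fresh)
        st = method_statistics(polysemous)               # step 4

        # step 5: combine heuristics and conceptual density, heuristics win conflicts
        combined = dict(cd)
        combined.update(h)

        # step 6: add the statistical answers, earlier results win conflicts
        for w, sense in st.items():
            combined.setdefault(w, sense)

        tags.update(combined)
        return tags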
in the case when the subject or the object are present in the gloss they share the corresponding arguments with the actionstateevent predicate for example the lft of the gloss of supporter protagonist champion admirer booster friend is person:n1(x1) & back:v1(e1, x1, x2) & politician:n2(x2) the role of complements within a phrase is replicated in the lfts predicates generated from modifiers share the same arguments with the predicates corresponding to the phrase heads adjective predicates share the same argument as the predicate corresponding to the noun they modify an exemplification is the lft of the gloss of artifact artefact which maps into object:n1(x1) & manmade:a1(x1) similarly the argument of an adverbial predicate is the argument marking the eventuality of the eventstateaction they modify for example the gloss of the verb synset hare is producing the lft run quickly under the same syntactic role by convention conjunction predicates have a variable number of arguments since they cover a variable number of predicates the first argument represents the "result" of the logical operation induced by the conjunction the rest of the arguments indicate the predicates covered by the conjunction as they are arguments of those predicates as well we also generate predicates for every preposition encountered in the gloss the preposition predicates always have two arguments the first argument corresponding to the predicate of the head of the phrase to which the prepositional phrase is attached whereas the second argument corresponds to the prepositional object sources of information the implementation of lfts relies on information provided by table 2 illustrates the transformations for the gloss of tennis lawn tennis many nlp problems rely on the recognition of the typical lexicosemantic relationships between linguistic concepts the lft codification merely acknowledges the following syntaxbased relationships syntactic subjects syntactic objects prepositional attachments complex nominals and adjectivaladverbial adjuncts semantic interpretations of utterances as well as discourse processing require knowledge about the semantic or thematic relationships between concepts the semantic form transformations provide constraintbased mappings of the syntaxbased relations covered in the lfts into binary thematic relations or semantic relations approach to implement semantic form transformations 1 the syntactic subject relations recognized in the lfts by the predicative formula subjectverb can be mapped into a variety of thematic relations the definition of the thematic relations is entirely based on information internal to the wordnet database expressed as constraints for example all the subjects of verbs that are hyponyms of the verb cause or have this concept as the genus of their glosses are defined to represent the role of agents the syntactic object relations are recognized in the lfts by the predicative formula verb noun the definition of the thematic relations in which syntactic objects can be mapped is expressed in terms of verb synsets the constraining verb synsets represent the uppermost hypernyms of all verbs that have syntactic objects in the wordnet glosses and belong to the same hierarchy or are defined by a gloss genus from the same hierarchy the prepositional predicates are transformed into thematic or semantic relations when a wordnet semantic relation holds between the arguments of a prepositional predicate that specific relation becomes the semantic transformation of the predicate for example the pp
attachment sacrament of penance derived from the gloss of confession indicates a semantic kindof relation due to the fact that in wordnet penance is a hyponym of sacrament the transformation of complex nominal predicates into thematic or semantic constraints is done by first seeking a wordnet relation between the components of the predicate if such a relation is found the nn predicate is transformed into the dominant wordnet semantic relation otherwise the nn predicate is transformed into a thematic relation the transformation of adjectival and adverbial adjuncts represented in the lfts as predicates sharing the same argument with the concepts they modify shall be connected to their modifiers through attribute relations since the organization of wordnet divides the english vocabulary into four separate domains nouns verbs adjectives and adverbs closely related concepts are often entered in more than one of these domains many of these relations can be identified in terms of derivational morphology e g the noun execution is derived from the verb execute and so is an example of a deverbal noun wordnet already contains some of this kind of derivational morphology deadjectival nouns are linked to their root adjectives deadjectival adverbs are linked to their root adjectives and some denominal adjectives are linked to their root nouns in order to increase the connectivity of wordnet it would be desirable to include more such derivational morphology for example derivational relations between nouns and verbs should be particularly useful both deverbal nouns and denominal verbs such connections would facilitate the recognition that the same idea can be expressed in different ways e g that "he summarized the book" and "he gave a summary of the book" are effectively equivalent in meaning sometimes these morphological relations can be picked up from glosses as when disagreement is defined as but these are generally regarded as uninformative definitions and the reverse relation may not happen to occur since many of the words are polysemous morphological relations should not link words but synsets that have related meanings for example execute meaning should be linked to execution meaning and execute meaning should be linked to execution meaning etc and in cases where the concepts of the noun and verb are different e g womanize from woman no semantic link would need to be created
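Derivational noun-verb links of the kind argued for here were later added to WordNet and are exposed, for example, through NLTK's interface; the short sketch below (assuming NLTK and its WordNet data are installed) collects, for a deverbal noun, the verb synsets each of its senses is morphologically related to, i.e. sense-level rather than word-level links.

    from nltk.corpus import wordnet as wn

    def related_verb_synsets(noun):
        """Map each noun synset of `noun` to the verb synsets it is derivationally linked to."""
        links = {}
        for noun_synset in wn.synsets(noun, pos=wn.NOUN):
            verb_targets = set()
            for lemma in noun_synset.lemmas():
                for related in lemma.derivationally_related_forms():
                    target = related.synset()
                    if target.pos() == wn.VERB:
                        verb_targets.add(target)
            if verb_targets:
                links[noun_synset] = verb_targets
        return links

    # e.g. related_verb_synsets('execution') should link senses of the noun
    # execution to corresponding senses of the verb execute, as suggested above.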
W99-0501
wordnet 2 a morphologically and semantically enhanced resource this paper presents an ongoing project intended to enhance wordnet morphologically and semantically the motivation for this work stems from the current limitations of wordnet when used as a linguistic knowledge base we envision a software tool that automatically parses the conceptual defining glosses attributing partofspeech tags and phrasal brackets the nouns verbs adjectives and adverbs from every definition are then disambiguated and linked to the corresponding synsets this increases the connectivity between synsets allowing the retrieval of topically related concepts furthermore the tool transforms the glosses first into logical forms and then into semantic forms using derivational morphology new links are added between the synsets we propose a scheme for attaching sense tags to predicates within the framework of transforming wordnet glosses into a logical form the extended wordnet is a publicly available version of wordnet in which each term occurring in a wordnet gloss is lemmatized and mapped to the synset in which it belongs
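The conceptual density measure used as method 2 above can be approximated along the following lines. This is a rough sketch, not the authors' exact metric: the depth weighting, the normalisation by the size of the noun sub-hierarchy and the depth cut-off are assumptions, and NLTK's WordNet interface stands in for the WordNet 1.6 database used in the paper.

    import math
    import re
    from nltk.corpus import wordnet as wn

    def gloss_words(synset):
        # crude: takes all lower-case word tokens from the gloss rather than parsing out nouns
        return set(re.findall(r"[a-z]+", synset.definition().lower()))

    def noun_context(verb_synset, max_depth=3):
        """Words from glosses in the verb sub-hierarchy, weighted by level."""
        weights = {}
        frontier = [(verb_synset, 0)]
        while frontier:
            syn, depth = frontier.pop()
            for w in gloss_words(syn):
                weights[w] = max(weights.get(w, 0.0), 1.0 / (depth + 1))
            if depth < max_depth:
                frontier.extend((h, depth + 1) for h in syn.hyponyms())
        return weights

    def conceptual_density(verb_synset, noun_synset):
        v_ctx = noun_context(verb_synset)
        n_words = gloss_words(noun_synset)
        for syn in noun_synset.closure(lambda s: s.hyponyms()):
            n_words |= gloss_words(syn)
        common = sum(w for word, w in v_ctx.items() if word in n_words)
        return common / math.log(len(n_words) + 2)   # +2 avoids log of 0 or 1

    def rank_senses(verb, noun):
        pairs = [(v, n) for v in wn.synsets(verb, pos=wn.VERB)
                        for n in wn.synsets(noun, pos=wn.NOUN)]
        return sorted(pairs, key=lambda p: conceptual_density(*p), reverse=True)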
improved alignment models for statistical machine translation in all experiments we use the following three error criteria wer the wer is computed as the minimum number of substitution insertion and deletion operations that have to be performed to convert the generated string into the target string this performance criterion is widely used in speech recognition per a shortcoming of the wer is the fact that it requires a perfect word order this is particularly a problem for the verbmobil task where the word order of the german english sentence pair can be quite different as a result the word order of the automatically generated target sentence can be different from that of the target sentence but nevertheless acceptable so that the wer measure alone could be misleading in order to overcome this problem we introduce as additional measure the positionindependent word error rate this measure compares the words in the two sentences without taking the word order into account words that have no matching counterparts are counted as substitution errors depending on whether the translated sentence is longer or shorter than the target translation the remaining words result in either insertion or deletion errors in addition to substitution errors the per is guaranteed to be less than or equal to the wer sser for a more detailed analysis subjective judgments by test persons are necessary each translated sentence was judged by a human examiner according to an error scale from 0.0 to 1.0 a score of 0.0 means that the translation is semantically and syntactically correct a score of 0.5 means that a sentence is semantically correct but syntactically wrong and a score of 1.0 means that the sentence is semantically wrong the human examiner was offered the translated sentences of the two approaches at the same time as a result we expect a better possibility of reproduction the results of the translation experiments using the singleword based approach and the alignment template approach on text input and on speech input are summarized in table 2 the results are shown with and without the use of domainspecific preprocessing

table 2 error rates in percent for text and speech input word error rate wer positionindependent word error rate per and subjective sentence error rate sser with and without preprocessing

    approach               input    preproc    wer     per     sser
    singleword based       text     no         53.4    38.3    35.7
    singleword based       text     yes        56.0    41.2    35.3
    singleword based       speech   no         67.8    50.1    54.8
    singleword based       speech   yes        67.8    51.4    52.7
    alignment templates    text     no         49.5    35.3    31.5
    alignment templates    text     yes        48.3    35.1    27.2
    alignment templates    speech   no         63.5    45.6    52.4
    alignment templates    speech   yes        62.8    45.6    50.3

the alignment template approach produces better translation results than the singleword based approach from this we draw the conclusion that it is important to model word groups in source and target language considering the recognition word error rate of 31 the degradation of about 20 by speech input can be expected the average translation time on an alpha workstation for a single sentence is about one second for the alignment template approach and 30 seconds for the singleword based search procedure within the verbmobil project other translation modules based on rulebased examplebased and dialogueactbased translation are used we are not able to present results with these methods using our test corpus but in the current verbmobil prototype the preliminary evaluations show that the statistical methods produce comparable or better results than other systems an advantage of the system is that it is robust and always produces a translation result even if the input of the speech recognizer is quite incorrect 5 summary we have described two approaches to perform
statistical machine translation which extend the baseline alignment models the singleword 27 based approach allows for the the possibility of onetomany alignments the alignment template approach uses two different alignment levels a phrase level alignment between phrases and a word level alignment between single words as a result the context of words has a greater influence and the changes in word order from source to target language can be learned explicitly an advantage of both methods is that they learn fully automatically by using a bilingual training corpus and are capable of achieving better translation results on a limiteddomain task than other examplebased or rulebased translation systems acknowledgment this work has been partially supported as part of the verbmobil project by the german federal ministry of education science research and technology and as part of the eutrans project by the by the european community the goal of machine translation is the translation of a text given in some source language into a target languagewe are given a source string f fifjfj which is to be translated into a target string ef eieelamong all possible target strings we will choose the string with the highest probability the argmax operation denotes the search problem ie the generation of the output sentence in the target languagepr is the language model of the target language whereas pr is the translation modelmany statistical translation models try to model wordtoword correspondences between source and target wordsthe model is often further restricted that each source word is assigned exactly one target wordthese alignment models are similar to the concept of hidden markov models in speech recognitionthe alignment mapping is j i aj from source position j to target position i a3the use of this alignment model raises major problems as it fails to capture dependencies between groups of wordsas experiments have shown it is difficult to handle different word order and the translation of compound nounsin this paper we will describe two methods for statistical machine translation extending the baseline alignment model in order to account for these problemsin section 2 we shortly review the singleword based approach described in with some recently implemented extensions allowing for onetomany alignmentsin section 3 we describe the alignment template approach which explicitly models shallow phrases and in doing so tries to overcome the above mentioned restrictions of singleword alignmentsthe described method is an improvement of resulting in an improved training and a faster search organizationthe basic idea is to model two different alignment levels a phrase level alignment between phrases and a word level alignment between single words within these phrasessimilar aims are pursued by but differently approachedin section 4 we compare the two methods using the verbmobil taskin this section we shortly review a translation approach based on the socalled monotonicity requirement our aim is to provide a basis for comparing the two different translation approaches presentedin eq pr is the language model which is a trigram language model in this casefor the translation model pr we make the assumption that each source word is aligned to exactly one target word for our model the probability of alignment al for position j depends on the previous alignment position a3_1 using this assumption there are two types of probabilities the alignment probabilities denoted by p and the lexicon probabilities denoted by pthe 
string translation probability can be rewritten for the training of the above model parameters we use the maximum likelihood criterion in the socalled maximum approximationwhen aligning the words in parallel texts we typically observe a strong localization effectin many cases although not always there is an even stronger restriction over large portions of the source string the alignment is monotonein this approach we first assume that the alignments satisfy the monotonicity requirementwithin the translation search we will introduce suitably restricted permutations of the source string to satisfy this requirementfor the alignment model the monotonicity property allows only transitions from a3_1 to ai with a jump width 6 6 a3 ai_i e ici 1 21theses jumps correspond to the following three cases new target word is generated 8 2 this case corresponds to skipping a word ie there is a word in the target string with no aligned word in the source stringthe possible alignments using the monotonicity assumption are illustrated in fig1monotone alignments are paths through this uniform trellis structureusing the concept of monotone alignments a search procedure can be formulated which is equivalent to finding the best path through a translation lattice where the following auxiliary quantity is evaluated using dynamic programming here e and e are qe probability of the best partial hypothesis where each source word is mapped to a target position a3 and a word ea at this positionfor a trigram language model the following dp recursion equation is evaluated p is the alignment probability for the three cases above pe i denoting the trigram language model e e equot equot are the four final words which are considered in the dynamic programming taking into account the monotonicity restriction and a trigram language modelthe dp equation is evaluated recursively to find the best partial path to each grid point no explicit length model for the length of the generated target string ef given the source string fi is used during the generation processthe length model is implicitly given by the alignment probabilitiesthe optimal translation is obtained by carrying out the following optimization where j is the length of the input sentence and is a symbol denoting the sentence endthe complexity of the algorithm for full search is j e4 where e is the size of the target language vocabularyhowever this is drastically reduced by beamsearchthe baseline alignment model does not permit that a source word is aligned with two or more target wordstherefore lexical correspondences like zahnarzttermin for dentist appointment because problems because a single source word must be mapped on two or more target wordsto solve this problem for the alignment in training we first reverse the translation direction i e english is now the source language and german is the target languagefor this reversed translation direction we perform the usual training and then check the alignment paths obtained in the maximum approximationwhenever a german word is aligned with a sequence of the adjacent english words this sequence is added to the english vocabulary as an additional entryas a result we have an extended english vocabularyusing this new vocabulary we then perform the stan23 extension to handle nonmonotonicity our approach assumes that the alignment is monotone with respect to the word order for the lion share of all word alignmentsfor the translation direction germanenglish the monotonicity constraint is violated mainly with respect to the verb 
groupin german the verb group usually consists of a left and a right verbal brace whereas in english the words of the verb group usually form a sequence of consecutive wordsfor our dp search we use a lefttoright beamsearch concept having been introduced in speech recognition where we rely on beamsearch as an efficient pruning technique in order to handle potentially huge search spacesour ultimate goal is speech translation aiming at a tight integration of speech recognition and translation the results presented were obtained by using a quasimonotone search procedure which proceeds from left to right along the position of the source sentence but allows for a small number of source positions that are not processed monotonicallythe word reorderings of the source sentence positions were restricted to the words of the german verb groupdetails of this approach will be presented elsewherea general deficiency of the baseline alignment models is that they are only able to model correspondences between single wordsa first countermeasure was the refined alignment model described in section 22a more systematic approach is to consider whole phrases rather than single words as the basis for the alignment modelsin other words a whole group of adjacent words in the source sentence may be aligned with a whole group of adjacent words in the target languageas a result the context of words has a greater influence and the changes in word order from source to target language can be learned explicitly31 the word level alignment alignment templates in this section we will describe how we model the translation of shallow phrasesthe key element of our translation model are the alignment templates_ an alignment template z is a triple which describes the alignment a between a source class sequence f and a target class sequence e the alignment a is represented as a matrix with binary valuesa matrix element with value 1 means that the words at the corresponding positions are aligned and the value 0 means that the words are not alignedif a source word is not aligned to a target word then it is aligned to the empty word eo which shall be at the imaginary position i 0this alignment representation is a generalization of the baseline alignments described in and allows for manytomany alignmentsthe classes used in p and e are automatically trained bilingual classes using the method described in and constitute a partition of the vocabulary of source and target languagethe class functions t and e map words to their classesthe use of classes instead of words themselves has the advantage of a better generalizationif there exist classes in source and target language which contain all towns it is possible that an alignment template learned using a special town can be generalized to all townsin fig2 an example of an alignment template is shownan alignment template z is applicable to a sequence of source words i if the alignment template classes and the classes of the source words are equal f f the application of the alignment template z constrains the target words6 to correspond to the target class sequence e e the application of an alignment template does not determine the target words but only constrains themfor the selection of words from classes we use a statistical model for p based on the lexicon probabilities of a statistical lexicon pwe assume a mixture alignment between the source and target language words constrained by the alignment matrix a in order to describe the phrase level alignment in a formal way we first 
decompose both the source sentence fij and the target sentence ef into a sequence of phrases in order to simplify the notation and the presentation we ignore the fact that there can be a large number of possible segmentations and assume that there is only one segmentationin the previous section we have described the alignment within the phrasesfor the alignment af between the source phrases ë and the target phrases fr we obtain the following equation for the phrase level alignment we use a firstorder alignment model p p which is in addition constrained to be a permutation of the k phrasesfor the translation of one phrase we introduce the alignment template as an unknown variable the probability p to apply an alignment template gets estimated by relative frequencies the probability p is decomposed by eqin this section we show how we obtain the parameters of our translation model by using a parallel training corpus rections f e and e f by applying the themalgorithmhowever we do not apply maximum approximation in training thereby obtaining slightly improved alignments2for each translation direction we calculate the viterbialignment of the translation models determined in the previous stepthus we get two alignment vectors crj1 and bf for each sentencewe increase the quality of the alignments by combining the two alignment vectors into one alignment matrix using the following methoda1 lj 1 j and a2 li 1 denote the set of links in the two viterbialignmentsin a first step the intersection a a1 n a2 is determinedthe elements within a are justified by both viterbialignments and are therefore very reliablewe now extend the alignment a iteratively by adding links occurring only in a1 or in a2 if they have a neighbouring link already in a or if neither the word fi nor the word ei are aligned in athe alignment has the neighbouring links and in the verbmobil task the precision of the baseline viterbi alignments is 833 percent with english as source language and 818 percent with german as source languageusing this heuristic we get an alignment matrix with a precision of 884 percent without loss in recall3we estimate a bilingual word lexicon p by the relative frequencies of the alignment determined in the previous step here na is the frequency that the word f is aligned to e and n is the frequency of e in the training corpus4we determine word classes for source and target languagea naive approach for doing this would be the use of monolingually optimized word classes in source and target languageunfortunately we can not expect that there is a direct correspondence between independently optimized classestherefore monolingually optimized word classes do not seem to be useful for machine translationwe determine correlated bilingual classes by using the method described in the basic idea of this method is to apply a maximumlikelihood approach to the joint probability of the parallel training corpusthe resulting optimization criterion for the bilingual word classes is similar to the one used in monolingual maximumlikelihood word clustering5we count all phrasepairs of the training corpus which are consistent with the alignment matrix determined in step 2a phrasepair is consistent with the alignment if the words within the source phrase are only aligned to words within the target phrasethus we obtain a count n of how often an alignment template occurred in the aligned training corpusthe probability of using an alignment template needed by eq is estimated by relative frequency fig3 shows some of the extracted 
alignment templatesthe extraction algorithm does not perform a selection of good or bad alignment templates it simply extracts all possible alignment templatesfor decoding we use the following search criterion this decision rule is an approximation to eq which would use the translation probability pusing the simplification it is easy to integrate translation and language model in the search process as both models predict target wordsas experiments have shown this simplification does not affect the quality of translation resultsto allow the influence of long contexts we use a classbased fivegram language model with backingoffthe search space denoted by eq is very largetherefore we apply two preprocessing steps before the translation of a sentence 1we_determine the set of all source phrases in f for which an applicable alignment template existsevery possible application of an alignment template to a subsequence of the source sentence is called alignment template instantiation2we now perform a segmentation of the input sentencewe search for a sequence of this is done efficiently by dynamic programmingbecause of the simplified decision rule it is used in eq p instead of pafterwards the actual translation process beginsit has a search organization along the positions of the target language stringin search we produce partial hypotheses each of which contains the following information a partial hypothesis is extended by appending one target wordthe set of all partial hypotheses can be structured as a graph with a source node representing the sentence start leaf nodes representing full translations and intermediate nodes representing partial hypotheseswe recombine partial hypotheses which cannot be distinguished by neither language model nor translation modelwhen the elements 1 5 of two partial hypotheses do not allow to distinguish between two hypotheses it is possible to drop the hypothesis with higher costs for the subsequent search processwe also use beamsearch in order to handle the huge search spacewe compare in beamsearch hypotheses which cover different parts of the input sentencethis makes the comparison of the costs somewhat problematictherefore we integrate an estimation of the remaining costs to arrive at a full translationthis can be done efficiently by determining in advance for each word in the source language sentence a lower bound for the costs of the translation of this wordtogether with the bitvector stored in a partial hypothesis it is possible to achieve an efficient estimation of the remaining coststhe quotverbmobil taskquot is a speech translation task in the domain of appointment scheduling travel planning and hotel reservationthe task is difficult because it consists of spontaneous speech and the syntactic structures of the sentences are less restricted and highly variablethe translation direction is from german to english which poses special problems due to the big difference in the word order of the two languageswe present results on both the text transcription and the speech recognizer output using the alignment template approach and the singleword based approachthe text input was obtained by manually transcribing the spontaneously spoken sentencesthere was no constraint on the length of the sentences and some of the sentences in the test corpus contain more than 50 wordstherefore for text input each sentence is split into shorter units using the punctuation marksthe segments thus obtained were translated separately and the final translation was obtained by 
concatenationin the case of speech input the speech recognizer along with a prosodic module produced socalled prosodic markers which are equivalent to punctuation marks in written languagethe experiments for speech input were performed on the singlebest sentence of the recognizerthe recognizer had a word error rate of 310considering only the real words without the punctuation marks the word error rate was smaller namely 203a summary of the corpus used in the experiments is given in table 1here the term word refers to fullform word as there is no morphological processing involvedin some of our experiments we use a domainspecific preprocessing which consists of a list of 803 and 458 wordjoinings and wordsplittings for word compounds numbers dates and proper namesto improve the lexicon probabilities and to account for unseen words we added a manually created germanenglish dictionary with 13 388 entriesthe classes used were constrained so that all proper names were included in a single classapart from this the classes were automatically trained using the described bilingual clustering methodfor each of the two languages 400 classes were usedfor the singleword based approach we used the manual dictionary as well as the preprocessing steps described aboveneither the translation model nor the language model used classes in this casein principal when reordering words of the source string words of the german verb group could be moved over punctuation marks although it was penalized by a constant costthe wer is computed as the minimum number of substitution insertion and deletion operations that have to be performed to convert the generated string into the target stringthis performance criterion is widely used in speech recognitiona shortcoming of the wer is the fact that it requires a perfect word orderthis is table 2 experiments for text and speech input word error rate positionindependent word error rate and subjective sentence error rate withwithout preprocessing particularly a problem for the verbmobil task where the word order of the germanenglish sentence pair can be quite differentas a result the word order of the automatically generated target sentence can be different from that of the target sentence but nevertheless acceptable so that the wer measure alone could be misleadingin order to overcome this problem we introduce as additional measure the positionindependent word error rate this measure compares the words in the two sentences without taking the word order into accountwords that have no matching counterparts are counted as substitution errorsdepending on whether the translated sentence is longer or shorter than the target translation the remaining words result in either insertion or deletion errors in addition to substitution errorsthe per is guaranteed to be less than or equal to the werfor a more detailed analysis subjective judgments by test persons are necessaryeach translated sentence was judged by a human examiner according to an error scale from 00 to 10a score of 00 means that the translation is semantically and syntactically correct a score of 05 means that a sentence is semantically correct but syntactically wrong and a score of 10 means that the sentence is semantically wrongthe human examiner was offered the translated sentences of the two approaches at the same timeas a result we expect a better possibility of reproductionthe results of the translation experiments using the singleword based approach and the alignment template approach on text input and on speech input 
are summarized in table 2the results are shown with and without the use of domainspecific preprocessingthe alignment template approach produces better translation results than the singleword based approachfrom this we draw the conclusion that it is important to model word groups in source and target languageconsidering the recognition word error rate of 31 the degradation of about 20 by speech input can be expectedthe average translation time on an alpha workstation for a single sentence is about one second for the alignment template approach and 30 seconds for the singleword based search procedurewithin the verbmobil project other translation modules based on rulebased examplebased and dialogueactbased translation are usedwe are not able to present results with these methods using our test corpusbut in the current verbmobil prototype the preliminary evaluations show that the statistical methods produce comparable or better results than the other systemsan advantage of the system is that it is robust and always produces a translation result even if the input of the speech recognizer is quite incorrectwe have described two approaches to perform statistical machine translation which extend the baseline alignment modelsthe singleword based approach allows for the possibility of onetomany alignmentsthe alignment template approach uses two different alignment levels a phrase level alignment between phrases and a word level alignment between single wordsas a result the context of words has a greater influence and the changes in word order from source to target language can be learned explicitlyan advantage of both methods is that they learn fully automatically by using a bilingual training corpus and are capable of achieving better translation results on a limiteddomain task than other examplebased or rulebased translation systemsthis work has been partially supported as part of the verbmobil project by the german federal ministry of education science research and technology and as part of the eutrans project by the european community
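The two automatic metrics described above (WER and PER) can be made concrete with a short sketch. This is a minimal illustration in Python rather than the authors' evaluation code; the function names and the example sentence pair are invented, and the paper's subjective sentence error rate, which needs human judges, is not reproduced here.

from collections import Counter


def wer(reference, hypothesis):
    """Word error rate: minimum number of substitutions, insertions and
    deletions needed to turn the hypothesis into the reference, divided by
    the reference length (Levenshtein distance over words)."""
    r, h = reference, hypothesis
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)


def per(reference, hypothesis):
    """Position-independent word error rate: the two sentences are compared
    as bags of words, so word order is ignored; unmatched words count as
    substitutions and the length difference as insertions or deletions.
    It is therefore never larger than the WER."""
    matched = sum((Counter(reference) & Counter(hypothesis)).values())
    errors = max(len(reference), len(hypothesis)) - matched
    return errors / len(reference)


if __name__ == "__main__":
    ref = "we could meet on friday afternoon".split()
    hyp = "on friday afternoon we could meet".split()
    print(wer(ref, hyp), per(ref, hyp))

On this reordered example the script prints a WER of 1.0 but a PER of 0.0, which is exactly the word-order effect the PER is meant to factor out of the comparison.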
W99-0604
improved alignment models for statistical machine translationin this paper we describe improved alignment models for statistical machine translationthe statistical translation approach uses two types of information a translation model and a language modelthe language model used is a bigram or general mgram modelthe translation model is decomposed into a lexical and an alignment modelwe describe two different approaches for statistical translation and present experimental resultsthe first approach is based on dependencies between single words the second approach explicitly takes shallow phrase structures into account using two different alignment levels a phrase level alignment between phrases and a word level alignment between single wordswe present results using the verbmobil task which is a limiteddomain spokenlanguage taskthe experimental tests were performed on both the text transcription and the speech recognizer outputto obtain the best single alignment we use a posthoc algorithm to merge directional alignmentswe propose a heuristic where all the aligned phrase pairs satisfying the following criteria are extracted the source and target phrases consist of consecutive words of the sentence pair x and y and both have length at most k the subalignment between the words of the two phrases induced by the sentence alignment a contains at least one link and there are no links in a that have just one end inside the phrase pair
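The phrase-pair extraction heuristic described at the end of the summary can be written down directly. The sketch below illustrates the consistency criterion under the assumption that the word alignment is given as a set of (source index, target index) links; the German-English sentence pair and the alignment in the example are invented for illustration, and the real system learns its alignments with the models described above.

def extract_phrase_pairs(src, tgt, alignment, max_len):
    """Enumerate all phrase pairs (src[i1:i2], tgt[j1:j2]) such that
    - both phrases are contiguous and at most max_len words long,
    - at least one alignment link falls inside the pair, and
    - no link has exactly one end inside the pair (consistency)."""
    pairs = []
    for i1 in range(len(src)):
        for i2 in range(i1 + 1, min(i1 + max_len, len(src)) + 1):
            for j1 in range(len(tgt)):
                for j2 in range(j1 + 1, min(j1 + max_len, len(tgt)) + 1):
                    inside = [(i, j) for (i, j) in alignment
                              if i1 <= i < i2 and j1 <= j < j2]
                    if not inside:
                        continue  # at least one link is required
                    crossing = any((i1 <= i < i2) != (j1 <= j < j2)
                                   for (i, j) in alignment)
                    if not crossing:
                        pairs.append((tuple(src[i1:i2]), tuple(tgt[j1:j2])))
    return pairs


if __name__ == "__main__":
    src = "wir treffen uns am freitag".split()
    tgt = "we meet on friday".split()
    links = {(0, 0), (1, 1), (2, 1), (3, 2), (4, 3)}
    for s, t in extract_phrase_pairs(src, tgt, links, max_len=3):
        print(" ".join(s), "|||", " ".join(t))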
noun phrase coreference as clustering this paper introduces a new unsupervised algorithm for noun phrase coreference resolution it differs from existing methods in that it views coreference resolution as a clustering task in an evaluation on the muc6 coreference resolution corpus the algorithm achieves an fmeasure of 536 placing it firmly between the worst and best systems in the muc6 evaluation more importantly the clustering approach outperforms the only muc6 system to treat coreference resolution as a learning problem the clustering algorithm appears to provide a flexible mechanism for coordinating the application of contextindependent and contextdependent constraints and preferences for accurate partitioning of noun phrases into coreference equivalence classes many natural language processing applications require accurate noun phrase coreference resolution they require a means for determining which noun phrases in a text or dialogue refer to the same realworld entitythe vast majority of algorithms for noun phrase coreference combine syntactic and less often semantic cues via a set of handcrafted heuristics and filtersall but one system in the muc6 coreference performance evaluation for example handled coreference resolution in this mannerthis same reliance on complicated handcrafted algorithms is true even for the narrower task of pronoun resolutionsome exceptions exist howeverge et al present a probabilistic model for pronoun resolution trained on a small subset of the penn treebank wall street journal corpus dagan and itai develop a statistical filter for resolution of the pronoun quotitquot that selects among syntactically viable antecedents based on relevant subjectverbobject cooccurrencesaone and bennett and mccarthy and lehnert employ decision tree algorithms to handle a broader subset of general noun phrase coreference problemsthis paper presents a new corpusbased approach to noun phrase coreferencewe believe that it is the first such unsupervised technique developed for the general noun phrase coreference taskin short we view the task of noun phrase coreference resolution as a clustering taskfirst each noun phrase in a document is represented as a vector of attributevalue pairsgiven the feature vector for each noun phrase the clustering algorithm coordinates the application of contextindependent and contextdependent coreference constraints and preferences to partition the noun phrases into equivalence classes one class for each realworld entity mentioned in the textcontextindependent coreference constraints and preferences are those that apply to two noun phrases in isolationcontextdependent coreference decisions on the other hand consider the relationship of each noun phrase to surrounding noun phrasesin an evaluation on the muc6 coreference resolution corpus our clustering approach achieves an fmeasure of 536 placing it firmly between the worst and best systems in the muc6 evaluationmore importantly the clustering approach outperforms the only muc6 system to view coreference resolution as a learning problem the resolve system employs decision tree induction and achieves an fmeasure of 47 on the muc6 data setfurthermore our approach has a number of important advantages over existing learning and nonlearning methods for coreference resolution as a result we believe that viewing noun phrase coreference as clustering provides a promising framework for corpusbased coreference resolutionthe remainder of the paper describes the details of our approachthe next section provides a 
concrete specification of the noun phrase coreference resolution tasksection 3 presents the clustering algorithmevaluation of the approach appears in section 4qualitative and quantitative comparisons to related work are included in section 5it is commonly observed that a human speaker or author avoids repetition by using a variety of noun phrases to refer to the same entitywhile human audiences have little trouble mapping a collection of noun phrases onto the same entity this task of noun phrase coreference resolution can present a formidable challenge to an nlp systemfigure 1 depicts a typical coreference resolution system which takes as input an arbitrary document and produces as output the appropriate coreference equivalence classesthe subscripted noun phrases in the sample output constitute two noun phrase coreference equivalence classes class js contains the five noun phrases that refer to john simon and class pc contains the two noun phrases that represent prime corpthe figure also visually links neighboring coreferent noun phrasesthe remaining noun phrases have no coreferent nps and are considered singleton equivalence classeshandling the js class alone requires recognizing coreferent nps in appositive and genitive constructions as well as those that occur as proper names possessive pronouns and definite npsour approach to the coreference task stems from the observation that each group of coreferent noun phrases defines an equivalence classltherefore it is natural to view the problem as one of partitioning or clustering the noun phrasesintuitively all of the noun phrases used to describe a specific concept will be quotnearquot or related in some way ie their conceptual quotdistancequot will be smallgiven a description of each noun phrase and a method for measuring the distance between two noun phrases a clustering algorithm can then group noun phrases together noun phrases with distance greater than a clustering radius r are not placed into the same partition and so are not considered coreferentthe subsections below describe the noun phrase representation the distance metric and the clustering algorithm in turngiven an input text we first use the empire noun phrase finder to locate all noun phrases in the textnote that empire identifies only base noun phrases ie simple noun phrases that contain no other smaller noun phrases within themfor example chief financial officer of prime corp is too complex to be a base noun phraseit contains two base noun phrases chief financial officer and prime corp each noun phrase in the input text is then represented as a set of 11 features as shown in table 1this noun phrase representation is a first approximation to the feature vector that would be required for accurate coreference resolutionall feature values are automatically generated and therefore are not always perfectin particular we use very simple heuristics to approximate the behavior of more complex feature value computations individual wordsthe words contained in the noun phrase are stored as a featurehead nounthe last word in the noun phrase is considered the head nounpositionnoun phrases are numbered sequentially starting at the beginning of the documentpronoun typepronouns are marked as one of nominative accusative possessive or ambiguous all other noun phrases obtain the value none for this featurearticleeach noun phrase is marked indefinite definite or noneappositivehere we use a simple overly restrictive heuristic to determine whether or not the noun phrase is in a appositive 
construction if the noun phrase is surrounded by commas contains an article and is immediately preceded by another noun phrase then it is marked as an appositivenumberif the head noun ends in an then the noun phrase is marked plural otherwise it is considered singularexpressions denoting money numbers or percentages are also marked as pluralproper nameproper names are identified by looking for two adjacent capitalized words optionally containing a middle initialsemantic classhere we use wordnet to obtain coarse semantic information for the head nounthe head noun is characterized as one of time city animal human or objectif none of these classes pertains to the head noun its immediate parent in the class hierarchy is returned as the semantic class egpayment for the head noun pay in np6 of table 1a separate algorithm identifies numbers money and companysgendergender is determined using wordnet and a list of common first namesanimacynoun phrases classified as human or animal are marked anim all other nps are considered inanimnext we define the following distance metric between two noun phrases where f corresponds to the np feature set described above incompatibility f is a function that returns a value between 0 and 1 inclusive and indicates the degree of incompatibility of f for n pi and np and w1 denotes the relative importance of compatibility wrt feature f the incompatibility functions and corresponding weights are listed in table 22 in general weights are chosen to represent linguistic knowledge about coreferenceterms with a weight of oo represent filters that rule out impossible antecedents two noun phrases can never corefer when they have incompatible values for that term featurein the current version of our system the number proper name semantic class gender and animacy features operate as coreference filtersconversely terms with a weight of oo force coreference between two noun phrases with compatible values for that term featurethe appositive and wordssubstring terms operate in this fashion in the current distance metricterms with a weight of are the clustering radius threshold implement a preference that two nps not be coreferent if they are incompatible wrt that term featureas will be explained below however two such nps can be merged into the same equivalence class by the clustering algorithm if there is enough other evidence that they are similar sufficiently close to bothall other terms obtain weights selected using the development corpusalthough additional testing is required our current results indicate that these weights are sensitive to the distance metric but probably not to the corpuswhen computing a sum that involves both oo and oo we choose the more conservative route and the oo distance takes precedence an example of where this might occur is in the following sentence i reardon steel co manufactures several thousand tons of 2 steel each weekhere npi subsumes np2 giving them a distance of oo via the word substring term however npi semantic class is company and np2 class is object generating a distance of oo via the semantic class featuretherefore dist oo and the two noun phrases are not considered coreferentthe coreference distance metric is largely contextindependent in that it determines the distance between two noun phrases using very little if any of their intervening or surrounding contextthe clustering algorithm described below is responsible for coordinating these local coreference decisions across arbitrarily long contexts and thus implements a series of 
contextdependent coreference decisionsthe clustering algorithm is given in figure 2because noun phrases generally refer to noun phrases that precede them we start at the end of the document and work backwardseach noun phrase is compared to all preceding noun phrasesif the distance between two noun phrases is less than the clustering radius r then their classes are considered for possible mergingtwo coreference equivalence classes can be merged unless there exist any incompatible nps in the classes to be mergedit is useful to consider the application of our algorithm to an excerpt from a document i the chairman spoke with 2 ms white yesterday3 he the noun phrase instances for this fragment are shown in table 3initially npi np2 and np3 are all singletons and belong to coreference classes c2 and c3 respectivelywe begin by considering np3dist oo due to a mismatch on gender so they are not considered for possible mergingnext we calculate the distance from npi to np3pronouns are not expected to match when the words of two noun phrases are compared so there is no penalty here for word mismatchesthe penalty for their difference in position is dependent on the length of the documentfor illustration assume that this is less than r thus dist r they will not be considered coreferent and the resulting equivalence classes will be the chairman he ms whiteotherwise the distance is r according to the distance measure but as long as that distance is not oo np can be in the same class as npkthe distance measure operates on two noun phrases in isolation but the clustering algorithm can and does make use of intervening np information intervening noun phrases can form a chain that links otherwise distant npsby separating contextindependent and recall levels are fairly lowthe quotadjustedquot figures of table 4 reflect this upper bound on recallconsidering only coreference links between base noun phrases the clustering algorithm obtains a recall of 724 on the dryrun and 759 on the formal evaluationanother source of error is inaccurate and inadequate np feature vectorsour procedure for computing semantic class values for example is responsible for many errors it sometimes returns incorrect values and the coarse semantic class distinctions are often inadequatewithout a better named entity finder computing feature vectors for proper nouns is difficultother errors result from a lack of thematic and grammatical role informationthe lack of discourserelated topic and focus information also limits system performancein addition we currently make no special attempt to handle reflexive pronouns and pleonastic quotitquotlastly errors arise from the greedy nature of the clustering algorithmnoun phrase np is linked to every preceding noun phrase n13 that is compatible and within the radius r and that link can never be undonewe are considering three possible ways to make the algorithm less aggressively greedyfirst for each np3 instead of considering every previous noun phrase the algorithm could stop on finding the first compatible antecedentsecond for each npj the algorithm could rank all possible antecedents and then choose the best one and link only to that onelastly the algorithm could rank all possible coreference links and then proceed through them in ranked order thus progressing from the links it is most confident about to those it is less certain offuture work will include a more detailed error analysisexisting systems for noun phrase coreference resolution can be broadly characterized as learning and nonlearning 
approachesall previous attempts to view coreference as a learning problem treat coreference resolution as a classification task the algorithms classify a pair of noun phrases as coreferent or notboth mlr and resolve for example apply the c45 decision tree induction algorithm to the taskas supervised learning algorithms both systems require a fairly large amount of training data that has been annotated with coreference resolution informationour approach on the other hand uses unsupervised learning4 and requires no training data5 in addition both mlr and resolve require an additional mechanism to coordinate the collection of pairwise coreference decisionswithout this mechanism it is possible that the decision tree classifies np i and np i as coreferent and np i and npk as coreferent but np i and npk as not coreferentin an evaluation on the muc6 data set resolve achieves an fmeasure of 47the muc6 evaluation also provided results for a large number of nonlearning approaches to coreference resolutiontable 6 provides a comparison of our results to the best and worst of these systemsmost implemented a series of linguistic constraints similar in spirit to those employed in our systemthe main advantage of our approach is that all constraints and preferences are represented neatly in the distance metric allowing for simple modification of this measure to incorporate new knowledge sourcesin addition we anticipate being able to automatically learn the weights used in the distance metricthere is also a growing body of work on the narrower task of pronoun resolutionazzam et al for example describe a focusbased approach that incorporates discourse information when resolving pronounslappin and leass make use of a series of filters to rule out impossible antecedents many of which are similar to our ooincompatibilitiesthey also make use of more extensive syntactic information and thus require a fuller parse of the input textge et al present a supervised probabilistic algorithm that assumes a full parse of the input textdagan and itai present a hybrid fullparseunsupervised learning approach that focuses on resolving quotitquotdespite a large corpus their approach suffers from sparse data problems but works well when enough relevant data is availablelastly cardie presents a casebased learning approach for relative pronoun disambiguationour clustering approach differs from this previous work in several waysfirst because we only require the noun phrases in any input text we do not require a full syntactic parsealthough we would expect increases in performance if complex noun phrases were used our restriction to base nps does not reflect a limitation of the clustering algorithm but rather a selfimposed limitation on the preprocessing requirements of the approachsecond our approach is unsupervised and requires no annotation of training data nor a large corpus for computing statistical occurrencesfinally we handle a wide array of noun phrase coreference beyond just pronoun resolutionwe have presented a new approach to noun phrase coreference resolution that treats the problem as a clustering taskin an evaluation on the muc6 coreference resolution data set the approach achieves very promising results outperforming the only other corpusbased learning approach and producing recall and precision scores that place it firmly between the best and worst coreference systems in the evaluationin contrast to other approaches to coreference resolution ours is unsupervised and offers several potential advantages over existing 
methods no annotated training data is required the distance metric can be easily extended to account for additional linguistic information as it becomes available to the nlp system and the clustering approach provides a flexible mechanism for combining a variety of constraints and preferences to impose a partitioning on the noun phrases in a text into coreference equivalence classesnevertheless the approach can be improved in a number of waysadditional analysis and evaluation on new corpora are required to determine the generality of the approachour current distance metric and noun phrase instance representation are only first and admittedly very coarse approximations to those ultimately required for handling the wide variety of anaphoric expressions that comprise noun phrase coreferencewe would also like to make use of cues from centering theory and plan to explore the possibility of learning the weights associated with each term in the distance metricour methods for producing the noun phrase feature vector are also overly simplisticnevertheless the relatively strong performance of the technique indicates that clustering constitutes a powerful and natural approach to noun phrase coreference resolutionthis work was supported in part by nsf grant iri9624639 and a national science foundation graduate fellowshipwe would like to thank david pierce for his formatting and technical advice
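A compact sketch of the two components at the heart of the approach described above: the weighted incompatibility distance with its infinite-weight filter and forcing terms, and the backwards greedy clustering that merges equivalence classes when the distance falls below the radius r. The feature set, term functions, weights and example noun phrases below are simplified stand-ins chosen for illustration, not the paper's actual values; only the control flow (plus-infinity taking precedence over minus-infinity, and the incompatibility check before merging classes) follows the text.

import math

INF = math.inf

# Each noun phrase is a dict of feature values -- a simplified stand-in for
# the 11-feature representation described above.

def word_overlap(a, b):
    # Paired with weight -inf below: a value of 1 forces coreference when one
    # NP's word set subsumes the other's; the "Reardon Steel Co." / "steel"
    # example shows why the +inf filters must take precedence over this.
    wa, wb = set(a["words"]), set(b["words"])
    return 1.0 if wa <= wb or wb <= wa else 0.0

def mismatch(feature):
    # Generic incompatibility test: 1 when the two NPs disagree on a feature.
    def f(a, b):
        return 0.0 if a[feature] == b[feature] else 1.0
    return f

# (weight, term function) pairs: +inf terms filter out impossible antecedents,
# -inf terms force coreference, finite weights are soft preferences.
TERMS = [
    (-INF, word_overlap),
    (INF, mismatch("number")),
    (INF, mismatch("gender")),
    (INF, mismatch("semclass")),
    (1.0, mismatch("article")),
    (0.5, lambda a, b: abs(a["position"] - b["position"]) / 100.0),
]

def distance(a, b):
    """Weighted sum of term values; if both +inf and -inf terms fire, the
    +inf (incompatibility) side wins, as described in the text."""
    total, pos_inf, neg_inf = 0.0, False, False
    for w, f in TERMS:
        v = f(a, b)
        if v == 0.0:
            continue
        if w == INF:
            pos_inf = True
        elif w == -INF:
            neg_inf = True
        else:
            total += w * v
    if pos_inf:
        return INF
    if neg_inf:
        return -INF
    return total

def cluster(nps, radius):
    """Greedy clustering: walk the NPs from the end of the document backwards,
    compare each NP with every preceding one, and merge their classes when the
    distance is below the radius and the merged class would contain no two
    mutually incompatible (distance == +inf) noun phrases."""
    cls = {i: {i} for i in range(len(nps))}
    for j in range(len(nps) - 1, -1, -1):
        for i in range(j - 1, -1, -1):
            if cls[i] is cls[j] or distance(nps[i], nps[j]) >= radius:
                continue
            if any(distance(nps[a], nps[b]) == INF
                   for a in cls[i] for b in cls[j]):
                continue
            merged = cls[i] | cls[j]
            for k in merged:
                cls[k] = merged
    return {frozenset(c) for c in cls.values()}

if __name__ == "__main__":
    nps = [
        {"words": ("john", "simon"), "number": "sing", "gender": "masc",
         "semclass": "human", "article": "none", "position": 1},
        {"words": ("prime", "corp"), "number": "sing", "gender": "neut",
         "semclass": "company", "article": "none", "position": 2},
        {"words": ("he",), "number": "sing", "gender": "masc",
         "semclass": "human", "article": "none", "position": 3},
    ]
    # -> {frozenset({0, 2}), frozenset({1})}: "john simon" + "he" vs "prime corp"
    print(cluster(nps, radius=4.0))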
W99-0611
noun phrase coreference as clusteringthis paper introduces a new unsupervised algorithm for noun phrase coreference resolutionit differs from existing methods in that it views coreference resolution as a clustering taskin an evaluation on the muc6 coreference resolution corpus the algorithm achieves an fmeasure of 536 placing it firmly between the worst and best systems in the muc6 evaluationmore importantly the clustering approach outperforms the only muc6 system to treat coreference resolution as a learning problemthe clustering algorithm appears to provide a flexible mechanism for coordinating the application of contextindependent and contextdependent constraints and preferences for accurate partitioning of noun phrases into coreference equivalence classeswe combine the use of wordnet with proper name gazetteers in order to obtain information on the compatibility of coreferential nps in their clustering algorithmapproaches to coreference resolution that rely only on clustering can easily enforce transitivitywe use pairwise np distances to cluster document mentionsour system uses the node distance in wordnet as one component in the distance measure that guides their clustering algorithmcoreference resolution is performed in two phases a binary classification phase in which the likelihood of coreference for each pair of noun phrases is assessed and a partitioning phase in which the clusters of mutually coreferring nps are formed maximizing some global criterion
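The summary's closing remark describes a two-phase view of the task: a pairwise classification phase followed by a partitioning phase that groups mutually coreferring noun phrases. A minimal way to realise the second phase, assuming the first phase has already produced pairwise coreference scores, is to link every pair above a threshold and take connected components with a union-find structure, which enforces transitivity by construction; the scores and threshold in the example are placeholders, not output of any particular classifier.

class UnionFind:
    """Minimal union-find used to turn pairwise coreference decisions into
    a transitive partition of the mentions."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)


def partition(n_mentions, pair_scores, threshold=0.5):
    """pair_scores maps (i, j) mention-index pairs to a coreference
    likelihood from the classification phase; the partitioning phase links
    every pair above the threshold and returns the equivalence classes."""
    uf = UnionFind(n_mentions)
    for (i, j), score in pair_scores.items():
        if score >= threshold:
            uf.union(i, j)
    classes = {}
    for i in range(n_mentions):
        classes.setdefault(uf.find(i), []).append(i)
    return list(classes.values())


if __name__ == "__main__":
    scores = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.1}
    print(partition(4, scores))  # [[0, 1, 2], [3]] -- transitivity enforced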
language independent named entity recognition combining morphological and contextual evidence identifying and classifying personal geographic institutional or other names in a text is an important task for numerous applications this paper describes and evaluates a languageindependent bootstrapping algorithm based on iterative learning and reestimation of contextual and morphological patterns captured in hierarchically smoothed trie models the algorithm learns from unannotated text and achieves competitive performance when trained on a very short labelled name list with no other required languagespecific information tokenizers or tools the ability to determine the named entities in a text has been established as an important task for several natural language processing areas including information retrieval machine translation information extraction and language understandingfor the 1995 message understanding conference a separate named entity recognition task was developed and the best systems achieved impressive accuracy what should be underlined here is that these systems were trained for a specific domain and a particular language typically making use of handcoded rules taggers parsers and semantic lexiconsindeed most named entity recognizers that have been published either use tagged text perform syntactical and morphological analysis or use semantic information for contextual clueseven the systems that do not make use of extensive knowledge about a particular language such as nominator still typically use large data files containing lists of names exceptions personal and organizational identifiersour aim has been to build a maximally languageindependent system for both namedentity identification and classification using minimal information about the source languagethe applicability of aistyle algorithms and supervised methods is limited in the multilingual case because of the cost of knowledge databases and manually annotated corporatherefore a much more suitable approach is to consider an themstyle bootstrapping algorithmin terms of world knowledge the simplest and most relevant resource for this task is a database of known namesfor each entity class to be recognized and tagged it is assumed that the user can provide a short list of unambiguous examples of course the more examples provided the better the results but what we try to prove is that even with minimal knowledge good results can be achievedadditionally some basic particularities of the language should be known capitalization allowable word separators and a few frequent exceptions although such information can be utilised if present it is not required and no other assumptions are made in the general modelthe algorithm relies on both word internal and contextual clues as relatively independent evidence sources that drive the bootstrapping algorithmthe first category refers to the morphological structure of the word and makes use of the paradigm that for certain classes of entities some prefixes and suffixes are good indicatorsfor example knowing that quotmariaquot quotmarinelaquot and quotmaricicaquot are feminine first names in romanian the same classification may be a good guess for quotmarianaquot based on common prefixsuffixes are typically even more informative for example quotescuquot is an almost perfect indicator of a last name in romanian the same applies to quotwskiquot in polish quotovicquot and quotivicquot in serbocroatian quotsonquot in english etcsuch morphological information is automatically learned during 
bootstrappingcontextual patterns are also clearly crucial to named entity identification and classification especially for names that do not follow a typical morphological pattern for their word class are of foreign origin or polysemous clearly in many cases the context for only one occurrence of a new word and its morphological information is not enough to make a decisionbut as noted in katz a newly introduced entity will be repeated quotif not for breaking the monotonous effect of pronoun use then for emphasis and clarityquotmoreover he claims that the number of instances of the new entity is not associated with the document length but with the importance of the entity with regard to the subjectdiscoursewe will use this property in conjunction with the one sense per discourse tendency noted by gale church and yarowsky who showed that words strongly tend to exhibit only one sense in a documentdiscourseby gathering contextual information about the entity from each of its occurrences in the text and using morphological clues as well we expect to classify entities more effectively than if they are considered in isolation especially those that are very important with regard to the subjectwhen analyzing large texts a segmentation phase should be considered so that all the instances of a name in a segment have a high probability of belonging to the same class and thus the contextual information for all instances within a segment can be used jointly when making a decisionsince the precision of the segmentation is not critical a language independent segmentation system like the one presented by amithay richmond and smith is adequately reliable for this taskthere are two basic alternatives for handling a textthe first one is to tokenize it and classify the individual tokens or group of tokensthis alternative works for languages that use word separators where a relatively simple set of separator patterns can adequately tokenize the textthe second alternative is to classify entities simply with respect to a given starting and ending character position without knowing the word boundaries but just the probability of a boundary given the neighboring contextsthis second alternative works for languages like chinese where no separators between the words are typically usedsince for the first class of languages we can define a priori probabilities for boundaries that will match the actual separators this second approach represents a generalization of the one using tokenized texthowever the first method in which the text is tokenized presents the advantage that statistics for both tokens and types can be kept and as the results show the statistics for types seem to be more reliable than those for tokensusing the second method there is no single definition of quottypequot given that there are multiple possible boundaries for each token instance but there are ways to gather statistics such as considering what we may call quotprobable typesquot according to the boundary probabilities or keeping statistics on sistrings some other advantages and disadvantages of the two methods will be discussed belowbefore describing the algorithm we will present a brief overview of some of its goals three important concepts are used in our model 21 trie structures are used for both morphological and contextual information tries provide an effective efficient and flexible data structure for storing both contextual and morphological patterns and statisticsfirst they are very compact representationssecond they support a natural 
hierarchical smoothing procedure for distributional class statisticswe consider characterbased tries in which each node contains a probability distribution the distribution stored at each node contain the probability of each name class given the history ending at that nodeeach distribution also has two standard classes named quotquestionablequot and quotnonentityquotto simplify the notations we will refer to a start and end point bounded portion of text being analyzed as a tokentwo tries are used for context and two for internal morphological patterns of tokensfigure 1 shows an example of a morphological prefix trie which stores the characters of tokens from left to right from given starting points suffix tries have equivalent structure but reversed directionthe left and right context tries have the same structure as well but the list of links refers now to the tokens which have the particular context represented by the path from the root to the current nodefor right context the letters are introduced in the trie in normal order for left context they are considered in the reversed order similarly nodes of the context tries contain links to the tokens that occurred in the particular contexts defined by the pathstwo bipartite graph structures are created in this way by these linksfor reasons that will be explained later raw counts are kept for the distributionsthe probability of a tokencontext as being in or indicating a class is computed along the whole path from the root to the terminal node of the tokencontextin this way effective smoothing is realized for rare tokens or contextsconsidering a tokencontext formed from characwhere ai e 01 and e ai 1 it is reasonable to expect that smaller lambdas should correspond to smaller indices or even that ai rosia montana judetul alba david botar a intrat in legenda datorita unor intimplari dea dreptul penibile relatate in quotevenimentul zileiquotpractic primul gospodar al celei mai bogate comune in aur din muntii apuseni este mai tot timpul beatcrita drept pentru care la oficierea unei casatorii a sarutat mina mirelui a strins mina miresei si a intocmit certificat de deces in locul celui de casatorierecent andrei paunescu fiul poetului a intentionat sa achizitioneze gospodaria unei bucurestence care se stabilise de o vreme in rosia montana la primarie ins turmentatul primar 1a trimis pe fiul lui adrian paunescu sai cumpere ceva de baut pentru a se putea concentra indeajuns asupra hirtiilor tranzactiei imobiliarethere are two ways to start this stage either by tokenizing the text or considering it in raw formwhen tokenization is used each token is inserted in the two morphological tries one that keeps the letters of the tokens in the normal order another that keeps the letter in the reverse orderfor each letter on the path the raw distributions are changed by adding the a priori probability of the token belonging to each class for example in the case of indoeuropean languages if the token starts with an uppercase letter we add 1 full count to the quotquestionablequot sum as this entity is initially fully ambiguousif the token starts with lowercase in this case we add the bulk of the probability mass 5 to quotnonentityquot and the remainder to quotquestionablequot other languagespecific orthographic clues could potentially affect this initial probability mass assignmentwhen no tokenization is applied we have to consider possible starting and ending pointstherefore the strings introduced in the prefix morphological trie and the ones introduced in the 
suffix trie may differthe left context of each token is introduced letters in reverse order in the left context trie with pointers to the token in the morphlogical prefix trie the right context of each token is introduced in normal order in the right context trie keeping pointers to the token in the suffix triethe distributions along the paths are modified according to the a priori distribution of the targeted tokenthis stage is the core bootstrapping phase of the algorithmin essence as contextual models become better estimated they identify additional named entities with increasing confidence allowing reestimation and improvement of the internal morphological modelsthe additional training data that this yields allows the contextual models to be augmented and reestimated and the cycle continues until convergenceone approach to this bootstrapping process is to use a standard continuous them family of algorithms the proposed approach outlined below is a discrete variant that is much less computationally intensive and has the advantage of distinguishing between unknown probability distributions and those which are simply evenly distributedthe approach is conservative in that it only utilizes the class estimations for newly classified data in the retraining process if the class probability passes a confidence threshold as defined belowthe concept of confidence threshold can be captured through the following definitions of dominant and semidominantlet us consider a discrete finite probability distribution p we say that p has a dominant if there is an i in 1n such that pi 05 or in other words if we say that p has an asemidominant with respect to an event k where c 1 if it does not have k as dominant and there exist i in 1n such that a few comments about these definitions are necessary it can be easily observed that not every distribution has a dominant even though it has a maximum valuethe second definition of asemidominant makes sense if we consider a particular event k that is not relevant by removing this event and normalizing the rest of the values we obtain a new distribution having an adominantthe core of stage 2 is the bootstrapping procedurethe known names are inserted sequentially into the morphological tries modifying the probability distributions of the nodes on the paths accordingly if the new distribution in one of the nodes on the path of a known token gains a dominant then the effect of this change is propagated by reestimating other node distributions given this changeeach distribution on the context paths in which that token occurred in the text is modified by subtracting from the quotquestionablequot mass a quantity proportional to the number of times the respective token was found in that context and adding it to the dominantposition massfor the newly obtained distributions that gained a dominant in the context trie the bootstrapping procedure is called for all tokens that occurred in that context and so on recursivelyhere it is very important that we consider raw distributions and not normalize themfor example if word quotmarianaquot occurs x times with the right context quotmergequot and the distribution for quotniarianaquot has now been identified with the dominant quotfirst namequot then x units from the quotquestionablequot mass can be moved to quotfirst namequot mass along the path of quotmergequot in the right context trieif semidominants are used instead of dominants then we have to account for the fact that the semidominants may change over time so the probability 
mass must be moved either from quotquestionablequot position or previous semidominant position if a semidominant state has been reached beforeit may be easily observed that stage 2 has a sequential characteristic because the updating is done after reading each name incrementallywhen using dominants the order does not affect the process because of the fact that once a dominant state is reached it cannot change to another dominant state in the future in the case of semidominants the data ordering in the training file does influence the learning procedurethe more conservative strategy of using dominants rather then semidominants has on the other hand the disadvantage of cancelling or postponing the utilisation of many wordsfor example if both quotquestionablequot and quotfirst namequot have 49 of the mass then subsequent reestimation iterations are not initiated for this data even though the alternative name classes are very unlikelyconsidering those advantages and disadvantages we used the less conservative semidominant approach as the default modelin this stage the text is reanalysed sequentially and for each token a decision is madehere the bipartite structure of the two pairs of tries has a central role during stage 2 the left context and prefix tries interact with each other and so do the right context and suffix tries but there is no interference between the two pairs during the bootstrapping stagetherefore for each instance of a token in the text four classifiers are available a different one given by each triethe decision with regard to the presence of an entity and its classification is made by combining themcomparative trials indicate that higher performance is achieved by initially having the classifiers voteresults indicate that the most accurate classifications are obtained from the two independently bootstrapped morphological tries if the two agree then the corresponding class is returnedotherwise agreement is tested between other paired independent classifiers if no agreement is found then a simple linear combination of all four is considered for the decisionthis approach yields 6 higher fmeasure than the simple interpolation of classifiers for the default parametersstage 4 the newly classified tokens and contexts are saved for future use as potential seed data in subsequent namedentity classification on new textsthe basic measures for evaluation of this work are precision and recallprecision represents the percentage of the entities that the system recognized which are actually correctrecall represents the percentage of the correct named entities in the text that the system identifiedboth measures are incorporated in the fmeasure f 2prap rit would be inappropriate to compare the results of a language independent system with the ones designed for only one languageas day and palmer observed quotthe fact that existing systems perform extremely well on mixedcase english newswire corpora is certainly related to the years of research and organized evaluations on this specific task in this languageit is not clear what resources are required to adapt systems to new languagesquot it is important to mention that the fmeasure for the human performance on this task is about 96 our experiments on romanian text were consistent with this figurein order to obtain a baseline performance for this method we considered the performance of a system that tags only the examples found in one of the the original training wordlistswe consider this to be a plausible lower bound measure if the training 
words have not been selected from the test textday and palmer showed that a baseline fmeasure score for the enamex task varies from 212 for english to 732 for chineseit is important to mention that when they computed these figures they trained their language independent system on large annotated corpora the fact that the precision obtained by the baseline approach is not 100 indicates that the seed training names for each class are not completely unambiguous and that a certain degree of ambiguity is generally unavoidable another significant performance measure is forced classification accuracy where the entities have been previously identified in the text and the only task is selecting their name classto obtain baseline performance for this measure we considered a system that uses the original training word labels if there is an exact match with all other entities labeled with a default quotlast namequot tag the most common class in all languages studiedthe baseline accuracy was measured at 6118 for romaniansystem accuracies range from 7712 to 9176 on this same datathe results shown in table 3 were obtained for a romanian text having 12320 words from which 438 were entities using a training seed set of 300 names the baseline measures and default system are as described abovein configuration the based parameters of the system have been optimized for romanian using greedy search on an independent development test set yielding a slight increase in fmeasureconfiguration used the default parameters but the more conservative quotdominantquot criterion was utilized clearly favoring precision at the expense of recallconfiguration which is relevant for the enamex task represents the performance of the system when classes quotfirst namequot and quotlast namequot are combined into quotpersonquot configuration shows contrastive performance when using standard continuous them smoothing on the same data and data structurestable 4 shows system performance for 5 fairly diverse languages romanian english greek turkish and hindithe initial 4 rows provide some basic details on the training data available for each languagenote that when annotators were generating the lists of 150300 seed words they had access to a development test from which to extract samples but they were not constrained to this text and could add additional ones from memoryfurthermore it was quite unpredictable how many contexts would actually be found for a given word in the development texts as some appeared several times and many did not appear at allthus the total number of contextual matches for the seed words was quite variable from 113249 and difficult to controlit is also the case that not all additional contexts bring comparable new benefit as many secondary instances of the same word in a given related document collection tend to have similar or identical surrounding contexts to the first instance so in general it is quite difficult to control the actual training information content just by the number of raw seed word types that are annotatedfor each of these languages 5 levels of information sources are evaluatedthe baseline case is as previously described for table 3the contextonly case restricts system training to the two contextual tries ignoring the prefixsuffix morphological informationthe morphology only case in contrast restricts the system to only the two morphological modelsthese can be estimated from the 3 training wordlists but without an independent source of information via which bootstrapping can iterate there is 
no available path by which these models can learn the behaviour of previously unseen affixes and conquer new territorythus the model is entirely static on just the initial training datafor the same reasons the context only model is also staticin this case there is a possible bootstrapping path using alternating left and right context to expand coverage to new contexts but this tends to be not robust and was not pursuedinterestingly recall for morphology only is typically much higher than in the context only casethe reason for this is that the morphology models are full hierarchically smoothed character tries rather than word token tries and hence have much denser initial statistics for small training data sets proving greater partial matching potential for previously unseen wordsin an effort to test the contribution of the full iterative boostrapping the quotcontext and morphology onlyquot results are based on the combination of all 4 tries but without any bootstrappingthus they are trained exclusively on the 150300 training examplesperformance for the combined sources is in all cases greater than for the morphology or context source used alonefurthermore the full iterative bootstrapping clearly yields substantial improvement over the static models almost exclusively in the form of increased recall crosslanguage analysis yields further insightfirst recall is much higher for the 4 languages in which case is explicitly marked and is a clue for named entity identification than for a language like hindi where there are no case distinctions and hence any word could potentially be a named entitya language such as german would be roughly in the middle where lowercase words have low probability as named entities but capitalized words are highly ambiguous between common and proper nounsbecause approximately 96 of words in the hindi text are not named entities without additional orthographic clues the prior probability for quotnonentityquot is so strong that the morphological or contextual evidence in favor of one of the named entity classes must be very compelling to overcome this biaswith only 50 training words per context this is difficult and in the face of such strong odds against any of the named entity classes the conservative nature of the learning algorithm only braves an entity label for 38 more words than the baseline modelin contrast its performance on entity classification rather than identification measured by forced choice accuracy in labelling the given entities is comparable to all the other languages with 79 accuracy relative to the 62 baseline2 figure 3 demonstrates that the performance of the algorithm is highly sensitive to the size of the training databased on romanian the first graph shows that as the size of the raw text for bootstrapping increases fmeasure performance increases roughly logrithmically due almost exclusively to increases in precisionthis is a very encouraging trend as the web and other online sources provides virtually unlimited raw text in most major languages and substantial online text for virtually all languagesso extrapolating far beyond the 10k word level is relatively low cost and very feasiblethe second graph shows that fmeasure performance also increases roughly logrithmically with the total length of the seed wordlists in the range 40300this increase is due entirely to improved recall which doubles over this small rangethis trend suggests that there is considerable benefit to be gained by additional human annotation or seed wordlist acquisition from 
existing online lexiconshowever relative to case of raw text acquisition such additional annotations tend to be much costlier and there is a clear costbenefit tradeoff to further investment in annotationin summary however these evaluation results are satisfying in that they show clear and consistent trends across several diverse languages show clear trends for improvement as training resources grow and show that comparable classification results can be achieved on this diversity of languagesfor future work natural next steps include incorporating a language independent word segmentation phase like the one proposed by amitay richmond and smith to improve the performance on large textsdifferent statistics can be precomputed for different languages and language families and stored in external filesfor example the a priori probability of a named entity given the set of characteristics of its representation in the text such as position capitalization and relative position of other entities a further step is the implementation of a supervised active learning system based on the present algorithm in which the most relevant words for future disambiguation is presented to the user to be classified and the feedback used for bootstrappingthe selection of candidate examples for tagging would be based on both the unassigned probability mass and the frequency of occurrenceactive learning strategies are a natural path for efficiently selecting contexts for human annotationthis paper has presented an algorithm for the minimally supervised learning of named entity recognizers given short name lists as seed data the algorithm uses hierarchically smoothed trie structures for modeling morphological and contextual probabilities effectively in a language independent framework overcoming the need for fixed token boundaries or history lengthsthe combination of relatively independent morphological and contextual evidence sources in an iterative bootstrapping framework converges upon a successful named entity recognizer achieving a competitive 705754 fmeasure when applied to romanian textfixed kway classification accuracy on given entities ranges between 7379 on 5 diverse languages for a difficult firstnamelastnameplace partition and approaches 92 accuracy for the simpler personplace discriminationthese results were achieved using only unannotated training texts with absolutely no required languagespecific information tokenizers or other tools and requiring no more than 15 minutes total human effort in training the observed robust and consistent performance and very rapid low cost rampup across 5 quite different languages shows the potential for further successful and diverse applications of this work to new languages and domainsthe authors would like to thank eric brill radu florian shankar kumar murat saraclar dimitra vergyri and jun wu for both their feedback on this work and their help in annotating the namedentity data for the languages studied
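A minimal sketch of the hierarchically smoothed character trie underlying both the morphological and the contextual models of the paper above: each node stores raw class counts, and the class distribution for a string is a lambda-weighted mixture of the node distributions along its root-to-leaf path. The lambda schedule used here (weights growing toward the deeper, more specific nodes and summing to one) is an assumed instantiation, since the exact weights are not fully legible in the text; the Romanian first-name example simply mirrors the maria/marinela/maricica/mariana discussion.

from collections import defaultdict

class TrieNode:
    def __init__(self):
        self.children = {}
        self.counts = defaultdict(float)   # raw class counts, kept unnormalised


class SmoothedTrie:
    """Character trie with a class-count distribution at every node.
    P(class | string) is estimated along the whole root-to-leaf path, so
    rare strings fall back on the statistics of their shorter prefixes."""

    def __init__(self):
        self.root = TrieNode()

    def add(self, string, class_weights):
        """Insert a token (or reversed token / context, depending on which of
        the four tries this is) with an a-priori or bootstrapped class mass."""
        node = self.root
        for ch in string:
            node = node.children.setdefault(ch, TrieNode())
            for cls, w in class_weights.items():
                node.counts[cls] += w

    def distribution(self, string):
        # Collect the normalised distribution at every node on the path.
        path = []
        node = self.root
        for ch in string:
            if ch not in node.children:
                break
            node = node.children[ch]
            total = sum(node.counts.values())
            path.append({c: node.counts[c] / total for c in node.counts})
        if not path:
            return {"questionable": 1.0}
        # Lambda weights grow toward the deeper nodes and sum to one --
        # an assumed schedule, not the paper's exact one.
        lambdas = [i + 1 for i in range(len(path))]
        z = float(sum(lambdas))
        mix = defaultdict(float)
        for lam, dist in zip(lambdas, path):
            for cls, p in dist.items():
                mix[cls] += (lam / z) * p
        return dict(mix)


if __name__ == "__main__":
    trie = SmoothedTrie()
    for name in ["maria", "marinela", "maricica"]:
        trie.add(name, {"first name": 1.0})
    trie.add("mariana", {"questionable": 1.0})   # unlabelled occurrence
    # Path smoothing gives "mariana" substantial first-name mass from the
    # shared prefix, even though its own terminal node is purely questionable.
    print(trie.distribution("mariana"))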
W99-0612
language independent named entity recognition combining morphological and contextual evidenceidentifying and classifying personal geographic institutional or other names in a text is an important task for numerous applicationsthis paper describes and evaluates a languageindependent bootstrapping algorithm based on iterative learning and reestimation of contextual and morphological patterns captured in hierarchically smoothed trie modelsthe algorithm learns from unannotated text and achieves competitive performance when trained on a very short labelled name list with no other required languagespecific information tokenizers or toolswe consider one sense per discourse phenomenon the tendency of terms to have a fixed meaning within a single documentwe take a characterlevel approach to named entity recognition using prefix and suffix triesthe bootstrapping stage uses the initial or current entity assignments to estimate the class conditional distributions for both entities and contexts along their trie paths and then reestimates the distributions of the contextsentitycandidates to which they are linked recursively until all accessible nodes are reached
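The confidence tests that gate the bootstrapping in the paper above (the dominant and alpha-semidominant checks over a node's raw class counts) and the F-measure used in its evaluation can be sketched as follows. The inequality behind the semidominant definition is partly garbled in the text, so the version below is one plausible reading (set aside the irrelevant questionable mass, renormalise, and require some class to outweigh the remaining mass by a factor alpha), not necessarily the paper's exact formula; the alpha value and the example counts are invented.

def normalise(counts):
    total = float(sum(counts.values()))
    return {c: v / total for c, v in counts.items()} if total else {}


def dominant(counts):
    """Return the class holding more than half of the probability mass,
    or None if no class does (the distribution has no dominant)."""
    for cls, p in normalise(counts).items():
        if p > 0.5:
            return cls
    return None


def semidominant(counts, irrelevant="questionable", alpha=2.0):
    """A plausible reading of the alpha-semidominant test: ignore the
    irrelevant event, renormalise, and ask whether some class now beats
    the rest of the mass by a factor of alpha. Used as the less
    conservative trigger for propagating bootstrapping updates."""
    if dominant(counts) == irrelevant:
        return None
    rest = {c: v for c, v in counts.items() if c != irrelevant}
    for cls, p in normalise(rest).items():
        if p > alpha * (1.0 - p):
            return cls
    return None


def f_measure(precision, recall):
    """F = 2PR / (P + R), as used in the evaluation section."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


if __name__ == "__main__":
    node = {"first name": 6.0, "last name": 1.0, "questionable": 3.0}
    print(dominant(node))                   # 'first name' (0.6 > 0.5)
    print(semidominant(node))               # 'first name' once questionable mass is ignored
    print(round(f_measure(0.80, 0.62), 3))  # 0.699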
unsupervised models for named entity classification this paper discusses the use of unlabeled examples for the problem of named entity classification a large number of rules is needed for coverage of the domain suggesting that a fairly large number of labeled examples should be required to train a classi however we show that the use of data can reduce the requirements for supervision to just 7 simple quotseedquot rules the approach gains leverage from natural redundancy in the data for many namedentity instances both the spelling of the name and the context in which it appears are sufficient to determine its type we present two algorithms the first method uses a similar algorithm to that of with modifications motivated by the second algorithm extends ideas from boosting algorithms designed for supervised learning tasks to the framework suggested by many statistical or machinelearning approaches for natural language problems require a relatively large amount of supervision in the form of labeled training examplesrecent results have suggested that unlabeled data can be used quite profitably in reducing the need for supervisionthis paper discusses the use of unlabeled examples for the problem of named entity classificationthe task is to learn a function from an input string to its type which we will assume to be one of the categories person organization or locationfor example a good classifier would identify mrs frank as a person steptoe johnson as a company and honduras as a locationthe approach uses both spelling and contextual rulesa spelling rule might be a simple lookup for the string or a rule that looks at words within a string a contextual rule considers words surrounding the string in the sentence in which it appears the task can be considered to be one component of the muc named entity task supervised methods have been applied quite successfully to the full muc namedentity task at first glance the problem seems quite complex a large number of rules is needed to cover the domain suggesting that a large number of labeled examples is required to train an accurate classifierbut we will show that the use of unlabeled data can drastically reduce the need for supervisiongiven around 90000 unlabeled examples the methods described in this paper classify names with over 91 accuracythe only supervision is in the form of 7 seed rules the key to the methods we describe is redundancy in the unlabeled datain many cases inspection of either the spelling or context alone is sufficient to classify an examplefor example in says mr cooper a vice president of both a spelling feature and a contextual feature are strong indications that mr cooper is of type personeven if an example like this is not labeled it can be interpreted as a quothintquot that mr and president imply the same categorythe unlabeled data gives many such quothintsquot that two features should predict the same label and these hints turn out to be surprisingly useful when building a classifierwe present two algorithmsthe first method builds on results from and describes an algorithm for wordsense disambiguation that exploits redundancy in contextual features and gives impressive performanceunfortunately yarowsky method is not well understood from a theoretical viewpoint we would like to formalize the notion of redundancy in unlabeled data and set up the learning task as optimization of some appropriate objective function offer a promising formulation of redundancy also prove some results about how the use of unlabeled examples can help 
classification and suggest an objective function when training with unlabeled examplesour first algorithm is similar to yarowsky but with some important modifications motivated by the algorithm can be viewed as heuristically optimizing an objective function suggested by empirically it is shown to be quite successful in optimizing this criterionthe second algorithm builds on a boosting algorithm called adaboost the adaboost algorithm was developed for supervised learningadaboost finds a weighted combination of simple classifiers where the weights are chosen to minimize a function that bounds the classification error on a set of training examplesroughly speaking the new algorithm presented in this paper performs a similar search but instead minimizes a bound on the number of examples on which two classifiers disagreethe algorithm builds two classifiers iteratively each iteration involves minimization of a continuously differential function which bounds the number of examples on which the two classifiers disagreethere has been additional recent work on inducing lexicons or other knowledge sources from large corpora describes a system for extracting pairs from the world wide web using an approach that bootstraps from an initial seed set of examples describe a method for extracting parts of objects from wholes from a large corpus using handcrafted patterns describes a method for extracting hyponyms from a corpus describe a bootstrapping approach for acquiring nouns in particular categories the approach builds from an initial seed set for a category and is quite similar to the decision list approach described in more recently describe a method they term quotmutual bootstrappingquot for simultaneously constructing a lexicon and contextual extraction patternsthe method shares some characteristics of the decision list algorithm presented in this paper was brought to our attention as we were preparing the final version of this paper971746 sentences of new york times text were parsed using the parser of 1 word sequences that met the following criteria were then extracted as named entity examples whose head is a singular noun for example take says maury cooper a vice president at spin this case maury cooper is extractedit is a sequence of proper nouns within an np its last word cooper is the head of the np and the np has an appositive modifier whose head is a singular noun 2the np is a complement to a preposition which is the head of a ppthis pp modifies another np whose head is a singular nounfor example fraud related to work on a federally funded sewage plant in georgia in this case georgia is extracted the np containing it is a complement to the preposition in the pp headed by in modifies the np a federally funded sewage plant whose head is the singular noun plantin addition to the namedentity string a contextual predictor was also extractedin the appositive case the contextual predictor was the head of the modifying appositive in the second case the contextual predictor was the preposition together with the noun it modifies from here on we will refer to the namedentity string itself as the spelling of the entity and the contextual predicate as the contexthaving found pairs in the parsed data a number of features are extractedthe features are used to represent each example for the learning algorithmin principle a feature could be an arbitrary predicate of the pair for reasons that will become clear features are limited to querying either the spelling or context alonethe following features were used 
fullstringx the full string contains if the spelling contains more than one word this feature applies for any words that the string contains and contains allcapl this feature appears if the spelling is a single word which is all capitals allcap2 this feature appears if the spelling is a single word which is all capitals or full periods and contains at least one period nonalphax appears if the spelling contains any characters other than upper or lower case lettersin this case nonalpha is the string formed by removing all upperlower case letters from the spelling contextx the context for the entitythethe first unsupervised algorithm we describe is based on the decision list method from before describing the unsupervised case we first describe the supervised version of the algorithm input to the learning algorithm n labeled examples of the form y is the label of the ith example xi is a set of mi features x1 xi2 xim associated with the ith exampleeach xii is a member of x where x is a set of possible featuresoutput of the learning algorithm a function hxxy 0 1 where h is an estimate of the conditional probability p of seeing label y given that feature x is presentalternatively h can be thought of as defining a decision list of rules x y ranked by their quotstrengthquot hthe label for a test example with features x is then defined as in this paper we define h as the following function of counts seen in training data count is the number of times feature x is seen with label y in training data count eyey count a is a smoothing parameter and k is the number of possible labelsin this paper k 3 and we set a 01equation 2 is an estimate of the conditional probability of the label given the feature p2 we now introduce a new algorithm for learning from unlabeled examples which we will call dlcotrain the 2 describes the use of more sophisticated smoothing methodsit is not clear how to apply these methods in the unsupervised case as they required crossvalidation techniques for this reason we use the simpler smoothing method shown here input to the unsupervised algorithm is an initial quotseedquot set of rulesin the named entity domain these rules were each of these rules was given a strength of 09999the following algorithm was then used to induce new rules let count be the number of times feature x is seen with some known label in the training datafor each label take the n contextual rules with the highest value of count whose unsmoothed3 strength is above some threshold pmin pmn was fixed at 095 in all experiments in this paperthus at each iteration the method induces at most n x k rules where k is the number of possible labels step 3otherwise label the training data with the combined spellingcontextual decision list then induce a final decision list from the labeled examples where all rules are added to the decision listwe can now compare this algorithm to that of the core of yarowsky algorithm is as follows where h is defined by the formula in equation 2 with counts restricted to training data examples that have been labeled in step 2set the decision list to include all rules whose strength is above some threshold pminthere are two differences between this method and the dlcotrain algorithm spelling and contextual features alternating between labeling and learning with the two types of featuresthus an explicit assumption about the redundancy of the features that either the spelling or context alone should be sufficient to build a classifier has been built into the algorithmto measure the contribution 
of each modification a third intermediate algorithm yarowskycautious was also testedyarowskycautious does not separate the spelling and contextual features but does have a limit on the number of rules added at each stagethe first modification cautiousness is a relatively minor changeit was motivated by the observation that the algorithm added a very large number of rules in the first few iterationstaking only the highest frequency rules is much quotsaferquot as they tend to be very accuratethis intuition is born out by the experimental resultsthe second modification is more important and is discussed in the next sectionan important reason for separating the two types of features is that this opens up the possibility of theoretical analysis of the use of unlabeled examples describe learning in the following situation x x1 x x2 where x1 and x2 correspond to two different quotviewsquot of an examplein the named entity task x1 might be the instance space for the spelling features x2 might be the instance space for the contextual featuresby this assumption each element x e x can also be represented as e x1 x x2thus the method makes the fairly strong assumption that the features can be partitioned into two types such that each type alone is sufficient for classificationnow assume we have n pairs drawn from x1 x x2 where the first m pairs have labels whereas for i m 1n the pairs are unlabeledin a fully supervised setting the task is to learn a function f such that for all i 1m f yzin the cotraining case argue that the task should be to induce functions ii and f2 such that so ii and 12 must correctly classify the labeled examples and must agree with each other on the unlabeled examplesthe key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem give an example that illustrates just how powerful the second constraint can beconsider the case where ixi 1x21 n and n is a quotmediumquot sized number so that it is feasible to collect 0 unlabeled examplesassume that the two classifiers are quotrote learnersquot that is 11 and 12 are defined through lookup tables that list a label for each member of x1 or x2the problem is a binary classification problemthe problem can be represented as a graph with 2n vertices corresponding to the members of x1 and x2each unlabeled pair is represented as an edge between nodes corresponding to x1i and x2i in the graphan edge indicates that the two features must have the same labelgiven a sufficient number of randomly drawn unlabeled examples we will induce two completely connected components that together span the entire grapheach vertex within a connected component must have the same label in the binary classification case we need a single labeled example to identify which component should get which label go on to give pac results for learning in the cotraining casethey also describe an application of cotraining to classifying web pages the method halves the error rate in comparison to a method using the labeled examples alonelimitations of while the assumptions of are useful in developing both theoretical results and an intuition for the problem the assumptions are quite limitedin particular it may not be possible to learn functions fi for i m 1n either because there is some noise in the data or because it is just not realistic to expect to learn perfect classifiers given the features used for representationit may be more realistic to replace the second criteria with a softer one for example suggest the 
alternative alternatively if ii and 12 are probabilistic learners it might make sense to encode the second constraint as one of minimizing some measure of the distance between the distributions given by the two learnersthe question of what soft function to pick and how to design algorithms which optimize it is an open question but appears to be a promising way of looking at the problemthe dlcotrain algorithm can be motivated as being a greedy method of satisfying the above 2 constraintsat each iteration the algorithm increases the number of rules while maintaining a high level of agreement between the spelling and contextual decision listsinspection of the data shows that at n 2500 the two classifiers both give labels on 44281 of the unlabeled examples and give the same label on 9925 of these casesso the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agreein the next section we present an alternative approach that builds two classifiers while attempting to satisfy the above constraints as much as possiblethe algorithm called coboost has the advantage of being more general than the decisionlist learning alinput x e 2x yi 1 initialize di 1mfort 1tthis section describes an algorithm based on boosting algorithms which were previously developed for supervised machine learning problemswe first give a brief overview of boosting algorithmswe then discuss how we adapt and generalize a boosting algorithm adaboost to the problem of named entity classificationthe new algorithm which we call coboost uses labeled and unlabeled data and builds two classifiers in parallel probably approximately correct modelthis section describes adaboost which is the basis for the coboost algorithmadaboost was first introduced in gave a generalization of adaboost which we will use in this paperfor a description of the application of adaboost to various nlp problems see the paper by abney schapire and singer in this volumethe input to adaboost is a set of training examples each xt e 2x is the set of features constituting the ith examplefor the moment we will assume that there are only two possible labels each y is in 1 1adaboost is given access to a weak learning algorithm which accepts as input the training examples along with a distribution over the instancesthe distribution specifies the relative weight or importance of each example typically the weak learner will attempt to minimize the weighted error on the training set where the distribution specifies the weightsthe weak learner for twoclass problems computes a weak hypothesis h from the input space into the reals where the sign4 of h is interpreted as the predicted label and the magnitude i hi is the confidence in the prediction large numbers for i hi indicate high confidence in the prediction and numbers close to zero indicate low confidencethe weak hypothesis can abstain from predicting the label of an instance x by setting h 0the final strong hypothesis denoted 1 is then the sign of a weighted sum of the weak hypotheses 1 sign where the weights at are determined during the run of the algorithm as we describe belowpseudocode describing the generalized boosting algorithm of schapire and singer is given in figure 1note that zt is a normalization constant that ensures the distribution dti sums to 1 it is a function of the weak hypothesis ht and the weight for that hypothesis at chosen at the tth roundthe normalization factor plays an important role in the adaboost 
algorithmschapire and singer show that the training error is bounded above by thus in order to greedily minimize an upper bound on training error on each iteration we should search for the weak hypothesis ht and the weight at that minimize zin our implementation we make perhaps the simplest choice of weak hypothesiseach ht is a function that predicts a label on examples containing a particular feature xt while abstaining on other examples the prediction of the strong hypothesis can then be written as we now briefly describe how to choose ht and at at each iterationour derivation is slightly different from the one presented in as we restrict at to be positivezt can be written as follows following the derivation of schapire and singer providing that w w_ equ is minimized by setting since a feature may be present in only a few examples w_ can be in practice very small or even 0 leading to extreme confidence valuesto prevent this we quotsmoothquot the confidence by adding a small value e to both w and w_ giving at plugging the value of at from equ and ht into equ gives in order to minimize zt at each iteration the final algorithm should choose the weak hypothesis which has values for w and w_ that minimize equ with w w_we now describe the coboost algorithm for the named entity problemfollowing the convention presented in earlier sections we assume that each example is an instance pair of the from where xj e 2x3 j e 2in the namedentity problem each example is a pairthe first m pairs have labels yi whereas for i m 1 n the pairs are unlabeledwe make the assumption that for each example both xi and x22 alone are sufficient to determine the label yithe learning task is to find two classifiers 2x1 1 1 12 2x2 1 1 such that f2 yt for examples i 1 m and f1 f2 as often as possible on examples i m 1 n to achieve this goal we extend the auxiliary function that bounds the training error to be defined over unlabeled as well as labeled instancesdenote by g3 et crithl j e 12 the unthresholded stronghypothesis signwe define the following function if zco is small then it follows that the two classifiers must have a low error rate on the labeled examples and that they also must give the same label on a large number of unlabeled instancesto see this note thai the first two terms in the above equation correspond to the function that adaboost attempts to minimize in the standard supervised setting with one term for each classifierthe two new terms force the two classifiers to agree as much as possible on the unlabeled examplesput another way the minimum of equ is at 0 when 1 vi sign sign 2 ig3l oo and 3 sign yi for i 1 m in fact zco provides a bound on the sum of the classification error of the labeled examples and the number of disagreements between the two classifiers on the unlabeled examplesformally let el be the number of classification errors of the first learner on the training data and let eco be the number of unlabeled examples on which the two classifiers disagreethen it can be verified that we can now derive the coboost algorithm as a means of minimizing zcothe algorithm builds two classifiers in parallel from labeled and unlabeled dataas in boosting the algorithm works in roundseach round is composed of two stages each stage updates one of the classifiers while keeping the other classifier fixeddenote the unthresholded classifiers after t 1 rounds by git1 and assume that it is the turn for the first classifier to be updated while the second one is kept fixedwe first define quotpseudolabelsquotyt as 
follows: yi,t = yi for the labeled examples, and yi,t = sign(g2,t-1(x2,i)) for the unlabeled examples, i.e. each unlabeled example receives the label currently assigned by the second (fixed) classifier. a high level of agreement is maintained between them; the test accuracy more or less asymptotes. unlabeled examples in the named-entity classification problem can reduce the need for supervision to a handful of seed rules. in addition to a heuristic based on decision list learning, we also presented a boosting-like framework that builds on ideas from cotraining. the method uses a "soft" measure of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function. we are currently exploring other methods that employ similar ideas and their formal properties. future work should also extend the approach to build a complete named entity extractor: a method that pulls proper names from text and then classifies them. the contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. the problem of "noise" items that do not fall into any of the three categories also needs to be addressed
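as a rough illustration of the decision list induction of equation 2 and the alternating dl-cotrain procedure described earlier, here is a minimal python sketch. it is not the authors' implementation: the label names, the fixed number of rounds, and all function and variable names are assumptions made for illustration (the procedure described above instead iterates until no further rules can be added, and the seed spelling rules are given strength 0.9999).

```python
from collections import defaultdict

ALPHA, K = 0.1, 3                                  # smoothing constant and number of labels, as in the text
LABELS = ["person", "organization", "location"]    # assumed label names, for illustration only

def classify(features, rules):
    """Apply a decision list: return the label of the strongest rule that fires, else None.
    `rules` maps (feature, label) -> strength; `features` is a set of feature strings."""
    fired = [(strength, label) for (feat, label), strength in rules.items() if feat in features]
    return max(fired)[1] if fired else None

def add_top_rules(rules, labeled, n, p_min=0.95, alpha=ALPHA, k=K):
    """Per label, add up to n candidate rules whose unsmoothed strength
    count(x, y) / count(x) exceeds p_min; store the smoothed strength of equation (2)."""
    count_xy, count_x = defaultdict(int), defaultdict(int)
    for features, label in labeled:
        for feat in features:
            count_xy[(feat, label)] += 1
            count_x[feat] += 1
    for label in LABELS:
        candidates = [(c, feat) for (feat, lab), c in count_xy.items()
                      if lab == label and c / count_x[feat] > p_min]
        for c, feat in sorted(candidates, reverse=True)[:n]:
            rules.setdefault((feat, label),
                             (count_xy[(feat, label)] + alpha) / (count_x[feat] + k * alpha))

def dl_cotrain(unlabeled, seed_spelling_rules, rounds=10, n_start=5, growth=5):
    """Simplified DL-CoTrain loop: alternately label the data with one view's decision
    list and induce rules for the other view.  `unlabeled` is a list of
    (spelling_features, context_features) pairs."""
    spelling_rules = dict(seed_spelling_rules)
    context_rules = {}
    n = n_start
    for _ in range(rounds):
        # label with spelling rules, induce contextual rules
        labeled = [(ctx, classify(sp, spelling_rules)) for sp, ctx in unlabeled]
        add_top_rules(context_rules, [(f, y) for f, y in labeled if y is not None], n)
        # label with contextual rules, induce spelling rules
        labeled = [(sp, classify(ctx, context_rules)) for sp, ctx in unlabeled]
        add_top_rules(spelling_rules, [(f, y) for f, y in labeled if y is not None], n)
        n += growth
    return spelling_rules, context_rules
```

a hypothetical seed rule here would be an entry such as ('full-string=new_york', 'location') mapped to 0.9999; the concrete seed set used in the experiments is not reproduced in the text above, so the example is purely illustrative.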
W99-0613
unsupervised models for named entity classification. this paper discusses the use of unlabeled examples for the problem of named entity classification. a large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. however, we show that the use of unlabeled data can reduce the requirements for supervision to just 7 simple seed rules. the approach gains leverage from natural redundancy in the data: for many named-entity instances, both the spelling of the name and the context in which it appears are sufficient to determine its type. we present two algorithms. the first method uses an algorithm similar to yarowsky's, with modifications motivated by the cotraining setting; the second algorithm extends ideas from boosting algorithms designed for supervised learning tasks to the cotraining framework. we extend the use of classifiers that have mutual constraints by adding terms to adaboost which force the classifiers to agree
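the confidence-rated, abstaining weak hypotheses and the smoothed choice of alpha_t = 1/2 ln((W+ + e)/(W- + e)) described in the text above can be sketched as follows. this is a minimal sketch under stated assumptions, not the published implementation: it assumes binary labels (+1/-1), examples represented as sets of feature strings, and an illustrative epsilon value; all names are invented for the example.

```python
import math
from collections import defaultdict

EPS = 1e-4   # smoothing value added to W+ and W-; the concrete value here is an assumption

def best_weak_hypothesis(examples, labels, dist):
    """Choose the feature x and confidence alpha minimizing Z_t for an abstaining,
    confidence-rated weak hypothesis: h(x_i) = alpha if the feature is present, else 0.
    examples: list of feature sets; labels: list of +1/-1; dist: weights summing to 1."""
    w_plus, w_minus = defaultdict(float), defaultdict(float)
    for feats, y, d in zip(examples, labels, dist):
        for x in feats:
            (w_plus if y == 1 else w_minus)[x] += d
    best_x, best_alpha, best_z = None, 0.0, float("inf")
    for x in set(w_plus) | set(w_minus):
        wp, wm = w_plus[x], w_minus[x]
        w0 = 1.0 - wp - wm                              # weight of examples on which h abstains
        alpha = 0.5 * math.log((wp + EPS) / (wm + EPS))
        z = w0 + wp * math.exp(-alpha) + wm * math.exp(alpha)
        if z < best_z:
            best_x, best_alpha, best_z = x, alpha, z
    return best_x, best_alpha

def boost_round(examples, labels, dist, scores):
    """One round: add the chosen weak hypothesis to the running scores g(x_i) and
    reweight each example by exp(-y_i * h(x_i)) / Z_t."""
    x, alpha = best_weak_hypothesis(examples, labels, dist)
    new_dist = [d * math.exp(-y * (alpha if x in feats else 0.0))
                for feats, y, d in zip(examples, labels, dist)]
    z = sum(new_dist)
    new_dist = [d / z for d in new_dist]
    new_scores = [g + (alpha if x in feats else 0.0) for g, feats in zip(scores, examples)]
    return new_dist, new_scores
```

in a coboost-style round, the labels of the unlabeled examples would first be replaced with pseudo-labels sign(g) taken from the other classifier's current scores, so that minimizing Z also penalizes disagreement on unlabeled data, as described in the text.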
exploiting diversity in natural language processing combining parsers three stateoftheart statistical parsers are combined to produce more accurate parses as well as new bounds on achievable treebank parsing accuracy two general approaches are presented and two combination techniques are described for each approach both parametric and nonparametric models are explored the resulting parsers surpass the best previously published performance results for the penn treebank the natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problemsthe machine learning community has been in a similar situation and has studied the combination of multiple classifiers their theoretical finding is simply stated classification error rate decreases toward the noise rate exponentially in the number of independent accurate classifiersthe theory has also been validated empiricallyrecently combination techniques have been investigated for part of speech tagging with positive results in both cases the investigators were able to achieve significant improvements over the previous best tagging resultssimilar advances have been made in machine translation speech recognition and named entity recognition the corpusbased statistical parsing community has many fast and accurate automated parsing systems including systems produced by collins charniak and ratnaparkhi these three parsers have given the best reported parsing results on the penn treebank wall street journal corpus we used these three parsers to explore parser combination techniqueswe are interested in combining the substructures of the input parses to produce a better parsewe call this approach parse hybridizationthe substructures that are unanimously hypothesized by the parsers should be preserved after combination and the combination technique should not foolishly create substructures for which there is no supporting evidencethese two principles guide experimentation in this framework and together with the evaluation measures help us decide which specific type of substructure to combinethe precision and recall measures used in evaluating treebank parsing treat each constituent as a separate entity a minimal unit of correctnesssince our goal is to perform well under these measures we will similarly treat constituents as the minimal substructures for combinationone hybridization strategy is to let the parsers vote on constituents membership in the hypothesized setif enough parsers suggest that a particular constituent belongs in the parse we include itwe call this technique constituent votingwe include a constituent in our hypothesized parse if it appears in the output of a majority of the parsersin our particular case the majority requires the agreement of only two parsers because we have only threethis technique has the advantage of requiring no training but it has the disadvantage of treating all parsers equally even though they may have differing accuracies or may specialize in modeling different phenomenaanother technique for parse hybridization is to use a naïve bayes classifier to determine which constituents to include in the parsethe development of a naïve bayes classifier involves learning how much each parser should be trusted for the decisions it makesour original hope in combining these parsers is that their errors are independently distributedthis is equivalent to the assumption used in probability estimation for naïve bayes classifiers namely that the attribute 
values are conditionally independent when the target value is givenfor this reason naïve bayes classifiers are wellmatched to this problemin equations 1 through 3 we develop the model for constructing our parse using naïve bayes classificationc is the union of the sets of constituents suggested by the parsers r is a binary function returning t precisely when the constituent c e c should be included in the hypothesismi is a binary function returning t when parser i suggests constituent c should be in the parsethe hypothesized parse is then the set of constituents that are likely to be in the parse according to this modelthe estimation of the probabilities in the model is carried out as shown in equation 4here no counts the number of hypothesized constituents in the development set that match the binary predicate specified as an argumentunder certain conditions the constituent voting and naïve bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing bracketsthere are simply not enough votes remaining to allow any of the crossing structures to enter the hypothesized constituent setlemma if the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituentsil1proof assume a pair of crossing constituents appears in the output of the constituent voting technique using k parserscall the crossing constituents a and ba receives a votes and b receives b voteseach of the constituents must have received at least 1 votes from the k parsers so a i1 and 2 2k1 b ri5111let s a bnone of the parsers produce parses with crossing brackets so none of them votes for both of the assumed constituentshence s 2n11 k a contradiction similarly when the naïve bayes classifier is configured such that the constituents require estimated probabilities strictly larger than 05 to be accepted there is not enough probability mass remaining on crossing brackets for them to be included in the hypothesisin general the lemma of the previous section does not ensure that all the productions in the combined parse are found in the grammars of the member parsersthere is a guarantee of no crossing brackets but there is no guarantee that a constituent in the tree has the same children as it had in any of the three original parsesone can trivially create situations in which strictly binarybranching trees are combined to create a tree with only the root node and the terminal nodes a completely flat structurethis drastic tree manipulation is not appropriate for situations in which we want to assign particular structures to sentencesfor example we may have semantic information associated with the productions in a grammarif the parse contains productions from outside our grammar the machine has no direct method for handling them we have developed a general approach for combining parsers when preserving the entire structure of a parse tree is importantthe combining algorithm is presented with the candidate parses and asked to choose which one is bestthe combining technique must act as a multiposition switch indicating which parser should be trusted for the particular sentencewe call this approach parser switchingonce again we present both a nonparametric and a parametric technique for this taskfirst we present the nonparametric version of parser switching similarity switching the intuition for this technique is that we can measure a similarity between parses by counting the constituents they have in 
commonwe pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similaritiesthis is the parse that is closest to the centroid of the observed parses under the similarity metricthe probabilistic version of this procedure is straightforward we once again assume independence among our various member parsersfurthermore we know one of the original parses will be the hypothesized parse so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in section 21we model each parse as the decisions made to create it and model those decisions as independent eventseach decision determines the inclusion or exclusion of a candidate constituentthe set of candidate constituents comes from the union of all the constituents suggested by the member parsersthis is summarized in equation 5the computation of pfr11mi m k has been sketched before in equations 1 through 4in this case we are interested in finding the maximum probability parse ri and mi is the set of relevant parsing decisions made by parser i ri is a parse selected from among the outputs of the individual parsersit is chosen such that the decisions it made in including or excluding constituents are most probable under the models for all of the parsersthe three parsers were trained and tuned by their creators on various sections of the wsj portion of the penn treebank leaving only sections 22 and 23 completely untouched during the development of any of the parserswe used section 23 as the development set for our combining techniques and section 22 only for final testingthe development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentencesa sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsersthe standard measures for evaluating penn treebank parsing performance are precision and recall of the predicted constituentseach parse is converted into a set of constituents represented as a tuples the set is then compared with the set generated from the penn treebank parse to determine the precision and recallprecision is the portion of hypothesized constituents that are correct and recall is the portion of the treebank constituents that are hypothesizedfor our experiments we also report the mean of precision and recall which we denote by i2 and fmeasurefmeasure is the harmonic mean of precision and recall 2prit is closer to the smaller value of precision and recall when there is a large skew in their valueswe performed three experiments to evaluate our techniquesthe first shows how constituent features and context do not help in deciding which parser to trustwe then show that the combining techniques presented above give better parsing accuracy than any of the individual parsersfinally we show the combining techniques degrade very little when a poor parser is added to the setit is possible one could produce better models by introducing features describing constituents and their contexts because one parser could be much better than the majority of the others in particular situationsfor example one parser could be more accurate at predicting noun phrases than the other parsersnone of the models we have presented utilize features associated with a particular constituent to influence parser preferencethis is not an oversightfeatures and context were initially introduced into the models but they 
refused to offer any gains in performancewhile we cannot prove there are no such useful features on which one should condition trust we can give some insight into why the features we explored offered no gainbecause we are working with only three parsers the only situation in which context will help us is when it can indicate we should choose to believe a single parser that disagrees with the majority hypothesis instead of the majority hypothesis itselfthis is the only important case because otherwise the simple majority combining technique would pick the correct constituentone side of the decision making process is when we choose to believe a constituent should be in the parse even though only one parser suggests itwe call such a constituent an isolated constituentif we were working with more than three parsers we could investigate minority constituents those constituents that are suggested by at least one parser but which the majority of the parsers do not suggestadding the isolated constituents to our hypothesis parse could increase our expected recall but in the cases we investigated it would invariably hurt our precision more than we would gain on recallconsider for a set of constituents the isolated constituent precision parser metric the portion of isolated constituents that are correctly hypothesizedwhen this metric is less than 05 we expect to incur more errors than we will remove by adding those constituents to the parsewe show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemesin table 1 we see with very few exceptions that the isolated constituent precision is less than 05 when we use the constituent label as a featurethe counts represent portions of the approximately 44000 constituents hypothesized by the parsers in the development setin the cases where isolated constituent precision is larger than 05 the affected portion of the hypotheses is negligiblesimilarly figures 1 and 2 show how the isolated constituent precision varies by sentence length and the size of the span of the hypothesized constituentin each figure the upper graph shows the isolated constituent precision and the bottom graph shows the corresponding number of hypothesized constituentsagain we notice that the isolated constituent precision is larger than 05 only in those partitions that contain very few samplesfrom this we see that a finergrained model for parser combination at least for the features we have examined will not give us any additional powerthe results in table 2 were achieved on the development setthe first two rows of the table are baselinesthe first row represents the average accuracy of the three parsers we combinethe second row is the accuracy of the best of the three parsersthe next two rows are results of oracle experimentsthe parser switching oracle is the upper bound on the accuracy that can be achieved on this set in the parser switching frameworkit is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentencesthe maximum precision row is the upper bound on accuracy if we could pick exactly the correct constituents from among the constituents suggested by the three parsersanother way to interpret this is that less than 5 of the correct constituents are missing from the hypotheses generated by the union of the three parsersthe maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridizationwe do not show the numbers for the 
bayes models in table 2 because the parameters involved were established using this setthe precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser and constituent voting is significantly better than parser switching in precision4 constituent voting gives the highest accuracy for parsing the penn treebank reported to datetable 3 contains the results for evaluating our systems on the test set all of these systems were run on data that was not seen during their developmentthe difference in precision between similarity and bayes switching techniques is significant but the difference in recall is notthis is the first set that gives us a fair evaluation of the bayes models and the bayes switching model performs significantly better than its nonparametric counterpartthe constituent voting and naïve bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parserstable 4 shows how much the bayes switching technique uses each of the parsers on the test setparser 3 the most accurate parser was chosen 71 of the time and parser 1 the least accurate parser was chosen 16 of the timeties are rare in bayes switching because the models are finegrained many estimated probabilities are involved in each decisionin the interest of testing the robustness of these combining techniques we added a fourth simple nonlexicalized pcfg parserthe pcfg was trained from the same sections of the penn treebank as the other three parsersit was then tested on section 22 of the treebank in conjunction with the other parsersthe results of this experiment can be seen in table 5the entries in this table can be compared with those of table 3 to see how the performance of the combining techniques degrades in the presence of an inferior parseras seen by the drop in average individual parser performance baseline the introduced parser does not perform very wellthe average individual parser accuracy was reduced by more than 5 when we added this new parser but the precision of the constituent voting technique was the only result that decreased significantlythe bayes models were able to achieve significantly higher precision than their nonparametric counterpartswe see from these results that the behavior of the parametric techniques are robust in the presence of a poor parsersurprisingly the nonparametric switching technique also exhibited robust behaviour in this situationwe have presented two general approaches to studying parser combination parser switching and parse hybridizationfor each experiment we gave an nonparametric and a parametric technique for combining parsersall four of the techniques studied result in parsing systems that perform better than any previously reportedboth of the switching techniques as well as the parametric hybridization technique were also shown to be robust when a poor parser was introduced into the experimentsthrough parser combination we have reduced the precision error rate by 30 and the recall error rate by 6 compared to the best previously published resultcombining multiple highlyaccurate independent parsers yields promising resultswe plan to explore more powerful techniques for exploiting the diversity of parsing methodswe would like to thank eugene charniak michael collins and adwait ratnaparkhi for enabling all of this research by providing us with their parsers and helpful commentsthis work was funded by nsf grant iri9502312both authors are 
members of the center for language and speech processing at johns hopkins university
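the constituent voting and naive bayes constituent combination techniques described above admit a compact sketch. this is an illustration only, not the authors' code: parses are assumed to be represented as sets of (label, start, end) spans, the naive bayes parameters are assumed to have been estimated by counting on the development set as described, and the function and parameter names are invented for the example.

```python
from collections import defaultdict

def constituent_voting(parses, threshold=None):
    """Hybridize parses by majority vote over constituents.
    parses: one set of (label, start, end) tuples per parser; a constituent is kept
    if at least `threshold` parsers propose it (default: strict majority, e.g. 2 of 3)."""
    if threshold is None:
        threshold = len(parses) // 2 + 1
    votes = defaultdict(int)
    for parse in parses:
        for c in parse:
            votes[c] += 1
    return {c for c, v in votes.items() if v >= threshold}

def naive_bayes_constituents(parses, p_true, p_suggest_true, p_suggest_false):
    """Keep constituent c if P(c belongs in the parse | M_1..M_k) > 0.5 under a naive
    Bayes model.  p_true is the prior that a candidate constituent is correct;
    p_suggest_true[i] / p_suggest_false[i] are the probabilities that parser i suggests
    a constituent given that it is / is not correct (all estimated on held-out data)."""
    chosen = set()
    for c in set().union(*parses):
        num, den = p_true, 1.0 - p_true
        for i, parse in enumerate(parses):
            seen = c in parse
            num *= p_suggest_true[i] if seen else 1.0 - p_suggest_true[i]
            den *= p_suggest_false[i] if seen else 1.0 - p_suggest_false[i]
        if num / (num + den) > 0.5:
            chosen.add(c)
    return chosen
```

with three parsers and the default threshold of two votes, the lemma stated earlier guarantees that the voted constituent set contains no crossing brackets.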
W99-0623
exploiting diversity in natural language processing: combining parsers. three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable treebank parsing accuracy. two general approaches are presented, and two combination techniques are described for each approach. both parametric and non-parametric models are explored. the resulting parsers surpass the best previously published performance results for the penn treebank. we improve the best parser f-measure of 89.7 to 91.3 using naive bayes voting on the penn treebank constituent structures. regarding the system combination study, we propose two parser combination schemes: one that selects an entire tree from one of the parsers, and one that builds a new tree by selecting constituents suggested by the initial trees. we perform parse selection by maximizing the expected precision of the selected parse with respect to the set of parses being combined
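the two parser switching strategies summarized here can likewise be sketched in python. again this is an illustrative sketch under the same assumptions as above (parses as sets of (label, start, end) constituents, naive bayes parameters estimated on a development set), not the published implementation; the names are invented for the example.

```python
import math

def similarity_switching(parses):
    """Pick the parse closest to the centroid: the one with the highest sum of pairwise
    similarities, where similarity is the number of constituents two parses share."""
    def total_overlap(i):
        return sum(len(parses[i] & parses[j]) for j in range(len(parses)) if j != i)
    return max(range(len(parses)), key=total_overlap)

def bayes_switching(parses, p_true, p_suggest_true, p_suggest_false):
    """Pick the candidate parse whose include/exclude decisions over the union of all
    suggested constituents are most probable under the naive Bayes model."""
    candidates = set().union(*parses)

    def posterior(c):
        num, den = p_true, 1.0 - p_true
        for i, parse in enumerate(parses):
            seen = c in parse
            num *= p_suggest_true[i] if seen else 1.0 - p_suggest_true[i]
            den *= p_suggest_false[i] if seen else 1.0 - p_suggest_false[i]
        return min(max(num / (num + den), 1e-9), 1.0 - 1e-9)   # clipped for log safety

    def log_prob(parse):
        return sum(math.log(posterior(c) if c in parse else 1.0 - posterior(c))
                   for c in candidates)

    return max(range(len(parses)), key=lambda i: log_prob(parses[i]))
```

both functions return the index of the selected parser's output, so the chosen tree keeps all of that parser's internal structure, which is the motivation for the switching approach when preserving the entire parse tree matters.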
detecting text similarity over short passages exploring linguistic feature combinations via machine learning we present a new composite similarity metric that combines information from multiple linguistic indicators to measure semantic distance between pairs of small textual units several potential features are investigated and an optimal combination is selected via machine learning we discuss a more restrictive definition of similarity than traditional documentlevel and information retrievaloriented notions of similarity and motivate it by showing its relevance to the multidocument text summarization problem results from our system are evaluated against standard information retrieval techniques establishing that the new method is more effective in identifying closely related textual units 1 research goals in this paper we focus on the problem of detecttwo small textual units contain common information as a necessary step towards extracting such common information and constructing thematic groups of text units across multiple documents pieces of text has many applications most research in this area has centered on detecting similarity between documents willet 1988 similarity between a query and a document salton 1989 or between a query and a segment of a document callan 1994 while effective techniques have been developed for document clustering and classification which depend on interdocument similarity measures these techniques mostly on shared words or occasionally collocations of words smeaton 1992 when larger units of text are compared overlap may be sufficient to detect similarity but when the units of text are small simple surface matching of words and phrases is less likely to succeed since the number of potential matches is smaller our task differs from typical text matching applications not only in the smaller size of the text units compared but also in its overall goal our notion of similarity is more restrictive than topical similaritywe provide a detailed definition in the next section we aim to recover small textual units from a of documents so that each text unit within a set describes the same action syswhich is fully implemented is motivated by the need for determining similarity between small pieces of text across documents that potentially span different topics during multidocument summarization it serves as the first component of a domainindependent multisummarization system which generates a through reformulation barzilay al by combining information from these similar text passages we address concerns of sparse data and the narrower than topical definition of similarity by exploring several linguistic features in addition to shared words or collocations as indicators of similarity our include linked noun phrases wordnet synonyms and similar verbs we also define comover pairs of features provide an effective method for aggregating the feature values into a similarity measure using machine learning and present results 203 on a manually annotated corpus of 10345 pairs of compared paragraphs our new features and especially the composite ones are shown to outperform traditional techniques such as tfidf buckley 1985 salton 1989 for determining similarity over small text units 2 definition of similarity similarity is a complex concept which has been widely discussed in the linguistic philosophical and information theory communities for example frawley 1992 discusses all semantic typing in terms of two mechanisms the detection of similarity and difference jackendoff 1983 argues 
that standard semantic relations such as synonymy paraphrase redundancy and entailment all result from judgments of likeness whereas antonymy contradiction and inconsistency derive from judgments of difference losee 1998 reviews notions of similarity and their impact on information retrieval techniques for our task we define two text units as similar if they share the same focus on a common concept actor object or action in addition the common actor or object must perform or be subjected to the same action or be the subject of the same description for example figure 1 shows three input text fragments taken from the tdt pilot corpus all from the same topic on the forced landing of a yous helicopter in north korea we consider units and in figure 1 to be similar because they both focus on the same event with the same primary participant on the other hand unit in figure 1 is not similar to either or although all three refer to a helicopter the primary focus in is on the emergency landing rather than the loss of contact we discuss an experimental validation of our similarity definition in section 52 after we introduce the corpus we use in our experiments 3 related work although there is related empirical research on determining text similarity primarily in the information retrieval community there are two major differences between the goals of this earlier work and the problem we address in this an oh58 helicopter carrying a crew of two was on a routine training orientation when contact was lost at about 1130 am saturday quotthere were two people on boardquot said bacon quotwe lost radar contact with the helicopter about 915 est quot an oh58 yous military scout helicopter made an emergency landing in north korea at about 915 pm est friday the defense department said figure 1 input text units paper first the notion of similarity as defined in the previous section is more restrictive than the traditional definition of similarity anderberg 1973 willet 1988 standard notions of similarity generally involve the creation of a vector or profile of characteristics of a text fragment and then computing on the basis of frequencies the distance between vectors to determine conceptual distance salton and buckley 1988 salton 19891 features typically include stemmed words although sometimes multiword units and collocations have been used smeaton 1992 as well as typological characteristics such as thesaural features the distance between vectors for one text and another then determines closeness or similarity van rijsbergen 1979 in some cases the texts are represented as vectors of sparse ngrams of word occurrences and learning is applied over those vectors schapire and singer 1999 but since our definition of similarity is oriented to the smallsegment goal we make more finegrained distinctions thus a set of passages that would probably go into the same class by standard ir criteria would be further separated by our methods second we have developed a method that functions over pairs of small units of text so the size of the input text to be compared is different this differs from documenttodocument 204 or querytodocument comparison a closely related problem is that of matching a query to the relevant segment from a longer document callan 1994 kaszkiel and zobel 1998 which primarily involves determining which segment of a longer document is relevant to a query whereas our focus is on which segments are similar to each other in both cases we have less data to compare and thus have to explore additional or more 
informative indicators of similarity 4 methodology compute a feature vector over a pair of texunits where features are either of one characteristic or consisting of pairs of primitive features 41 primitive features draw on a number of linguistic approaches to text analysis and are based on both single words and simplex noun phrases each of these syntactic and semantic several variations we thus consider following potential matches between text units word cooccurrence sharing a single word between text units variations of this feature restrict matching to cases where the parts of speech of the words also match or relax it to cases where just the stems of the two words are identical matching noun phrases the linkit tool wacholder 1998 to identify simplex noun phrases and match those that share the same head wordnet synonyms provides sense information placing in sets of synonyms match words that appear in the same synset variations on this feature restrict the words considered to a specific partofspeech class common semantic classes for verbs levin 1993 semantic classes for verbs have been found to be useful for determining document type and text similarity klavans and kan 1998 we match two verbs that share the same semantic class shared proper nouns nouns are using the set abal variations on proper noun matching include restricting the proper noun type to a person place or an organization subcategories are also extracted entity finder in order to normalize for text length and frequency effects we experimented with two types of optional normalization of feature values the first is for text length where each feature value is normalized by the of the textual units in the pair for of textual units feature values are divided by length x length this operation removes potential bias in favor longer text the second type of normalization we examined was based on the relative frequency of occurrence of each primitive this is motivated the fact that infrequently primiare likely to have higher impact on similarity than primitives which match more frequently we perform this normalization in manner similar to the idf part of every primitive element is associated with a value which is the number of textual units in which the primitive appeared in the corpus for a primitive element which compares single words this is the number of textual units which contain that word in the corpus for a noun phrase this is the number of textual units that contain noun phrases that share the same head and similarly for other primitive types we multiply each feature value by number of textual units number of textual units containing this primitive since each normalization is optional there are four variations for each primitive feature 42 composite features addition to the above that compare single items from each text unit we which combine pairs of primitive features composite features are defined by placing different types of restrictions on the participating primitive features 205 figure 2 a composite feature over word primitives with a restriction on order would count the pair quottwoquot and quotcontactquot as a match because they occur with the same relative order in both textual units an 01158 helicopter carrying a crew of orientation when c ntac as lost was on a routine training out 1130 am saturday quotthere weretwolpeople on boardquot said bacon quotwe lost radar with the helicopter about 915 est quot figure 3 a composite feature over word primitives with a restriction on distance would match on the pair 
quotlostquot and quotcontactquot because they occur within two words of each other in both textual units 0145theligopterjearryinga crew of two was on a routine training rientation when contact was ft t about 1130 am saturday quott ere were two people on boardquot said bacon quotwetradar contact with thelielico ter bout 915 est quot figure 4 a composite feature with restrictions on the primitives type one primitive must be a matching simplex noun phrase while the other primitive must be a matching verb the example shows a pair of textual units where this composite feature detects a valid match an 01158 helicopter carrying a crew of two was on a routine training orientation when vas li t about 1130 am saturday quotthere were two people on boardquot said bacon quotw with the helicopter about 915 est quot ordering pairs of primitive elements are required to have the same relative order in both textual units distance pairs of primitive elements are required to occur within a certain distance in both textual units the maximum distance between the primitive elements can vary as an additional parameter a distance of one matches rigid collocations whereas a distance of five captures related primitives within a region of the text unit smeaton 1992 smadja 1993 primitive element of the pair of primitive elements can be restricted to a specific primitive allowing more expressiveness in the composite features for example we can restrict one of the primitive features to be a simplex noun phrase and the other to be a verb then two noun phrases one from each text unit must match according to the rule for matching simplex noun phrases and two verbs must match according to the rule for verbs see figure this particular combination loosely approximates grammatical relations eg matching subjectverb pairs verbs can also be matched by the first this type of normalization also uses equation but averages the normalization values of each primitive in the composite feature 43 learning a classifier for each pair of text units we compute a vector of primitive and composite feature values to determine whether the units match overall we employ a machine learning algorithm rip per cohen 1996 a widely used and effective rule induction system ripper is trained over a corpus of manually marked pairs of units we discuss the specifics of our corpus and of the annotation process in the next session we experwith varying ripper ratio measures the cost of a false positive relative to that of a false negative and thus controls the relative weight of precision versus recall this is an important step in dealing with the sparse data problem most text units are not similar given our restrictive definition and thus positive instances are rare 5 results 51 the evaluation corpus for evaluation we use a set of articles already classified into topical subsets which we obtained from the reuters part of the 1997 pilot topic detection and tracking corpus the tdt corpus developed by nist and darpa is a collection of 16000 news articles from reuters and cnn where many of the articles and transcripts have been manually grouped into 25 categories each of which corresponds a single event the reuters part of the corpus we selected five of the larger categories and extracted all articles assigned to them from several randomly chosen days for a total of 30 articles since paragraphs in news stories tend to be shorttypically one or two sentencesin this study we use paragraphs as our small text units although sentences would also be a possibility in 
total we have 264 text units and 10345 comparisons between units as comparisons are made between all pairs of paragraphs from the same topic the total number of comparisons is equal to 2 the number of paragraphs in all selected articles from topical category i training of our machine learning component was done by threefold crossvalidation ransplitting the pairs paragraphs into three equallysized subsets in each of the three runs two of these subsets were used for training and one for testing to create a reference standard the entire collection of 10345 paragraph pairs was marked for by two reviewers who were given definition and detailed instructions each reindependently marked each paragraphs as similar or not similar subsequently the two reviewers jointly examined cases where was disagreement discussed reasons reconciled the differences 52 experimental validation of the in order to independently validate our definiof similarity we performed additional experiments in the first we asked three addijudges to determine a ransample 40 paragraph pairs high agreement between judges would indicate that our definition of similarity reflects an objective reality and can be mapped unambiguously to an operational procedure for marking text units as similar or not at the same time it would also validate the judgments between text units that we use for our experiments this task judges were given opportuprovide reasons for claiming similarity or dissimilarity and comments on the task were for future analysis three additional 207 judges agreed with the manually marked and standardized corpus on 976 of the comparisons unfortunately approximately 97 of the comparisons in both our model and the subsequent validation experiment receive the value quotnot similarquot this large percentage is due to our finegrained notion of similarity and is parallel to happens in randomly sampled collections since in that case most documents will not be relevant to any given query nevertheless we can account for the high probability of interreviewer agreement expected by chance 097097 09418 by referring to the kappa statistic cohen 1960 carletta 1996 the kappa statistic is defined as pa po k the probability that two reviewers agree in practice and po is the probability that they would agree solely by chance in our case 09418 and indicating that the observed agreement by the is indeed if po is estimated from the particular sample used in this experiment rather than from our entire corpus it would be only 09 producing a value of 076 in addition to this validation experiment that used randomly sampled pairs of paragraphs we performed a balanced experiment by randomly selecting 50 of the dissimilar pairs and 50 of the similar pairs in a manner that guaranteed generation an independent pairs in this subset were rated for similarity by two additional independent reviewers who agreed on their decisions 91 of the time versus 50 expected chance in this case thus we feel confident in the reliability of our annotation is always between 0 and 1 with 0 indicating no better agreement than expected by chance and 1 indicating perfect agreement guarantee independence pairs of paragraphs were randomly selected for inclusion in the sample a pair 13 already been included in the sample process and can use the annotated corpus to assess the performance of our similarity measure and compare it to measures proposed earlier in the information retrieval literature 53 performance comparisons we compare the performance of our system to three other 
in this paper we focus on the problem of detecting whether two small textual units contain common information as a necessary step towards extracting such common information and constructing thematic groups of text units across multiple documentsidentifying similar pieces of text has many applications most research in this area has centered on detecting similarity between documents willet 1988 similarity between a query and a document salton 1989 or between a query and a segment of a document callan 1994while effective techniques have been developed for document clustering and classification which depend on interdocument similarity measures these techniques mostly rely on shared words or occasionally collocations of words smeaton 1992when larger units of text are compared overlap may be sufficient to detect similarity but when the units of text are small simple surface matching of words and phrases is less likely to succeed since the number of potential matches is smallerour task differs from typical text matching applications not only in the smaller size of the text units compared but also in its overall goalour notion of similarity is more restrictive than topical similaritywe provide a detailed definition in the next sectionwe aim to recover sets of small textual units from a collection of documents so that each text unit within a given set describes the same actionour system which is fully implemented is further motivated by the need for determining similarity between small pieces of text across documents that potentially span different topics during multidocument summarizationit serves as the first component of a domainindependent multidocument summarization system mckeown et al 1999 which generates a summary through text reformulation barzilay et al 1999 by combining information from these similar text passageswe address concerns of sparse data and the narrower than topical definition of similarity by exploring several linguistic features in addition to shared words or collocations as indicators of text similarityour primitive features include linked noun phrases wordnet synonyms and semantically similar verbswe also define composite features over pairs of primitive featureswe then provide an effective method for aggregating the feature values into a similarity measure
using machine learning and present results on a manually annotated corpus of 10345 pairs of compared paragraphsour new features and especially the composite ones are shown to outperform traditional techniques such as tfidf buckley 1985 salton 1989 for determining similarity over small text unitssimilarity is a complex concept which has been widely discussed in the linguistic philosophical and information theory communitiesfor example frawley 1992 discusses all semantic typing in terms of two mechanisms the detection of similarity and differencejackendoff 1983 argues that standard semantic relations such as synonymy paraphrase redundancy and entailment all result from judgments of likeness whereas antonymy contradiction and inconsistency derive from judgments of differencelosee 1998 reviews notions of similarity and their impact on information retrieval techniquesfor our task we define two text units as similar if they share the same focus on a common concept actor object or actionin addition the common actor or object must perform or be subjected to the same action or be the subject of the same descriptionfor example figure 1 shows three input text fragments taken from the tdt pilot corpus all from the same topic on the forced landing of a us helicopter in north korea one of these fragments reads an oh58 helicopter carrying a crew of two was on a routine training orientation when contact was lost at about 1130 am saturdaywe consider units and in figure 1 to be similar because they both focus on the same event with the same primary participant on the other hand unit in figure 1 is not similar to either or although all three refer to a helicopter the primary focus in is on the emergency landing rather than the loss of contactwe discuss an experimental validation of our similarity definition in section 52 after we introduce the corpus we use in our experimentsalthough there is related empirical research on determining text similarity primarily in the information retrieval community there are two major differences between the goals of this earlier work and the problem we address in this paperfirst the notion of similarity as defined in the previous section is more restrictive than the traditional definition of similarity anderberg 1973 willet 1988standard notions of similarity generally involve the creation of a vector or profile of characteristics of a text fragment and then computing on the basis of frequencies the distance between vectors to determine conceptual distance salton and buckley 1988 salton 1989features typically include stemmed words although sometimes multiword units and collocations have been used smeaton 1992 as well as typological characteristics such as thesaural featuresthe distance between vectors for one text and another then determines closeness or similarity van rijsbergen 1979in some cases the texts are represented as vectors of sparse ngrams of word occurrences and learning is applied over those vectors schapire and singer 1999but since our definition of similarity is oriented to the smallsegment goal we make more finegrained distinctionsthus a set of passages that would probably go into the same class by standard ir criteria would be further separated by our methodssecond we have developed a method that functions over pairs of small units of text so the size of the input text to be compared is differentthis differs from documenttodocument or querytodocument comparisona closely related problem is that of matching a query to the relevant segment from a longer document callan 1994 kaszkiel and
zobel 1998 which primarily involves determining which segment of a longer document is relevant to a query whereas our focus is on which segments are similar to each otherin both cases we have less data to compare and thus have to explore additional or more informative indicators of similaritywe compute a feature vector over a pair of textual units where features are either primitive consisting of one characteristic or composite consisting of pairs of primitive featuresour features draw on a number of linguistic approaches to text analysis and are based on both single words and simplex noun phrases each of these morphological syntactic and semantic features has several variationswe thus consider the following potential matches between text units identified using the alembic tool set aberdeen et al 1995variations on proper noun matching include restricting the proper noun type to a person place or an organization in order to normalize for text length and frequency effects we experimented with two types of optional normalization of feature valuesthe first is for text length where each feature value is normalized by the size of the textual units in the pairthus for a pair of textual units a and b the feature values are divided by the sizes of a and bthis operation removes potential bias in favor of longer text unitsthe second type of normalization we examined was based on the relative frequency of occurrence of each primitivethis is motivated by the fact that infrequently matching primitive elements are likely to have a higher impact on similarity than primitives which match more frequentlywe perform this normalization in a manner similar to the idf part of tfidf salton 1989every primitive element is associated with a value which is the number of textual units in which the primitive appeared in the corpusfor a primitive element which compares single words this is the number of textual units which contain that word in the corpus for a noun phrase this is the number of textual units that contain noun phrases that share the same head and similarly for other primitive typeswe multiply each feature value by log (total number of textual units / number of textual units containing this primitive) since each normalization is optional there are four variations for each primitive featurein addition to the above primitive features that compare single items from each text unit we use composite features which combine pairs of primitive featurescomposite features are defined by placing different types of restrictions on the participating primitive features the primitive elements can be restricted to a specific primitive allowing more expressiveness in the composite featuresfor example we can restrict one of the primitive features to be a simplex noun phrase and the other to be a verb then two noun phrases one from each text unit must match according to the rule for matching simplex noun phrases and two verbs must match according to the rule for verbs see figure 4 this particular combination loosely approximates grammatical relations eg matching subjectverb pairssince these restrictions can be combined many different composite features can be defined although our empirical results indicate that the most successful tend to include a distance constraintas we put more restrictions on a composite feature the fewer times it occurs in the corpus however some of the more restrictive features are most effective in determining similarityhence there is a balance between the
discriminatory power of these features and their applicability to a large number of casescomposite features are normalized as primitive features are this type of normalization also uses the same log equation but averages the normalization values of each primitive in the composite featurefor each pair of text units we compute a vector of primitive and composite feature valuesto determine whether the units match overall we employ a machine learning algorithm ripper cohen 1996 a widely used and effective rule induction systemripper is trained over a corpus of manually marked pairs of units we discuss the specifics of our corpus and of the annotation process in the next sectionwe experiment with varying ripper loss ratio which measures the cost of a false positive relative to that of a false negative and thus controls the relative weight of precision versus recallthis is an important step in dealing with the sparse data problem most text units are not similar given our restrictive definition and thus positive instances are rarefor evaluation we use a set of articles already classified into topical subsets which we obtained from the reuters part of the 1997 pilot topic detection and tracking corpusthe tdt corpus developed by nist and darpa is a collection of 16000 news articles from reuters and cnn where many of the articles and transcripts have been manually grouped into 25 categories each of which corresponds to a single event using the reuters part of the corpus we selected five of the larger categories and extracted all articles assigned to them from several randomly chosen days for a total of 30 articlessince paragraphs in news stories tend to be shorttypically one or two sentencesin this study we use paragraphs as our small text units although sentences would also be a possibilityin total we have 264 text units and 10345 comparisons between unitsas comparisons are made between all pairs of paragraphs from the same topic the total number of comparisons is equal to the sum over the topical categories of ni (ni - 1) / 2 where ni is the number of paragraphs in all selected articles from topical category itraining of our machine learning component was done by threefold crossvalidation randomly splitting the 10345 pairs of paragraphs into three equallysized subsetsin each of the three runs two of these subsets were used for training and one for testingto create a reference standard the entire collection of 10345 paragraph pairs was marked for similarity by two reviewers who were given our definition and detailed instructionseach reviewer independently marked each pair of paragraphs as similar or not similarsubsequently the two reviewers jointly examined cases where there was disagreement discussed reasons and reconciled the differencesin order to independently validate our definition of similarity we performed two additional experimentsin the first we asked three additional judges to determine similarity for a random sample of 40 paragraph pairshigh agreement between judges would indicate that our definition of similarity reflects an objective reality and can be mapped unambiguously to an operational procedure for marking text units as similar or notat the same time it would also validate the judgments between text units that we use for our experiments in this task judges were given the opportunity to provide reasons for claiming similarity or dissimilarity and comments on the task were logged for future analysisthe three additional judges agreed with the manually marked and standardized corpus on 976 of the comparisonsunfortunately approximately 97 of the
comparisons in both our model and the subsequent validation experiment receive the value not similarthis large percentage is due to our finegrained notion of similarity and is parallel to what happens in randomly sampled ir collections since in that case most documents will not be relevant to any given querynevertheless we can account for the high probability of interreviewer agreement expected by chance 0.97 x 0.97 + 0.03 x 0.03 = 0.9418 by referring to the kappa statistic cohen 1960 carletta 1996the kappa statistic is defined as k = (pa - po) / (1 - po) where pa is the probability that two reviewers agree in practice and po is the probability that they would agree solely by chancein our case pa = 0.976 po = 0.9418 and k = 0.5876 indicating that the observed agreement by the reviewers is indeed significant if po is estimated from the particular sample used in this experiment rather than from our entire corpus it would be only 0.9 producing a value of 0.76 for k in addition to this validation experiment that used randomly sampled pairs of paragraphs we performed a balanced experiment by randomly selecting 50 of the dissimilar pairs and 50 of the similar pairs in a manner that guaranteed generation of an independent sample pairs in this subset were rated for similarity by two additional independent reviewers who agreed on their decisions 91 of the time versus 50 expected by chance in this case k = 0.82thus we feel confident in the reliability of our annotation process and can use the annotated corpus to assess the performance of our similarity measure and compare it to measures proposed earlier in the information retrieval literaturewe compare the performance of our system to three other methodsfirst we use standard tfidf a method that with various alterations remains at the core of many information retrieval and text matching systems salton and buckley 1988 salton 1989we compute the total frequency of words in each text unitwe also compute the number of units each word appears in in our training set then each text unit is represented as a vector of tfidf scores where the score of a word is its frequency in the unit multiplied by log (total number of units / number of units containing the word)similarity between text units is measured by the cosine of the angle between the corresponding two vectors a further cutoff point is selected to convert similarities to hard decisions of similar or not similar different cutoffs result in different tradeoffs between recall and precisionsecond we compare our method against a standard widely available information retrieval system developed at cornell university smart buckley 1985 we used version 11.0 of smart released in july 1992smart utilizes a modified tfidf measure plus stemming and a fairly sizable stopword listthird we use as a baseline method the default selection of the most frequent category ie not similarwhile this last method cannot be effectively used to identify similar paragraphs it offers a baseline for the overall accuracy of any more sophisticated technique for this taskour system was able to recover 366 of the similar paragraphs with 605 precision as shown in table 1in comparison the unmodified tfidf approach obtained only 326 precision when recall is 391 ie close to our system recall and only 208 recall at precision of 622 comparable to our classifier table 1 lists recall precision and accuracy for the different similarity metrics machine learning over linguistic indicators 36.6 60.5 98.8 tfidf 30.0 47.4 97.2 smart 29.1 48.3 97.1 default choice 0 undefined 97.5 for comparison purposes we list the average recall precision and accuracy obtained by tfidf and smart at the two points in the precisionrecall curve identified for each method in the textsmart offered only a small improvement over the base tfidf implementation and significantly underperformed our method obtaining 341 precision at recall of 367 and 215
recall at 624 precisionthe default method of always marking a pair as dissimilar obtains of course 0 recall and undefined precisionfigure 5 illustrates the difference between our system and straight tfidf at different points of the precisionrecall spectrumwhen overall accuracy is considered the numbers are much closer together 988 for our approach 966 and 978 for tfidf on the two pr points mentioned for that method above 965 and 976 for smart again at the two pr points mentioned for smart earlier and 975 for the default baseline nevertheless since the challenge of identifying sparsely occurring similar small text units is our goal the accuracy measure and the baseline technique of classifying everything as not similar are included only for reference but do not reflect our taskwe computed statistics on how much each feature helps in identifying similarity summarized in table 2primitive features are named according to the type of the feature composite feature names indicate the restrictions applied to primitivesfor example the composite feature distance 4 restricts a pair of matching primitives to occur within a relative distance of four wordsif the composite feature also restricts the types of the primitives in the pair the name of the restricting primitive feature is added to the composite feature namefor example the feature named verb distance 5 requires one member of the pair to be a verb and the relative distance between the primitives to be at most fivethe second column in table 2 shows whether the feature value has been normalized according to its overall rarity while the third column indicates the actual threshold used in decisions assuming that only this feature is used for classificationthe fourth column shows the applicability of that feature that is the percentage of paragraph pairs for which this feature would apply finally the fifth and sixth columns show the recall and precision on identifying similar paragraphs for each independent featurenote that some features have low applicability over the entire corpus but target the hardtofind similar pairs resulting in significant gains in recall and precisiontable 2 presents a selected subset of primitive and composite features in order to demonstrate our resultsfor example it was not surprising to observe that the most effective primitive features in determining similarity are any word simplex np and noun while other primitives such as verb were not as effective independentlythis is to be expected since nouns name objects entities and concepts and frequently exhibit more sense constancyin contrast verbs are functions and tend to shift senses in a more fluid fashion depending on contextfurthermore our technique does not label phrasal verbs which are a major source of verbal ambiguity in englishwhereas primitive features viewed independently might not have a directly visible effect on identifying similarity when used in composite features they lead to some novel resultsthe most pronounced case of this is for verb which in the composite feature verb distance 5 can help identify similarity effectively as seen in table 2this composite feature approximates verbargument and verbcollocation relations which are strong indicators of similarityat the same time the more restrictive a feature is the fewer occurrences of that feature appear in the training setthis suggests that we could consider adding additional features suggested
by current results in order to further refine and improve our similarity identification algorithmwe have presented a new method to detect similarity between small textual units which combines primitive and composite features using machine learningwe validated our similarity definition using human judges applied our method to a substantial number of paragraph pairs from news articles and compared results to baseline and standard information retrieval techniquesour results indicate that our method outperforms the standard techniques for detecting similarity and the system has been successfully integrated into a larger multipledocument summarization system mckeown et al 1999we are currently working on incorporating a clustering algorithm in order to give as output a set of textual units which are mutually similar rather than just pairwise similarfuture work includes testing on textual units of different size comparing with additional techniques proposed for document similarity in the information retrieval and computational linguistics literature and extending the feature set to incorporate other types of linguistic information in the statistical learning methodwe are grateful to regina barzilay hongyan jing kathy mckeown shimei pan and yoram singer for numerous discussions of earlier versions of this paper and for their help with setting up and running ripper and smartthis research has been supported in part by an nsf stimulate grant iri961879any opinions findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the national science foundation
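As a concrete illustration of the tf*idf baseline described above (raw term frequencies weighted by an idf-style factor, cosine similarity between unit vectors, and a hard cutoff to decide similar versus not similar), a minimal sketch is given below. This is not the authors' implementation (they used SMART as the reference system and RIPPER over linguistic features for their own method); the weighting variant, the function names and the cutoff value are illustrative assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(units):
    """Represent each text unit (a list of tokens) as a sparse tf*idf vector.
    idf here is log(N / df), one common variant; exact weighting differs across systems."""
    n = len(units)
    df = Counter()
    for unit in units:
        for word in set(unit):
            df[word] += 1
    return [{w: tf * math.log(n / df[w]) for w, tf in Counter(unit).items()}
            for unit in units]

def cosine(u, v):
    """Cosine of the angle between two sparse vectors stored as dicts."""
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def similar(u, v, cutoff=0.3):
    """Hard decision; moving the cutoff trades recall against precision."""
    return cosine(u, v) >= cutoff
```

Sweeping the cutoff over held-out pairs reproduces the recall/precision trade-off reported for the baseline.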
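The kappa figures used above to validate the annotation follow directly from the definition k = (pa - po) / (1 - po); the helper below is ours, but the numbers plugged in are the ones reported in the text.

```python
def kappa(p_observed, p_chance):
    """Cohen's kappa: observed agreement corrected for the agreement expected by chance."""
    return (p_observed - p_chance) / (1.0 - p_chance)

# Random-sample experiment: observed agreement 0.976,
# chance agreement 0.97 * 0.97 + 0.03 * 0.03 = 0.9418
print(round(kappa(0.976, 0.9418), 4))   # 0.5876

# Balanced experiment: observed agreement 0.91 against 0.50 expected by chance
print(round(kappa(0.91, 0.50), 2))      # 0.82
```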
W99-0625
detecting text similarity over short passages exploring linguistic feature combinations via machine learningwe present a new composite similarity metric that combines information from multiple linguistic indicators to measure semantic distance between pairs of small textual unitsseveral potential features are investigated and an optimal combination is selected via machine learningwe discuss a more restrictive definition of similarity than traditional documentlevel and information retrievaloriented notions of similarity and motivate it by showing its relevance to the multidocument text summarization problemresults from our system are evaluated against standard information retrieval techniques establishing that the new method is more effective in identifying closely related textual unitsat the level of short passages or sentences we go beyond ngram taking advantage of wordnet synonyms as well as ordering and distance between shared words
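To make the feature combination summarized above concrete, the sketch below shows the idf-style rarity normalization (multiplying a feature value by log(total units / units containing the primitive)) and one possible reading of a distance-restricted composite feature. Interpreting the distance restriction as requiring two primitive matches to lie close together in both units is our own reading of the description, and all names, data layouts and thresholds are illustrative rather than taken from the paper.

```python
import math

def rarity_weight(raw_value, units_containing_primitive, total_units):
    """Scale a primitive feature value by log(total units / units containing the primitive),
    so that rarely matching primitives contribute more to the similarity decision."""
    return raw_value * math.log(total_units / units_containing_primitive)

def composite_feature(matches, max_distance=5, require_pos=None):
    """Count pairs of primitive matches that occur within max_distance tokens of each
    other in both text units. Each match is a dict with token positions 'pos_a' and
    'pos_b' and a part-of-speech tag 'pos_tag'. require_pos (e.g. 'VB') optionally
    forces at least one member of the pair to carry that tag, as in 'verb distance 5'."""
    count = 0
    for i, m1 in enumerate(matches):
        for m2 in matches[i + 1:]:
            close = (abs(m1["pos_a"] - m2["pos_a"]) <= max_distance
                     and abs(m1["pos_b"] - m2["pos_b"]) <= max_distance)
            tagged = (require_pos is None
                      or m1["pos_tag"].startswith(require_pos)
                      or m2["pos_tag"].startswith(require_pos))
            if close and tagged:
                count += 1
    return count
```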
cascaded grammatical relation assignment in this paper we discuss cascaded memory based grammatical relations assignment in the first stages of the cascade we find chunks of several types and label them with their adverbial function in the last stage we assign grammatical relations to pairs of chunks we studied the effect of adding several levels to this cascaded classifier and we found that even the less performing chunkers enhanced the performance of the relation finder when dealing with large amounts of text finding structure in sentences is often a useful preprocessing steptraditionally full parsing is used to find structure in sentenceshowever full parsing is a complex task and often provides us with more information then we needfor many tasks detecting only shallow structures in a sentence in a fast and reliable way is to be preferred over full parsingfor example in information retrieval it can be enough to find only simple nps and vps in a sentence for information extraction we might also want to find relations between constituents as for example the subject and object of a verbin this paper we discuss some memorybased shallow parsing techniques to find labeled chunks and grammatical relations in a sentenceseveral mb modules have been developed in previous work such as a pos tagger a chunker and a grammatical relation assigner the questions we will answer in this paper are can we reuse these modules in a cascade of classifierswhat is the effect of cascadingwill errors at a lower level percolate to higher modulesrecently many people have looked at cascaded andor shallow parsing and or assignmentabney is one of the first who proposed to split up parsing into several cascadeshe suggests to first find the chunks and then the dependecies between these chunkscrefenstette describes a cascade of finitestate transducers which first finds noun and verb groups then their heads and finally syntactic functionsbrants and skut describe a partially automated annotation tool which constructs a complete parse of a sentence by recursively adding levels to the tree use cascaded processing for full parsing with good resultsargamon et at applied memorybased sequence learning to np chunking and subjectobject identificationhowever their subject and object finders are independent of their chunker drawing from this previous work we will explicitly study the effect of adding steps to the grammatical relations assignment cascadethrough experiments with cascading several classifiers we will show that even using imperfect classifiers can improve overall performance of the cascaded classifierwe illustrate this claim on the task of finding grammatical relations to verbs in textthe cr assigner uses several sources of information step by step such as several types of xp chunks and adverbial functions assigned to these chunks since not all of these entities are predicted reliably it is the question whether each source leads to an improvement of the overall gr assignmentin the rest of this paper we will first briefly describe memorybased learning in section 2in section 31 we discuss the chunking classifiers that we later use as steps in the cascadesection 32 describes the basic gr classifiersection 33 presents the architecture and results of the cascaded gr assignment experimentswe discuss the results in section 4 and conclude with section 5memorybased learning keeps all training data in memory and only abstracts at classification time by extrapolating a class from the most similar item in memoryin recent work 
daelemans et al have shown that for typical natural language processing tasks this approach is at an advantage because it also "remembers" exceptional lowfrequency cases which are useful to extrapolate frommoreover automatic feature weighting in the similarity metric of an mb learner makes the approach wellsuited for domains with large numbers of features from heterogeneous sources as it embodies a smoothingbysimilarity method when data is sparse we have used the following mbl algorithms ib1 the distance between a test item and each memory item is defined as the number of features for which they have a different value ib1ig ib1 with information gain to weight the cost of a feature value mismatch during comparisonigtree in this variant a decision tree is created with features as tests and ordered according to the information gain of the features as a heuristic approximation of the computationally more expensive ib1 variantsfor more references and information about these algorithms we refer to for other memorybased approaches to parsing see and in this section we describe the stages of the cascadethe very first stage consists of a memorybased partofspeech tagger for which we refer to the next three stages involve determining boundaries and labels of chunkschunks are nonrecursive nonoverlapping constituent parts of sentences first we simultaneously chunk sentences into np vp prep adjp and advpchunksas these chunks are nonoverlapping no words can belong to more than one chunk and thus no conflicts can ariseprepchunks are the prepositional part of pps thus excluding the nominal partthen we join a prepchunk and one or more coordinated npchunks into a ppchunkfinally we assign adverbial function labels to all chunksin the last stage of the cascade we label several types of grammatical relations between pairs of words in the sentencethe data for all our experiments was extracted from the penn treebank ii wall street journal corpus for all experiments we used sections 0019 as training material and 2024 as test materialsee section 4 for results on other traintest set splittingsfor evaluation of our results we use the precision and recall measuresprecision is the percentage of predicted chunksrelations that are actually correct recall is the percentage of correct chunksrelations that are actually foundfor convenient comparisons of only one value we also list the fβ=1 value which equals (β^2 + 1) * precision * recall / (β^2 * precision + recall) with β = 1 in the first experiment described in this section the task is to segment the sentence into chunks and to assign labels to these chunksthis process of chunking and labeling is carried out by assigning a tag to each word in a sentence lefttorightramshaw and marcus first assigned a chunk tag to each word in the sentence i for inside a chunk o for outside a chunk and b for inside a chunk but the preceding word is in another chunkas we want to find more than one kind of chunk we have to further differentiate the iob tags as to which kind of chunk the word is inwith the extended iob tag set at hand we can tag the sentence after having found prep np and other chunks we collapse preps and nps to pps in a second stepwhile the gr assigner finds relations between vps and other chunks the pp chunker finds relations between prepositions and nps (pps containing anything else than nps are not searched for) in a way similar to gr assignment in the last chunkinglabeling step we assign adverbial functions to chunksthe classes are the adverbial function labels from the treebank loc tmp dir prp mnr ext or "-" for none of the formertable 1 gives an overview of the results of the chunkinglabeling experiments using
the following algorithms determined by validation on the train set ib1ig for xpchunking and igtree for ppchunking and advfuncs assignment (np vp adjp advp and prepchunks are found simultaneously but for convenience precision and recall values are given separately for each type of chunk)in grammatical relation assignment we assign a gr to pairs of words in a sentencein our experiments one of these words is always a verb since this yields the most important grsthe other word is the head of the phrase which is annotated with this grammatical relation in the treebanka preposition is the head of a pp a noun of an np and so ondefining relations to hold between heads means that the algorithm can for example find a subject relation between a noun and a verb without necessarily having to make decisions about the precise boundaries of the subject npsuppose we had the postagged sentence shown in figure 1 and we wanted the algorithm to decide whether and if so how miller is related to the first verb organizedwe then construct an instance for this pair of words by extracting a set of feature values from the sentencethe instance contains information about the verb and the focus a feature for the word form and a feature for the pos of bothit also has similar features for the local context of the focusexperiments on the training data suggest an optimal context width of two elements to the left and one to the rightin the present case elements are words or punctuation signsin addition to the lexical and the local context information we include superficial information about clause structure the first feature indicates the distance from the verb to the focus counted in elementsa negative distance means that the focus is to the left of the verbthe second feature contains the number of other verbs between the verb and the focusthe third feature is the number of intervening commasthe features were chosen by manual "feature engineering"table 2 shows the complete instance for millerorganized in row 5 together with the other first four instances for the sentence (features 67 89 and 1213 describe the context words features 1011 the focus word empty contexts are indicated by the value "-" for all features)the class is mostly "-" to indicate that the word does not have a direct grammatical relation to organizedother possible classes are those from a list of more than 100 different labels found in the treebankthese are combinations of a syntactic category and zero one or more functions eg npsbj for subject npprd for predicative object np for direct object pploc for locative pp adjunct pplocclr for subcategorised locative pp etceteraaccording to their information gain values features are ordered with decreasing importance as follows 1113 10 1 2 8 12 9 6 4 7 3 5intuitively this ordering makes sensethe most important feature is the pos of the focus because this determines whether it can have a gr to a verb at all and what kind of relation is possiblethe pos of the following word is important because eg a noun followed by a noun is probably not the head of an np and will therefore not have a direct gr to the verbthe word itself may be important if it is eg a preposition a pronoun or a clearly temporallocal adverbfeatures 1 and 2 give some indication of the complexity of the structure intervening between the focus and the verbthe more complex this structure the lower the probability that the focus and the verb are relatedcontext further away is less important
than near contextto test the effects of the chunking steps from section 31 on this task we will now construct instances based on more structured input text like that in figure 2this time the focus is described by five features instead of two for the additional information which type of chunk it is in what the preposition is if it is in a pp chunk and what the adverbial function is if anywe still have a context of two elements left one right but elements are now defined to be either chunks or words outside any chunk or punctuationeach chunk in the context is represented by its last word by the pos of the last word and by the type of chunkthe distance feature is adapted to the new definition of element too and instead of counting intervening verbs we now count intervening vp chunksfigure 3 shows the first five instances for the sentence in figure 2 (features 13 give the distance and intervening vps and commas features 4 and 5 show the verb and its pos features 68 911 and 1719 describe the context wordschunks features 1216 the focus chunk empty contexts are indicated by "-" for all features)class value "-" again means the focus is not directly related to the verb according to their information gain values features are ordered in decreasing importance as follows 16 15 12 14 11 2 1 19 10 9 13 18 6 17 8 4 7 3 5comparing this to the earlier feature ordering we see that most of the new features are very important thereby justifying their introductionrelative to the other "old" features the structural features 1 and 2 have gained importance probably because more structure is available in the input to representin principle we would have to construct one instance for each possible pair of a verb and a focus word in the sentencehowever we restrict instances to those where there is at most one other verbvp chunk between the verb and the focus in case the focus precedes the verb and no other verb in case the verb precedes the focusthis restriction allows for example for a relative clause on the subject in the training data 979 of the related pairs fulfill this condition experiments on the training data showed that increasing the admitted number of intervening vp chunks slightly increases recall at the cost of precisionhaving constructed all instances from the test data and from a training set with the same level of partial structure we first train the igtree algorithm and then let it classify the test instancesthen for each test instance that was classified with a grammatical relation we check whether the same verbfocuspair appears with the same relation in the gr list extracted directly from the treebankthis gives us the precision of the classifierchecking the treebank list versus the classified list yields the recallwe have already seen from the example that the level of structure in the input text can influence the composition of the instanceswe are interested in the effects of different sorts of partial structure in the input data on the classification performance of the final classifiertherefore we ran a series of experimentsthe classification task was always that of finding grammatical relations to verbs and performance was always measured by precision and recall on those relations the amount of structure in the input data variedtable 4 shows the results of the experimentsin the first experiment only pos tagged input is usedthen np chunks are addedother sorts of chunks are inserted at each subsequent stepfinally the adverbial function labels are addedwe can see that the more structure we add the better
precision and recall of the grammatical relations get precision increases from 607 to 748 recall from 413 to 679this in spite of the fact that the added information is not always correct because it was predicted for the test material on the basis of the training material by the classifiers described in section 31as we have seen in table 1 especially adjp and advp chunks and adverbial function labels did not have very high precision and recallthere are three ways how two cascaded modules can interact the first module can add information on which the later module can base its decisionsthis is the case between the adverbial functions finder and the relations finderthe former adds an extra informative feature to the instances of the latter cf column two of table 4 the first module can restrict the number of decisions to be made by the second onethis is the case in the combination of the chunking steps and the relations finderwithout the chunker the relations finder would have to decide for every word whether it is the head of a constituent that bears a relation to the verbwith the chunker the relations finder has to make this decision for fewer words namely only for those which are the last word in a chunk resp the preposition of a pp chunkpractically this reduction of the number of decisions can be seen in the third column of table 4 the first module can reduce the number of elements used for the instances by counting one chunk as just one context elementwe can see the effect in the feature that indicates the distance in elements between the focus and the verbthe more chunks are used the smaller the average absolute distance all three effects interact in the cascade we describethe pp chunker reduces the number of decisions for the relations finder introduces an extra feature and changes the context as we already noted above precision and recall are monotonically increasing when adding more structurehowever we note large differences such as np chunks which increase fβ=1 by more than 10 and vp chunks which add another 68 whereas advps and adjps yield hardly any improvementthis may partially be explained by the fact that these chunks are less frequent than the former twopreps on the other hand while hardly reducing the average distance or the number of instances improve fβ=1 by nearly 1pps yield another 11what may come as a surprise is that adverbial functions again increase fβ=1 by nearly 2 despite the fact that fβ=1 for this advfunc assignment step was not very highthis result shows that cascaded modules need not be perfect to be usefulup to now we only looked at the overall resultstable 4 also shows individual fβ=1 values for four selected common grammatical relations subject np direct object np locative pp adjunct and temporal pp adjunctnote that the steps have different effects on the different relations adding nps increases fβ=1 by 113 for subjects resp 162 for objects but only 39 resp 37 for locatives and temporalsadverbial functions are more important for the two adjuncts than for the two complements argamon et al report fβ=1 for subject and object identification of respectively 865 and 830 compared to 818 and 810 in this papernote however that argamon et al do not identify the head of subjects subjects in embedded clauses or subjects and objects related to the verb only through a trace which makes their task easierfor a detailed comparison of the two methods on the same task see that paper also shows that the chunking method proposed here performs about as well as other methods and that
the influence of tagging errors on chunking is less than 1to study the effect of the errors in the lower modules other than the tagger we used "perfect" test data in a last experiment ie data annotated with partial information taken directly from the treebankthe results are shown in table 5 (performance on the output of previous modules in the cascade vs on "perfect" input for pps this means perfect pos tags and chunk labelsboundaries for advfunc additionally perfect pp chunks for gr assignment also perfect advfunc labels)we see that later modules suffer from errors of earlier modules fβ=1 of pp chunking is 92 but could have been 979 if all previous chunks would have been correct for adverbial functions the difference is 35for grammatical relation assignment the last module in the cascade the difference is not surprisingly the largest 79 for chunks only 123 for chunks and advfuncsthe latter percentage shows what could maximally be gained by further improving the chunker and advfuncs finderon realistic data a realistic advfuncs finder improves gr assignment by 19on perfect data a perfect advfuncs finder increases performance by 63in this paper we studied cascaded grammatical relations assignmentwe showed that even the use of imperfect modules improves the overall result of the cascadein future research we plan to also train our classifiers on imperfectly chunked materialthis enables the classifier to better cope with systematic errors in train and test materialwe expect that especially an improvement of the adverbial function assignment will lead to better gr assignmentfinally since cascading proved effective for gr assignment we intend to study the effect of cascading different types of xp chunkers on chunking performancewe might eg first find adjp chunks then use that chunker output as additional input for the np chunker then use the combined output as input to the vp chunker and so onother chunker orderings are possible toolikewise it might be better to find different grammatical relations subsequently instead of simultaneously
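The memory-based learners used throughout this cascade are nearest-neighbour classifiers over symbolic feature vectors, with the overlap metric and (for IB1-IG) information-gain weighting of feature mismatches. The sketch below illustrates that idea only; it is not the implementation used by the authors, and the feature weights are assumed to be precomputed information-gain values.

```python
from collections import Counter

def weighted_overlap_distance(x, y, weights):
    """IB1-IG-style distance: sum the weight (e.g. information gain) of every
    feature on which the two instances disagree."""
    return sum(w for xi, yi, w in zip(x, y, weights) if xi != yi)

class MemoryBasedClassifier:
    """Store all training instances in memory; classify by extrapolating from
    the nearest stored instance(s), as in IB1 / IB1-IG."""

    def __init__(self, weights):
        self.weights = weights          # one weight per feature position
        self.memory = []                # list of (feature_tuple, class_label)

    def train(self, instances, labels):
        self.memory = list(zip(instances, labels))

    def classify(self, instance, k=1):
        neighbours = sorted(
            self.memory,
            key=lambda item: weighted_overlap_distance(instance, item[0], self.weights))
        votes = Counter(label for _, label in neighbours[:k])
        return votes.most_common(1)[0][0]
```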
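A much-simplified version of the instance construction for a verb-focus pair (distance counted in elements, intervening VP chunks and commas, the verb and the focus themselves, plus a small context window of chunks or words) might look as follows; the actual feature layout in the paper is richer, and the field names here are our own.

```python
def gr_instance(elements, verb_idx, focus_idx):
    """Build a simplified feature dict for one verb-focus pair. 'elements' is the
    chunked sentence: a list of dicts with 'word', 'pos' and optional 'chunk' keys."""
    lo, hi = sorted((verb_idx, focus_idx))
    between = elements[lo + 1:hi]

    def describe(i):
        if 0 <= i < len(elements):
            e = elements[i]
            return (e["word"], e["pos"], e.get("chunk", "-"))
        return ("-", "-", "-")          # empty context

    return {
        "distance": focus_idx - verb_idx,     # negative: focus to the left of the verb
        "intervening_vps": sum(1 for e in between if e.get("chunk") == "VP"),
        "intervening_commas": sum(1 for e in between if e["word"] == ","),
        "verb": elements[verb_idx]["word"],
        "verb_pos": elements[verb_idx]["pos"],
        "focus": describe(focus_idx),
        "left_context": [describe(focus_idx - 2), describe(focus_idx - 1)],
        "right_context": [describe(focus_idx + 1)],
    }
```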
W99-0629
cascaded grammatical relation assignmentin this paper we discuss cascaded memorybased grammatical relations assignmentin the first stages of the cascade we find chunks of several types and label them with their adverbial function in the last stage we assign grammatical relations to pairs of chunkswe studied the effect of adding several levels to this cascaded classifier and we found that even the less performing chunkers enhanced the performance of the relation finderwe achieve 712 fscore for grammatical relation assignment on automatically tagged and chunked text after training on about 40000 wall street journal sentences
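Both of the papers above report precision, recall and F(beta=1) over chunks or relations. The sketch below shows one way to decode I/O/B-style chunk tags into spans and to compute the F measure; iob_to_spans is our own helper written for the IOB1 convention described earlier (B only marks a chunk that directly follows another chunk of the same type), not code from either paper.

```python
def iob_to_spans(tags):
    """Decode IOB1 chunk tags (e.g. 'I-NP', 'B-NP', 'O') into (start, end, type) spans."""
    spans, start, ctype = [], None, None
    for i, tag in enumerate(list(tags) + ["O"]):      # sentinel flushes the last chunk
        kind, _, label = tag.partition("-")
        starts_new = kind == "B" or (kind == "I" and label != ctype)
        if ctype is not None and (kind == "O" or starts_new):
            spans.append((start, i, ctype))
            ctype = None
        if kind in ("B", "I") and ctype is None:
            start, ctype = i, label
    return spans

def precision_recall_f(predicted, gold, beta=1.0):
    """F(beta) = (beta^2 + 1) * precision * recall / (beta^2 * precision + recall)."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    f = (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)
    return precision, recall, f
```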