TnT - A Statistical Part-of-Speech Tagger. Trigrams'n'Tags (TnT) is an efficient statistical part-of-speech tagger. Contrary to claims found elsewhere in the literature, we argue that a tagger based on Markov models performs at least as well as other current approaches, including the maximum entropy framework. A recent comparison has even shown that TnT performs significantly better for the tested corpora. We describe the basic model of TnT, the techniques used for smoothing and for handling unknown words. Furthermore, we present evaluations on two corpora. We achieve the automated tagging of a syntactic-structure-based set of grammatical function tags, including phrase-chunk and syntactic-role modifiers, trained in supervised mode from a treebank of German.
TnT - A Statistical Part-of-Speech Tagger

Trigrams'n'Tags (TnT) is an efficient statistical part-of-speech tagger. Contrary to claims found elsewhere in the literature, we argue that a tagger based on Markov models performs at least as well as other current approaches, including the maximum entropy framework. A recent comparison has even shown that TnT performs significantly better for the tested corpora. We describe the basic model of TnT, the techniques used for smoothing and for handling unknown words. Furthermore, we present evaluations on two corpora.

A large number of current language processing systems use a part-of-speech tagger for preprocessing. The tagger assigns a part-of-speech tag to each token in the input and passes its output to the next processing level, usually a parser. Furthermore, there is a large interest in part-of-speech tagging for corpus annotation projects, which create valuable linguistic resources by a combination of automatic processing and human correction. For both applications, a tagger with the highest possible accuracy is required.

The debate about which paradigm solves the part-of-speech tagging problem best is not finished. Recent comparisons of approaches that can be trained on corpora have shown that in most cases statistical approaches yield better results than finite-state, rule-based, or memory-based taggers; they are only surpassed by combinations of different systems forming a "voting tagger". Among the statistical approaches, the maximum entropy framework has a very strong position. Nevertheless, a recent independent comparison of seven taggers has shown that another approach works even better: Markov models combined with a good smoothing technique and with handling of unknown words. This tagger, TnT, not only yielded the highest accuracy, it was also the fastest, both in training and tagging. The tagger comparison was organized as a "black-box test": set the same task to every tagger and compare the outcomes.

This paper describes the models and techniques used by TnT, together with the implementation. The reader will be surprised how simple the underlying model is. The result of the tagger comparison seems to support the maxim "the simplest is the best". However, in this paper we clarify a number of details that are omitted in major previous publications concerning tagging with Markov models. Standard presentations give good overviews of the techniques and equations used for Markov models and part-of-speech tagging, but they are not very explicit in the details that are needed for their application. We argue that it is not only the choice of the general model that determines the result of the tagger but also the various "small" decisions on alternatives. The aim of this paper is to give a detailed account of the techniques used in TnT. Additionally, we present results of the tagger on the NEGRA corpus and the Penn Treebank. The Penn Treebank results reported here for the Markov model approach are at least equivalent to those reported for the maximum entropy approach; for a comparison to other taggers, the reader is referred to the published comparisons.

TnT uses second-order Markov models for part-of-speech tagging. The states of the model represent tags, the outputs represent the words. Transition probabilities depend on the states, and thus on pairs of tags; output probabilities depend only on the most recent category. To be explicit, for a given sequence of words w_1 ... w_T of length T we calculate

argmax over t_1 ... t_T of [ prod_{i=1..T} P(t_i | t_{i-1}, t_{i-2}) P(w_i | t_i) ] P(t_{T+1} | t_T),

where t_1 ... t_T are elements of the tagset, and the additional tags t_{-1}, t_0 and t_{T+1} are beginning-of-sequence and end-of-sequence markers. Using these additional tags, even if they stem from rudimentary processing of punctuation marks, slightly improves tagging results. This is different from formulas presented in other publications, which just stop with a "loose end" at the last word. If sentence boundaries are not marked in the input, TnT adds these tags if it encounters a sentence-final punctuation mark as a token.
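To make the factorisation above concrete, the following sketch (Python; not TnT's actual implementation, and with invented function names) scores one candidate tag sequence for a sentence, including the beginning- and end-of-sequence markers. The transition, emission and end-marker probabilities are assumed to be supplied as callables that already return smoothed, non-zero values.

import math

BOS = "<s>"   # stands in for the beginning-of-sequence tags t_{-1} and t_0

def score_tag_sequence(words, tags, p_trans, p_emit, p_end):
    """Log-probability of a tag sequence under the second-order model:
    prod_i P(t_i | t_{i-1}, t_{i-2}) * P(w_i | t_i), times P(t_{T+1} | t_T)
    for the end-of-sequence marker."""
    logp = 0.0
    t_prev2, t_prev1 = BOS, BOS
    for w, t in zip(words, tags):
        logp += math.log(p_trans(t_prev2, t_prev1, t))   # P(t_i | t_{i-2}, t_{i-1})
        logp += math.log(p_emit(w, t))                   # P(w_i | t_i)
        t_prev2, t_prev1 = t_prev1, t
    logp += math.log(p_end(t_prev1))                     # P(end marker | t_T)
    return logp

In TnT the argmax over all tag sequences is of course found with the Viterbi algorithm (with the beam pruning described later), not by enumerating sequences; the sketch only shows which probabilities enter the product.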
Transition and output probabilities are estimated from a tagged corpus. As a first step, we use the maximum likelihood probabilities P̂ derived from relative frequencies: P̂(t_3) = f(t_3)/N, P̂(t_3 | t_2) = f(t_2, t_3)/f(t_2), P̂(t_3 | t_1, t_2) = f(t_1, t_2, t_3)/f(t_1, t_2), and P̂(w_3 | t_3) = f(w_3, t_3)/f(t_3), for all t_1, t_2, t_3 in the tagset and w_3 in the lexicon; N is the total number of tokens in the training corpus. We define a maximum likelihood probability to be zero if the corresponding nominators and denominators are zero. As a second step, contextual frequencies are smoothed and lexical frequencies are completed by handling words that are not in the lexicon.

Trigram probabilities generated from a corpus usually cannot be used directly because of the sparse-data problem: there are not enough instances of each trigram to reliably estimate its probability. Furthermore, setting a probability to zero because the corresponding trigram never occurred in the corpus has an undesired effect: it causes the probability of a complete sequence to be set to zero if that trigram is needed for a new text sequence, which makes it impossible to rank different sequences containing a zero probability. The smoothing paradigm that delivers the best results in TnT is linear interpolation of unigrams, bigrams and trigrams. We therefore estimate a trigram probability as

P(t_3 | t_1, t_2) = λ_1 P̂(t_3) + λ_2 P̂(t_3 | t_2) + λ_3 P̂(t_3 | t_1, t_2),

where the P̂ are maximum likelihood estimates of the probabilities and λ_1 + λ_2 + λ_3 = 1, so that P again represents a probability distribution. We use the context-independent variant of linear interpolation, i.e. the values of the λs do not depend on the particular trigram. Contrary to intuition, this yields better results than the context-dependent variant. Due to sparse-data problems, one cannot estimate a different set of λs for each trigram, so it is common practice to group trigrams by frequency and estimate tied sets of λs. However, we are not aware of any publication that has investigated frequency groupings for linear interpolation in part-of-speech tagging. All groupings that we tested yielded at most results equivalent to context-independent linear interpolation; some groupings even yielded worse results. The tested groupings included (a) one set of λs for each frequency value and (b) two classes at the two ends of the scale, as well as several groupings in between and several settings for partitioning the classes.

The values of λ_1, λ_2 and λ_3 are estimated by deleted interpolation. This technique successively removes each trigram from the training corpus and estimates the best values for the λs from all other n-grams in the corpus. Given the frequency counts for uni-, bi- and trigrams, the weights can be determined very efficiently, with a processing time linear in the number of different trigrams. The algorithm is given in Figure 1. Note that subtracting 1 means taking unseen data into account; without this subtraction, the model would overfit the training data and would generally yield worse results.
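The following is a minimal sketch of the deleted-interpolation procedure just described (Figure 1 of the paper), assuming plain dictionaries of unigram, bigram and trigram tag counts; the variable names are mine, and ties between the three cases are broken arbitrarily.

def deleted_interpolation(uni, bi, tri, n_tokens):
    """Estimate lambda1, lambda2, lambda3 for the linear interpolation of
    unigram, bigram and trigram tag probabilities. uni[t], bi[(t1, t2)] and
    tri[(t1, t2, t3)] are frequency counts from the training corpus and
    n_tokens is the corpus size N; counts are assumed to be consistent.
    The '- 1' terms delete the current trigram from the counts, so unseen
    data is taken into account."""
    l1 = l2 = l3 = 0.0
    for (t1, t2, t3), f in tri.items():
        if f == 0:
            continue
        # relative frequencies with the current trigram deleted;
        # a fraction with denominator zero is defined to be zero
        c3 = (f - 1) / (bi[(t1, t2)] - 1) if bi[(t1, t2)] > 1 else 0.0
        c2 = (bi[(t2, t3)] - 1) / (uni[t2] - 1) if uni[t2] > 1 else 0.0
        c1 = (uni[t3] - 1) / (n_tokens - 1)
        # credit the whole trigram count to the best-performing estimate
        best = max(c1, c2, c3)
        if best == c3:
            l3 += f
        elif best == c2:
            l2 += f
        else:
            l1 += f
    total = l1 + l2 + l3
    return l1 / total, l2 / total, l3 / total

The smoothed trigram probability is then lambda1 * P̂(t_3) + lambda2 * P̂(t_3 | t_2) + lambda3 * P̂(t_3 | t_1, t_2), exactly as in the interpolation formula above.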
Currently, the method of handling unknown words that seems to work best for inflected languages is a suffix analysis: tag probabilities are set according to the word's ending. The suffix is a strong predictor of word classes; e.g., words in the Wall Street Journal part of the Penn Treebank ending in -able are adjectives in 98% of the cases, while the remaining 2% are nouns. The probability distribution for a particular suffix is generated from all words in the training set that share the same suffix of some predefined maximum length. The term "suffix" as used here means "final sequence of characters of a word", which is not necessarily a linguistically meaningful suffix.

Probabilities are smoothed by successive abstraction. This calculates the probability of a tag t given the last m letters of an n-letter word, P(t | l_{n-m+1}, ..., l_n). The sequence of increasingly more general contexts omits more and more characters of the suffix, such that P(t | l_{n-m+2}, ..., l_n), ..., P(t | l_n), P(t) are used for smoothing. The recursion formula interpolates the maximum likelihood estimate for a suffix of length i with the already smoothed estimate for the next shorter suffix,

P(t | l_{n-i+1}, ..., l_n) = [ P̂(t | l_{n-i+1}, ..., l_n) + θ_i P(t | l_{n-i+2}, ..., l_n) ] / (1 + θ_i),

for i = m, ..., 0, using the maximum likelihood estimates P̂ from frequencies in the lexicon, weights θ_i, and the initialization P(t) = P̂(t). For the Markov model we need the inverse conditional probabilities P(w | t), which are obtained by Bayesian inversion. A theoretically motivated argument uses the standard deviation of the maximum likelihood probabilities for the weights θ_i, but this still leaves room for interpretation.

1) We use the longest suffix that we can find in the training set, but at most 10 characters. This is an empirically determined choice.

2) We use a context-independent approach for θ_i, as we did for the contextual weights λ. It turned out to be a good choice to set all θ_i to the standard deviation of the unconditioned maximum likelihood probabilities of the tags in the training corpus, i.e., for all i = 0, ..., m-1, θ_i is the standard deviation of P̂(t_j) over a tagset of s tags, taken around the average (1/s) Σ_j P̂(t_j). This usually yields values in the range 0.03 to 0.10.

3) We use different estimates for uppercase and lowercase words, i.e., we maintain two different suffix tries depending on the capitalization of the word. This information improves the tagging results.

4) Another freedom concerns the choice of the words in the lexicon that should be used for suffix handling. Should we use all words, or are some of them better suited than others? Accepting that unknown words are most probably infrequent, one can argue that using suffixes of infrequent words in the lexicon is a better approximation for unknown words than using suffixes of frequent words. We therefore restrict suffix handling to words with a frequency smaller than or equal to some threshold value; empirically, 10 turned out to be a good choice for this threshold.

Additional information that turned out to be useful for the disambiguation process for several corpora and tagsets is capitalization information. Tags are usually not informative about capitalization, but probability distributions of tags around capitalized words are different from those around non-capitalized words. The effect is larger for English, which only capitalizes proper names, and smaller for German, which capitalizes all nouns. We use flags c_i that are true if w_i is a capitalized word and false otherwise. These flags are added to the contextual probability distributions: instead of P(t_3 | t_1, t_2) we use P(t_3, c_3 | t_1, c_1, t_2, c_2), and the equations above are updated accordingly. This is equivalent to doubling the size of the tagset and using different tags depending on capitalization.

The processing time of the Viterbi algorithm can be reduced by introducing a beam search. Each state that receives a δ value smaller than the largest δ divided by some threshold value θ is excluded from further processing. While the Viterbi algorithm is guaranteed to find the sequence of states with the highest probability, this is no longer true when beam search is added. Nevertheless, for practical purposes and the right choice of θ, there is virtually no difference between the algorithm with and without a beam. Empirically, a value of θ = 1000 turned out to approximately double the speed of the tagger without affecting the accuracy.
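A small sketch of the unknown-word handling described above, under the following assumptions (all names are mine): only training words with frequency at most 10 feed the suffix statistics, the longest suffix considered is 10 characters, and a single weight theta (the standard deviation of the unconditioned tag probabilities) is used at every level of the successive abstraction. The real tagger additionally keeps separate suffix tries for capitalized and non-capitalized words and inverts the result with Bayes's rule to obtain P(w | t).

from collections import Counter, defaultdict

MAX_SUFFIX = 10   # longest suffix (final character sequence) considered
RARE = 10         # only words with training frequency <= RARE are used

def build_suffix_counts(word_tag_counts):
    """word_tag_counts[(word, tag)] -> training frequency.
    Returns per-suffix tag counts, taken from rare words only;
    the empty suffix (i = 0) collects the unconditioned tag counts."""
    word_freq = Counter()
    for (w, _), f in word_tag_counts.items():
        word_freq[w] += f
    suffix_tags = defaultdict(Counter)
    for (w, t), f in word_tag_counts.items():
        if word_freq[w] > RARE:
            continue
        for i in range(0, min(len(w), MAX_SUFFIX) + 1):
            suffix_tags[w[len(w) - i:]][t] += f
    return suffix_tags

def p_tag_given_suffix(word, tag, suffix_tags, theta):
    """P(tag | suffix of word), smoothed by successive abstraction:
    P_i = (ML_i + theta * P_{i-1}) / (1 + theta), starting from the
    unconditioned distribution and using the longest suffix seen in training."""
    p = suffix_tags[""][tag] / sum(suffix_tags[""].values())
    for i in range(1, min(len(word), MAX_SUFFIX) + 1):
        suffix = word[len(word) - i:]
        if suffix not in suffix_tags:
            break
        counts = suffix_tags[suffix]
        ml = counts[tag] / sum(counts.values())
        p = (ml + theta * p) / (1.0 + theta)
    return p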
The tagger currently tags between 30,000 and 60,000 tokens per second on a Pentium 500 running Linux. The speed mainly depends on the percentage of unknown words and on the average ambiguity rate.

We evaluate the tagger performance under several aspects. First of all, we determine the tagging accuracy averaged over ten iterations. The overall accuracy, as well as separate accuracies for known and unknown words, are measured. Second, learning curves are presented that indicate the performance when using training corpora of different sizes, starting with as few as 1,000 tokens and ranging to the size of the entire corpus. An important characteristic of statistical taggers is that they not only assign tags to words but also probabilities, in order to rank different assignments. We distinguish reliable from unreliable assignments by the quotient of the best and second best assignments: all assignments for which this quotient is larger than some threshold are regarded as reliable, the others as unreliable. (By definition, this quotient is infinite if there is only one possible tag for a given word.) As we will see below, accuracies for reliable assignments are much higher.

The tests are performed on partitions of the corpora that use 90% as training set and 10% as test set, so that the test data is guaranteed to be unseen during training. Each result is obtained by repeating the experiment ten times with different partitions and averaging the single outcomes. In all experiments, contiguous test sets are used; the alternative is a round-robin procedure that puts every 10th sentence into the test set. We argue that contiguous test sets yield more realistic results because completely unseen articles are tagged. Using the round-robin procedure, parts of an article have already been seen, which significantly reduces the percentage of unknown words; we would therefore expect even higher results when testing on every 10th sentence instead of a contiguous 10%.

In the following, accuracy denotes the number of correctly assigned tags divided by the number of tokens in the corpus processed. The tagger is allowed to assign exactly one tag to each token. We distinguish the overall accuracy, taking into account all tokens in the test corpus, and separate accuracies for known and unknown tokens. The latter are interesting since unknown tokens are usually much more difficult to process than known tokens, for which a list of valid tags can be found in the lexicon.

The German NEGRA corpus consists of 20,000 sentences of newspaper text annotated with parts of speech and predicate-argument structures. It was developed at the Saarland University in Saarbrücken; part of it was tagged at the IMS Stuttgart. This evaluation uses only the part-of-speech annotation and ignores the structural annotations. Tagging accuracies for the NEGRA corpus are shown in Table 2. Figure 3 shows the learning curve of the tagger, i.e., the accuracy depending on the amount of training data. Training length is the number of tokens used for training; each training length was tested ten times, with training and test sets randomly chosen and disjoint, and results averaged. The training length is given on a logarithmic scale. It is remarkable that tagging accuracy for known words is very high even for very small training corpora: this means that we have a good chance of getting the right tag if a word is seen at least once during training. Average percentages of unknown tokens are shown in the bottom line of each diagram.
We exploit the fact that the tagger not only determines tags but also assigns probabilities. If there is an alternative that has a probability "close to" that of the best assignment, this alternative can be viewed as almost equally well suited. The notion of "close to" is expressed by the distance of the probabilities, and this in turn is expressed by their quotient: the distance of the probabilities of a best tag t_best and an alternative tag t_alt is expressed by P(t_best)/P(t_alt), which is some value greater than or equal to 1, since the best tag assignment has the highest probability. Figure 4 shows the accuracy when separating assignments with quotients larger and smaller than the threshold. As expected, we find that accuracies for reliable assignments are much higher than for unreliable assignments. This distinction is useful, e.g., for annotation projects during the cleaning process, or during preprocessing, so that the tagger can emit multiple tags if the best tag is classified as unreliable.

We use the Wall Street Journal as contained in the Penn Treebank for our experiments. The annotation consists of four parts: (1) a context-free structure augmented with traces to mark movement and discontinuous constituents, (2) phrasal categories that are annotated as node labels, (3) a small set of grammatical functions that are annotated as extensions to the node labels, and (4) part-of-speech tags. This evaluation uses only the part-of-speech annotation. The Wall Street Journal part of the Penn Treebank consists of approximately 50,000 sentences. Tagging accuracies for the Penn Treebank are shown in Table 5, which reports the percentage of unknown tokens, separate accuracies and standard deviations for known and unknown tokens, as well as the overall accuracy. Figure 6 shows the learning curve of the tagger, i.e., the accuracy depending on the amount of training data. Training length is the number of tokens used for training; each training length was tested ten times, training and test sets were disjoint, and results are averaged. The training length is given on a logarithmic scale. As for the NEGRA corpus, tagging accuracy is very high for known tokens even with small amounts of training data. We again exploit the fact that the tagger not only determines tags but also assigns probabilities: Figure 7 shows the accuracy when separating assignments with quotients larger and smaller than the threshold, and again we find that accuracies for reliable assignments are much higher than for unreliable assignments.

Average part-of-speech tagging accuracy is between 96% and 97%, depending on language and tagset, which is at least on a par with state-of-the-art results found in the literature, possibly better. For the Penn Treebank, the maximum entropy approach of Ratnaparkhi reports an accuracy of 96.6%; our much simpler and therefore faster HMM approach delivers 96.7%. This comparison needs to be re-examined, since we use a ten-fold cross-validation and averaging of results, while Ratnaparkhi only makes one test run. The accuracy for known tokens is significantly higher than for unknown tokens: for the German newspaper data, results are 8.7% better when the word was seen before, and therefore is in the lexicon, than when it was not. Accuracy for known tokens is high even with very small amounts of training data; as few as 1,000 tokens are sufficient to achieve 95-96% accuracy for them. It is important for the tagger to have seen a word at least once during training.
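The reliable/unreliable split used above only needs the best and second-best tag probabilities per token. A minimal sketch (invented names; per-token tag distributions are assumed to be available from the tagger):

def split_by_reliability(tag_distributions, threshold):
    """tag_distributions: one {tag: probability} dict per token.
    The best tag of a token is 'reliable' if P(best) / P(second best)
    is at least `threshold`; if only one tag is possible, the quotient
    is infinite by definition and the assignment is always reliable."""
    reliable, unreliable = [], []
    for dist in tag_distributions:
        ranked = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)
        best_tag, best_p = ranked[0]
        if len(ranked) == 1 or ranked[1][1] == 0.0:
            quotient = float("inf")
        else:
            quotient = best_p / ranked[1][1]
        (reliable if quotient >= threshold else unreliable).append(best_tag)
    return reliable, unreliable

Accuracies can then be measured separately on the two sets, which is how the reliable/unreliable comparison above is obtained.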
Stochastic taggers assign probabilities to tags. We exploit these probabilities to determine the reliability of assignments: for a subset that is determined during processing by the tagger, we achieve accuracy rates of over 99%, while the accuracy of the complement set is much lower. This information can, for example, be exploited in an annotation project to give additional treatment to the unreliable assignments, or to pass selected ambiguities to a subsequent processing step.

We have shown that a tagger based on Markov models yields state-of-the-art results, despite contrary claims found in the literature. For example, the Markov model tagger used in an earlier comparison yielded worse results than all other taggers. In our opinion, a reason for the wrong claim is that the basic algorithms leave several decisions to the implementor. This rather large amount of freedom was not handled in detail in previous publications: handling of start- and end-of-sequence, the exact smoothing technique, how to determine the weights for context probabilities, details on handling unknown words, and how to determine the weights for unknown words. Note that the decisions we made yield good results for both the German and the English corpus; they do so for several other corpora as well. The architecture remains applicable to a large variety of languages.

According to current tagger comparisons, and according to a comparison of the results presented here with those reported elsewhere, the maximum entropy framework seems to be the only other approach yielding results comparable to the one presented here. It is a very interesting future research topic to determine the advantages of either of these approaches, to find the reason for their high accuracies, and to find a good combination of both.

TnT is freely available to universities and related organizations for research purposes.

Many thanks go to Hans Uszkoreit for his support during the development of TnT. Most of the work on TnT was carried out while the author received a grant of the Deutsche Forschungsgemeinschaft in the Graduiertenkolleg Kognitionswissenschaft, Saarbrücken. Large annotated corpora are the prerequisite for developing and testing part-of-speech taggers, and they enable the generation of high-quality language models. Therefore I would like to thank all the people who took the effort to annotate the Penn Treebank, the Susanne corpus, the Stuttgarter Referenzkorpus, the NEGRA corpus, the Verbmobil corpora, and several others. And last but not least, I would like to thank the users of TnT who provided me with bug reports and valuable suggestions for improvements.
A00-1031
Sentence Reduction for Automatic Text Summarization. We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purposes. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries. We study a new method to remove extraneous phrases from sentences by using multiple sources of knowledge to decide which phrases in the sentences can be removed. In our approach, decisions about which material to include or delete in the sentence summaries do not rely on relative frequency information on words, but rather on probability models of subtree deletions that are learned from a corpus of parses for sentences and their summaries.
Sentence Reduction for Automatic Text Summarization
Current automatic summarizers usually rely on sentence extraction to produce summaries. Human professionals also often reuse the input documents to generate summaries; however, rather than simply extracting sentences and stringing them together, as most current summarizers do, humans often "edit" the extracted sentences in some way so that the resulting summary is concise and coherent. We analyzed a set of articles and identified six major operations that can be used for editing the extracted sentences, including removing extraneous phrases from an extracted sentence, combining a reduced sentence with other sentences, syntactic transformation, substituting phrases in an extracted sentence with their paraphrases, substituting phrases with more general or specific descriptions, and reordering the extracted sentences.

We call the operation of removing extraneous phrases from an extracted sentence "sentence reduction". It is one of the most effective operations that can be used to edit the extracted sentences. Reduction can remove material at any granularity: a word, a prepositional phrase, a gerund, a to-infinitive, or a clause. We use the term "phrase" here to refer to any of the above components that can be removed in reduction. The following example shows an original sentence and its reduced form written by a human professional. Original sentence: "When it arrives sometime next year in new TV sets, the V-chip will give parents a new and potentially revolutionary device to block out programs they do not want their children to see." Reduced sentence by humans: "The V-chip will give parents a device to block out programs they do not want their children to see."

We implemented an automatic sentence reduction system. Input to the reduction system includes extracted sentences as well as the original document. Output of reduction are reduced forms of the extracted sentences, which can either be used to produce summaries directly or be merged with other sentences. The reduction system uses multiple sources of knowledge to make reduction decisions, including syntactic knowledge, context, and statistics computed from a training corpus. We evaluated the system against the output of human professionals. The program achieved a success rate of 81.3%, meaning that 81.3% of the reduction decisions made by the system agreed with those of humans.

Sentence reduction improves the conciseness of automatically generated summaries, making them concise and on target. It can also improve the coherence of generated summaries, since extraneous phrases that can potentially introduce incoherence are removed. We collected 500 sentences and their corresponding reduced forms written by humans and found that humans reduced the length of these 500 sentences by 44.2% on average. This indicates that a good sentence reduction system can improve the conciseness of generated summaries significantly.

In the next section, we describe the sentence reduction algorithm in detail. In Section 3, we introduce the evaluation scheme used to assess the performance of the system and present evaluation results. In Section 4, we discuss other applications of sentence reduction, the interaction between reduction and other modules in a summarization system, and related work on sentence simplification. Finally, we conclude.
The goal of sentence reduction is to "reduce without major loss"; that is, we want to remove as many extraneous phrases as possible from an extracted sentence so that it can be concise, but without detracting from the main idea the sentence conveys. Ideally, we want to remove a phrase from an extracted sentence only if it is irrelevant to the main topic. To achieve this, the system relies on multiple sources of knowledge to make reduction decisions. We first introduce the resources in the system and then describe the reduction algorithm.

The corpus. One of the key features of the system is that it uses a corpus consisting of original sentences and their corresponding reduced forms written by humans for training and testing purposes. This corpus was created using an automatic program we have developed to analyze human-written abstracts. The program, called the decomposition program, matches phrases in a human-written summary sentence to phrases in the original document. The human-written abstracts were collected from the free daily news service "Communications-related headlines" provided by the Benton Foundation. The articles in the corpus are news reports on telecommunication-related issues, but they cover a wide range of topics, such as law, labor, and company mergers.

The lexicon. The system also uses WordNet, the largest lexical database to date. It provides lexical relations between words, including synonymy, antonymy, meronymy, entailment, and causation. These lexical links are used to identify the focus in the local context.

The syntactic parser. We use the English Slot Grammar (ESG) parser developed at IBM to analyze the syntactic structure of an input sentence and produce a sentence parse tree. The ESG parser not only annotates the syntactic category of a phrase, it also annotates the thematic role of a phrase.

There are five steps in the reduction program.

Step 1: Syntactic parsing. We first parse the input sentence using the ESG parser and produce the sentence parse tree. The operations in all other steps are performed on this parse tree. Each following step annotates each node in the parse tree with additional information, such as syntactic or context importance, which is used later to determine which phrases can be considered extraneous and thus removed.

Step 2: Grammar checking. In this step, we determine which components of a sentence must not be deleted to keep the sentence grammatical. To do this, we traverse the parse tree produced in the first step in top-down order and mark, for each node in the parse tree, which of its children are grammatically obligatory. We use two sources of knowledge for this purpose. One source includes simple linguistic rules that use the thematic role structure produced by the ESG parser: for instance, for a sentence, the main verb, the subject, and the object are essential if they exist, but a prepositional phrase is not; for a noun phrase, the head noun is essential, but an adjective modifier of the head noun is not. The other source we rely on is the large-scale lexicon described earlier. The information in the lexicon is used to mark the obligatory arguments of verb phrases. For example, the entry for the verb "convince" indicates that the verb can be followed by a noun phrase and a prepositional phrase starting with the preposition "of"; it can also be followed by a noun phrase and a to-infinitive phrase. This information prevents the system from deleting the "of" prepositional phrase or the to-infinitive that is part of the verb phrase. At the end of this step, each node in the parse tree, including both leaf nodes and intermediate nodes, is annotated with a value indicating whether it is grammatically obligatory. Note that whether a node is obligatory is relative to its parent node only: for example, whether a determiner is obligatory is relative to the noun phrase it is in, and whether a prepositional phrase is obligatory is relative to the sentence or the phrase it is in.
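A toy sketch of the Step 2 marking, combining the thematic-role rules with a subcategorization lexicon in the spirit of the "convince" entry above. The node representation, the rule table, and the lexicon format are all invented for illustration; the actual system operates on ESG parse trees and a much richer lexicon.

# hypothetical subcategorization lexicon: head word -> slots that may be required
SUBCAT = {
    "convince": {"obj", "pp-of", "to-inf"},
}

# thematic roles that are always obligatory for their parent if present
ALWAYS_OBLIGATORY = {"main-verb", "subject", "object", "head-noun"}

def mark_obligatory(node):
    """node: {'word': ..., 'role': ..., 'children': [...]}.
    Annotates each child with child['obligatory'] (relative to this parent
    only) and recurses. Anything that could fill a subcategorized slot of
    the head word is kept, erring on the safe side."""
    required = SUBCAT.get(node.get("word", ""), set())
    for child in node.get("children", []):
        child["obligatory"] = (child["role"] in ALWAYS_OBLIGATORY
                               or child["role"] in required)
        mark_obligatory(child)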
Step 3: Context information. In this step, the system decides which components in the sentence are most related to the main topic being discussed. To measure the importance of a phrase in the local context, the system relies on lexical links between words. The hypothesis is that the more connected a word is with other words in the local context, the more likely it is to be the focus of the local context. We link the words in the extracted sentence with words in its local context if they are repetitions, morphologically related, or linked in WordNet through one of the lexical relations. The system then computes an importance score for each word in the extracted sentence, based on the number of links it has with other words and the types of those links. The context importance score for a word w is a weighted sum over link types, Score(w) = Σ_i (L_i × NU_i(w)), where i ranges over the different types of lexical relations the system considers, including repetition, inflectional relation, derivational relation, and the lexical relations from WordNet; L_i is the weight assigned to relation type i, and NU_i(w) is the number of links of type i that the word w has with words in the local context. Relations such as repetition or inflectional relation are considered more important and are assigned higher weights, while relations such as hypernymy are considered less important and assigned lower weights. After an importance score is computed for each word, each phrase in the sentence gets a score by adding up the scores of its children nodes in the parse tree. This score indicates how important the phrase is in the local context.

Step 4: Corpus evidence. The program uses a corpus consisting of sentences reduced by human professionals and their corresponding original sentences to compute how likely humans are to remove a certain phrase. The system first parsed the sentences in the corpus using the ESG parser and then marked which subtrees in these parse trees were removed by humans. Using this corpus of marked parse trees, we can compute how likely a subtree is to be removed from its parent node. For example, we can compute the probability that the "when" temporal clause is removed when the main verb is "give", or the probability that the to-infinitive modifier of the head noun "device" is removed. These probabilities are computed using Bayes's rule: for example, the probability that the "when" temporal clause is removed when the main verb is "give" is computed as the product of the probability of that context given removal and the prior probability of removal, divided by the probability of the context. Besides computing the probability that a phrase is removed, we also compute two other types of probabilities: the probability that a phrase is reduced, and the probability that a phrase is unchanged at all. These corpus probabilities help us capture human practice. For example, for sentences like "The agency reported that ...", "The other source says that ...", or "The new study suggests that ...", the that-clause following the say-verb in each sentence is very rarely changed at all by professionals. The system can capture this human practice, since the probability of the that-clause of the verb "say" or "report" being unchanged will be relatively high, which helps the system avoid removing components in the that-clause. These corpus probabilities are computed beforehand using a training corpus; they are then stored in a table and loaded at running time.
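A sketch of the Step 4 statistics, assuming the training parse trees have already been aligned with the human reductions so that every subtree comes with its outcome ("removed", "reduced", or "unchanged") and a context such as (parent head word, subtree label), as in the "give" / "when"-clause example. All names are illustrative.

from collections import Counter

def corpus_action_probs(marked_subtrees):
    """marked_subtrees: iterable of (context, action) pairs, e.g.
    (("give", "when-clause"), "removed"). Returns a function prob(action, context)
    computed with Bayes's rule:
    P(action | context) = P(context | action) * P(action) / P(context)."""
    action_count, context_count, joint_count = Counter(), Counter(), Counter()
    for context, action in marked_subtrees:
        action_count[action] += 1
        context_count[context] += 1
        joint_count[(context, action)] += 1
    total = sum(action_count.values())

    def prob(action, context):
        if context_count[context] == 0 or action_count[action] == 0:
            return None   # no corpus evidence; fall back to the other knowledge sources
        p_context_given_action = joint_count[(context, action)] / action_count[action]
        p_action = action_count[action] / total
        p_context = context_count[context] / total
        return p_context_given_action * p_action / p_context

    return prob

For example, prob("removed", ("give", "when-clause")) estimates how likely humans are to drop a "when" temporal clause under the main verb "give"; a None result corresponds to the cases where the corpus provides no evidence and the system falls back on grammar and context information.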
Step 5: Final decision. The final reduction decisions are based on the results from all the earlier steps. To decide which phrases to remove, the system traverses the sentence parse tree, which has now been annotated with different types of information from the earlier steps, in top-down order, and decides which subtrees should be removed, reduced, or unchanged. A subtree is removed only if it is not grammatically obligatory, not the focus of the local context, and has a reasonable probability of being removed by humans. Figure 1 shows sample output of the reduction program; the reduced sentences produced by humans are also provided for comparison.

We define a measure called success rate to evaluate the performance of our sentence reduction program. The success rate computes the percentage of system reduction decisions that agree with those of humans. We compute the success rate in the following way. The reduction process can be considered as a series of decisions made along the edges of a sentence parse tree: at each node of the parse tree, both the human and the program make a decision whether to remove the node or to keep it. If a node is removed, the subtree with that node as the root is removed as a whole, so no decisions are needed for the descendants of the removed node. If the node is kept, we consider that node as the root and repeat this process. Suppose we have an input sentence with the parse tree shown in Figure 2. Suppose a human reduces the sentence to a form that can be translated to a series of decisions made along edges in the sentence parse tree, as shown in Figure 3; the symbol "y" along an edge means the node it points to will be kept, and "n" means the node will be removed. Suppose the program reduces the sentence to a form that can be translated similarly to the annotated tree shown in Figure 4. We can see that there are five edges along which both the human and the program made decisions; two out of the five decisions agree, so the success rate is 2/5. The success rate is thus defined as the number of edges along which the human and the program have made the same decision, divided by the total number of edges along which both the human and the program have made decisions. Note that edges along which only the human or only the program has made a decision are not considered in the computation of the success rate, since there is no agreement issue in such cases.
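The success rate just defined can be computed directly from the two annotated trees. A small sketch, assuming every node records the human's and the program's keep/remove decision under invented field names:

def success_rate(root):
    """Walk the parse tree top-down. An edge to a child is counted only when
    both the human and the program actually made a decision for that child,
    i.e. neither of them removed an ancestor. Descendants of a removed child
    are skipped, because no further decisions exist for them."""
    agree = total = 0
    stack = [root]
    while stack:
        node = stack.pop()
        for child in node.get("children", []):
            total += 1
            if child["human_keep"] == child["program_keep"]:
                agree += 1
            if child["human_keep"] and child["program_keep"]:
                stack.append(child)   # both kept it, so both decide on its children
    return agree / total if total else 1.0

On the worked example above (five edges decided by both, two agreements) this returns 2/5, matching the hand computation.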
In the evaluation, we used 400 sentences in the corpus to compute the probabilities that a phrase is removed, reduced, or unchanged. We tested the program on the remaining 100 sentences. Using five-fold validation, the program achieved an average success rate of 81.3%. If we consider the baseline as removing all the prepositional phrases, clauses, to-infinitives, and gerunds, the baseline performance is 43.2%. We also computed the success rate of program decisions on particular types of phrases: for the decisions on removing or keeping a clause, the system has a success rate of 78.1%; for the decisions on removing or keeping a to-infinitive, the system has a success rate of 85.2%. We found that the system has a low success rate on removing adjectives of noun phrases or removing adverbs of a sentence or a verb phrase. One reason for this is that our probability model can hardly capture the dependencies between a particular adjective and the head noun, since the training corpus is not large enough, while the other sources of information, including grammar or context information, provide little evidence on whether an adjective or an adverb should be removed. Given that whether or not an adjective or an adverb is removed does not affect the conciseness of the sentence significantly, and that the system lacks reliability in making such decisions, we decided not to remove adjectives and adverbs.

On average, the system reduced the length of the 500 sentences by 32.7%, while humans reduced it by 41.8%. The probabilities we computed from the training corpus covered 58% of instances in the test corpus; when the corpus probability is absent for a case, the system makes decisions based on the other two sources of knowledge.

Some of the errors made by the system result from errors by the syntactic parser. We randomly checked 50 sentences and found that 8% of the errors made by the system are due to parsing errors. There are two main reasons for this relatively low percentage of errors resulting from mistakes in parsing. One reason is that we have taken some special measures to avoid errors introduced by mistakes in parsing. For example, PP attachment is a difficult problem in parsing, and it is not rare that a PP is wrongly attached; therefore, we take this into account when marking the obligatory components using subcategorization knowledge from the lexicon: we look not only at the PPs that are attached to a verb phrase, but also at PPs that are next to the verb phrase but not attached, in case they are part of the verb phrase. We also wrote a preprocessor to deal with particular structures that the parser often has problems with, such as appositions. The other reason is that parsing errors do not always result in reduction errors. For example, given a sentence "The spokesperson of the university said that ...", although the that-clause in the sentence may have a complicated structure and the parser gets it wrong, the reduction system is not necessarily affected, since it may decide in this case to keep the that-clause as it is, as humans often do, so the parsing errors will not matter in this example.

The reduction algorithm we present assumes generic summarization; that is, we want to generate a summary that includes the most important information in an article. We can tailor the reduction system to query-based summarization. In that case, the task of the reduction is not to remove phrases that are extraneous in terms of the main topic of an article, but phrases that are not very relevant to users' queries. We extended our sentence reduction program to query-based summarization by adding another step in the algorithm to measure the relevance of users' queries to phrases in the sentence. In the last step of reduction, when the system makes the final decision, the relevance of a phrase to the query is taken into account together with syntactic, context, and corpus information.

Ideally, the sentence reduction module should interact with other modules in a summarization system. It should be able to send feedback to the extraction module if it finds that a sentence selected by the extraction module may be inappropriate. It should also be able to interact with the modules that run after it, such as the sentence combination module, so that it can revise reduction decisions according to the feedback from these modules.
Some researchers have suggested removing phrases or clauses from sentences for certain applications: one proposal removes phrases in sentences to produce a telegraphic text that can be used to provide an audio scanning service for the blind; another removes clauses in sentences before indexing documents for information retrieval. Both studies removed phrases based only on their syntactic categories, while the focus of our system is on deciding when it is appropriate to remove a phrase. Other researchers have worked on the text simplification problem, which usually involves simplifying text but not removing any phrases: for example, simplifying newspaper text by replacing uncommon words with common words, or replacing complicated syntactic structures with simpler structures, to assist people with reading disabilities, or text simplification in general. The difference between these studies on text simplification and our system is that a text simplification system usually does not remove anything from an original sentence, although it may change its structure or words, whereas our system removes extraneous phrases from the extracted sentences.

We have presented a novel sentence reduction system which removes extraneous phrases from sentences that are extracted from an article in text summarization. The deleted phrases can be prepositional phrases, clauses, to-infinitives, or gerunds, and multiple phrases can be removed from a single sentence. The focus of this work is on determining, for a sentence in a particular context, which phrases in the sentence are less important and can be removed. Our system makes intelligent reduction decisions based on multiple sources of knowledge, including syntactic knowledge, context, and probabilities computed from corpus analysis. We also created a corpus consisting of 500 sentences and their reduced forms produced by human professionals, and used this corpus for training and testing the system. The evaluation shows that 81.3% of the reduction decisions made by the system agreed with those of humans. In the future, we would like to integrate our sentence reduction system with extraction-based summarization systems other than the one we have developed, improve the performance of the system further by introducing other sources of knowledge necessary for reduction, and explore other interesting applications of the reduction system.

This material is based upon work supported by the National Science Foundation under grants No. IRI 9619124 and IRI 9618797. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
A00-1043
Advances in Domain Independent Linear Text Segmentation. This paper describes a method for linear text segmentation which is twice as accurate and over seven times as fast as the state of the art. Inter-sentence similarity is replaced by rank in the local context. Boundary locations are discovered by divisive clustering. We design an artificial dataset built by concatenating short pieces of text that have been extracted from the Brown corpus.
Advances in Domain Independent Linear Text Segmentation

This paper describes a method for linear text segmentation which is twice as accurate and over seven times as fast as the state of the art. Inter-sentence similarity is replaced by rank in the local context. Boundary locations are discovered by divisive clustering.

Even moderately long documents typically address several topics, or different aspects of the same topic. The aim of linear text segmentation is to discover the topic boundaries. The uses of this procedure include information retrieval, summarization, text understanding, anaphora resolution, language modelling, and improving document navigation for the visually disabled. This paper focuses on domain-independent methods for segmenting written text. We present a new algorithm that builds on previous work by Reynar. The primary distinction of our method is the use of a ranking scheme and the cosine similarity measure in formulating the similarity matrix. We propose that the similarity values of short text segments are statistically insignificant; thus, one can only rely on their order, or rank, for clustering.

Existing work falls into one of two categories: lexical cohesion methods and multi-source methods. The former stem from the work of Halliday and Hasan, who proposed that text segments with similar vocabulary are likely to be part of a coherent topic segment. Implementations of this idea use word stem repetition, context vectors, entity repetition, semantic similarity, a word distance model, and a word frequency model to detect cohesion. Methods for finding the topic boundaries include sliding windows, lexical chains, dynamic programming, agglomerative clustering, and divisive clustering. Lexical cohesion methods are typically used for segmenting written text in a collection to improve information retrieval. Multi-source methods combine lexical cohesion with other indicators of topic shift, such as cue phrases, prosodic features, reference, syntax, and lexical attraction, using decision trees and probabilistic models. Work in this area is largely motivated by the topic detection and tracking (TDT) initiative; the focus is on the segmentation of transcribed spoken text and broadcast news stories, where the presentation format and regular cues can be exploited to improve accuracy.

Our segmentation algorithm takes a list of tokenized sentences as input. A tokenizer and a sentence boundary disambiguation algorithm, or EAGLE, may be used to convert a plain text document into the acceptable input format. Punctuation and uninformative words are removed from each sentence using a simple regular expression pattern matcher and a stopword list. A stemming algorithm is then applied to the remaining tokens to obtain the word stems. A dictionary of word stem frequencies is constructed for each sentence; this is represented as a vector of frequency counts. Let f_{i,j} denote the frequency of word j in sentence i. The similarity between a pair of sentences x, y is computed with the cosine measure:

sim(x, y) = Σ_j f_{x,j} f_{y,j} / sqrt( Σ_j f_{x,j}^2 × Σ_j f_{y,j}^2 ).

For short text segments, the absolute value of sim is unreliable: an additional occurrence of a common word causes a disproportionate increase in sim unless the denominator is large. Thus, in the context of text segmentation, where a segment typically has fewer than 100 informative tokens, one can only use the metric to estimate the order of similarity between sentences, e.g. that a is more similar to b than to c. Furthermore, language usage varies throughout a document; for instance, the introduction section of a document is less cohesive than a section about a particular topic. Consequently, it is inappropriate to directly compare similarity values from different regions of the similarity matrix.
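A small sketch of the similarity computation described above, assuming each sentence has already been reduced to a list of word stems with stopwords removed (names are mine):

import math
from collections import Counter

def cosine_sim(stems_x, stems_y):
    """sim(x, y) = sum_j f_xj * f_yj / sqrt(sum_j f_xj^2 * sum_j f_yj^2),
    where f_ij is the frequency of stem j in sentence i."""
    fx, fy = Counter(stems_x), Counter(stems_y)
    dot = sum(fx[w] * fy[w] for w in fx if w in fy)
    norm = math.sqrt(sum(v * v for v in fx.values()) *
                     sum(v * v for v in fy.values()))
    return dot / norm if norm else 0.0

def similarity_matrix(sentences):
    """sentences: list of stem lists. Builds the n x n matrix that the
    ranking step below operates on."""
    n = len(sentences)
    return [[cosine_sim(sentences[i], sentences[j]) for j in range(n)]
            for i in range(n)]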
In non-parametric statistical analysis, one compares the rank of data sets when the qualitative behaviour is similar but the absolute quantities are unreliable. We present a ranking scheme which is an adaptation of one described in the image processing literature. (In Figure 1, which shows a similarity matrix as an image, the contrast has been adjusted to highlight the image features.) Each value in the similarity matrix is replaced by its rank in the local region; the rank is the number of neighbouring elements with a lower similarity value. Figure 2 shows an example of image ranking using a 3 x 3 rank mask with output range 0 to 8. For segmentation, we used an 11 x 11 rank mask. The output is expressed as a ratio r, the number of neighbouring elements with a lower value divided by the number of neighbouring elements examined, to circumvent normalisation problems where the mask extends beyond the matrix. To demonstrate the effect of image ranking, the process was applied to the matrix shown in Figure 1 to produce Figure 3; notice that the contrast has been improved significantly. Figure 4 illustrates the more subtle effects of our ranking scheme: r(x) is the rank of f(x), which is a sine wave with decaying mean, amplitude and frequency.

The final process determines the location of the topic boundaries. The method is based on Reynar's maximisation algorithm. A text segment is defined by two sentences i, j; this is represented as a square region along the diagonal of the rank matrix. Let s_{i,j} denote the sum of the rank values in such a segment and a_{i,j} = (j - i + 1)^2 be its inside area. B = {b_1, ..., b_m} is a list of m coherent text segments; s_k and a_k refer to the sum of rank and the area of segment k in B, and D = (Σ_k s_k) / (Σ_k a_k) is the inside density of B. To initialise the process, the entire document is placed in B as one coherent text segment. Each step of the process splits one of the segments in B; the split point is a potential boundary which maximises D. Figure 5 shows a working example.

The number of segments to generate, m, is determined automatically. D(n) is the inside density of n segments and δD(n) = D(n) - D(n-1) is the gradient. For a document with b potential boundaries, b steps of divisive clustering generate {D(1), ..., D(b)} and {δD(2), ..., δD(b)}. An unusually large reduction in δD suggests the optimal clustering has been obtained.

The running time of each step is dominated by the computation of s_k. Given that s_{i,j} is constant, our algorithm precomputes all the values to improve speed performance. The procedure computes the values along diagonals, starting from the main diagonal and working towards the corner; the method has a complexity of order n^2. Let r_{i,j} refer to a value in the rank matrix R and S to the sum-of-rank matrix. Given R of size n x n, S is computed in three steps. Figure 8 shows the result of applying this procedure to the rank matrix in Figure 5.

The definition of a topic segment ranges from complete stories to summaries. Given that the quality of an algorithm is task dependent, the following experiments focus on relative performance. Our evaluation strategy is a variant of that used in previous work and in the TDT segmentation task: we assume a good algorithm is one that finds the most prominent topic boundaries.

An artificial test corpus of 700 samples is used to assess the accuracy and speed performance of segmentation algorithms. A sample is a concatenation of ten text segments; a segment is the first n sentences of a randomly selected document from the Brown corpus, and a sample is characterised by the range of n. The corpus was generated by an automatic procedure (Table 1 presents the corpus statistics). Speed performance is measured by the average number of CPU seconds required to process a test sample. Segmentation accuracy is measured by a previously proposed error metric; a low error probability indicates high accuracy.
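A sketch of the ranking transform described above (names are mine): each similarity value is replaced by the proportion of its neighbours, inside a mask of the given size, that have a strictly lower value; using the ratio of neighbours actually examined avoids normalisation problems where the mask sticks out over the matrix border.

def rank_matrix(sim, mask_size=11):
    """sim: n x n similarity matrix (list of lists). Returns the rank matrix
    used for divisive clustering: rank[i][j] = (# neighbours with a lower
    similarity) / (# neighbours examined) within the mask centred on (i, j)."""
    n = len(sim)
    r = mask_size // 2
    rank = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            lower = examined = 0
            for a in range(max(0, i - r), min(n, i + r + 1)):
                for b in range(max(0, j - r), min(n, j + r + 1)):
                    if a == i and b == j:
                        continue
                    examined += 1
                    if sim[a][b] < sim[i][j]:
                        lower += 1
            rank[i][j] = lower / examined if examined else 0.0
    return rank

The boundaries are then found by the divisive clustering step described above: starting from the whole document as a single segment, repeatedly split at the potential boundary that maximises the inside density D = Σ s_k / Σ a_k, and stop when the drop in the density gradient indicates the optimal segmentation.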
probability indicates high accuracyother performance measures include the popular precision and recall metric fuzzy pr and edit distance the problems associated with these metrics are discussed in five degenerate algorithms define the baseline for the experimentsb does not propose any boundariesb reports all potential boundaries as real boundariesb partitions the sample into regular segmentsb randomly selects any number of boundaries as real boundariesb randomly selects b boundaries as real boundariesthe accuracy of the last two algorithms are computed analyticallywe consider the status of in potential boundaries as a bit string the terms p awl p in equation 6 corresponds to p and p 1 pequation 7 8 and 9 gives the general form of p b and berm respectivelytable 2 presents the experimental resultsthe values in row two and three four and five are not actually the samehowever their differences are insignificant according to the kolmogorovsmirnov or kstest we compare three versions of the texttiling algorithm h94 is hearst c implementation with default parametersh94 uses the recommended parameters k 6 w 20h94 is my implementation of the algorithmexperimental result shows h94 and h94 are more accurate than h94we suspect this is due to the use of a different stopword list and stemming algorithmfive versions of reynar optimisation algorithm were evaluatedr98 and r98 are exact implementations of his maximisation and minimisation algorithmr98 is my version of the maximisation algorithm which uses the cosine coefficient instead of dot density for measuring similarityit incorporates the optimisations described in section 34r98 is the modularised version of r98 for experimenting with different similarity measuresr98 uses a variant of kozima semantic similarity measure to compute block similarityword similarity is a function of word cooccurrence statistics in the given documentwords that belong to the same sentence are considered to be relatedgiven the cooccurrence frequencies f the transition probability matrix t is computed by equation 10equation 11 defines our spread activation scheme s denotes the word similarity matrix x is the number of activation steps and norm converts a matrix y into a transition matrix x 5 was used in the experimentexperimental result shows the cosine coefficient and our spread activation method improved segmentation accuracythe speed optimisations significantly reduced the execution timewe compare three versions of segmenter k98 is the original perl implementation of the algorithm k98 is my implementation of the algorithmk98 is a version of k98 which uses a document specific chain breaking strategythe distribution of link distances are used to identify unusually long linksthe threshold is a function p c x vt of the mean p and variance we found c 1 works well in practicetable 5 summarises the experimental resultsk98 performed performed significantly better than k98this is due to the use of a different partofspeech tagger and shallow parserthe difference in speed is largely due to the programming languages and term clustering strategiesour chain breaking strategy improved accuracy with k98 two versions of our algorithm were developed c99 and c99the former is an exact implementation of the algorithm described in this paperthe latter is given the expected number of topic segments for fair comparison with r98both algorithms used a 11 x 11 ranking maskthe first experiment focuses on the impact of our automatic termination strategy on c99 c99 is marginally more accurate than c99this 
indicates our automatic termination strategy is effective but not optimalthe minor reduction in speed performance is acceptablethe second experiment investigates the effect of different ranking mask size on the performance of c99 execution time increases with mask sizea 1 x 1 ranking mask reduces all the elements in the rank matrix to zerointerestingly the increase in ranking mask size beyond 3 x 3 has insignificant effect on segmentation accuracythis suggests the use of extrema for clustering has a greater impact on accuracy than linearising the similarity scores experimental result shows our algorithm c99 is more accurate than existing algorithmsa twofold increase in accuracy and sevenfold increase in speed was achieved with r98if one disregards segmentation accuracy h94 has the best algorithmic performance c99 k98 and r98 are all polynomial time algorithmsthe significance of our results has been confirmed by both ttest and kstesta segmentation algorithm has two key elements a clustering strategy and a similarity measureour results show divisive clustering is more precise than sliding window and lexical chains for locating topic boundariesfour similarity measures were examinedthe cosine coefficient and dot density measure yield similar resultsour spread activation based semantic measure improved accuracythis confirms that although kozima approach is computationally expensive it does produce more precise segmentationthe most significant improvement was due to our ranking scheme which linearises the cosine coefficientour experiments demonstrate that given insufficient data the qualitative behaviour of the cosine measure is indeed more reliable than the actual valuesalthough our evaluation scheme is sufficient for this comparative study further research requires a large scale task independent benchmarkit would be interesting to compare c99 with the multisource method described in using the tdt corpuswe would also like to develop a linear time and multisource version of the algorithmthis paper has benefitted from the comments of mary mcgee wood and the anonymous reviewersthanks are due to my parents and department for making this work possible jeffrey reynar for discussions and guidance on the segmentation problem hideki kozima for help on the spread activation measure minyen kan and marti hearst for their segmentation algorithms daniel oram for references to image processing techniques magnus rattray and stephen marsland for help on statistics and mathematics
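To make the segmentation procedure above concrete, here is a minimal Python sketch of its two core steps: the local ranking transform applied to the sentence-similarity matrix, and the density-maximising divisive clustering that places topic boundaries. It assumes a similarity matrix has already been computed (e.g. cosine similarity between sentence vectors); the function names, the naive per-split recomputation of segment sums, and the caller-side choice of segment count are simplifications for illustration, not the optimised original implementation.

```python
import numpy as np

def rank_transform(sim, mask=11):
    """Replace each similarity value by the ratio of neighbouring elements
    (inside a mask x mask region) whose similarity is lower."""
    n = sim.shape[0]
    r = mask // 2
    rank = np.zeros_like(sim, dtype=float)
    for i in range(n):
        for j in range(n):
            region = sim[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            # a ratio rather than a raw count sidesteps normalisation
            # problems for elements near the edge of the matrix
            rank[i, j] = (region < sim[i, j]).sum() / max(region.size - 1, 1)
    return rank

def divisive_segmentation(rank, n_segments):
    """Greedy divisive clustering: repeatedly insert the boundary that
    maximises inside density D = (sum of segment rank mass) / (sum of
    segment areas), until n_segments segments remain."""
    n = rank.shape[0]
    bounds = [0, n]          # segment k spans sentences bounds[k]..bounds[k+1]-1
    while len(bounds) - 1 < n_segments:
        best = None
        for split in range(1, n):
            if split in bounds:
                continue
            b = sorted(bounds + [split])
            s = sum(rank[b[k]:b[k + 1], b[k]:b[k + 1]].sum() for k in range(len(b) - 1))
            a = sum((b[k + 1] - b[k]) ** 2 for k in range(len(b) - 1))
            if best is None or s / a > best[0]:
                best = (s / a, split)
        bounds = sorted(bounds + [best[1]])
    return bounds
```

In the algorithm described above the number of segments is instead chosen automatically from an unusually large drop in the density gradient, and the segment rank sums are precomputed along the diagonals so that each clustering step does not rescan the matrix.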
A00-2004
A simple approach to building ensembles of naive Bayesian classifiers for word sense disambiguation. This paper presents a corpus-based approach to word sense disambiguation that builds an ensemble of naive Bayesian classifiers, each of which is based on lexical features that represent co-occurring words in varying sized windows of context. Despite the simplicity of this approach, empirical results disambiguating the widely studied nouns line and interest show that such an ensemble achieves accuracy rivaling the best previously published results. We present an ensemble of eighty-one naive Bayesian classifiers based on varying sized windows of context to the left and right of the target word that define co-occurrence features.
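As a concrete illustration of the feature representation sketched in this summary (and detailed below), the following Python fragment builds the binary co-occurrence features for one left/right window combination. The tokenisation, the example sentence, and the function name are illustrative assumptions rather than details of the original system.

```python
def window_features(tokens, target_index, left, right, vocabulary):
    """Binary features: does each vocabulary word occur within `left` words
    before or `right` words after the ambiguous word?  Punctuation and
    capitalisation are assumed to have been stripped already."""
    lo = max(0, target_index - left)
    hi = min(len(tokens), target_index + right + 1)
    window = set(tokens[lo:target_index]) | set(tokens[target_index + 1:hi])
    return [1 if w in window else 0 for w in vocabulary]

# Features for "line" with a 2-word left window and a 3-word right window.
sent = "the production line at the plant was shut down".split()
vocab = ["production", "plant", "telephone", "shut"]
print(window_features(sent, sent.index("line"), 2, 3, vocab))   # -> [1, 1, 0, 0]
```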
a simple approach to building ensembles of naive bayesian classifiers for word sense disambiguation this paper presents a corpusbased approach to word sense disambiguation that builds an ensemble of naive bayesian classifiers each of which is based on lexical features that represent cooccurring words in varying sized windows of context despite the simplicity of this approach empirical results disamthe widely studied nouns show that such an ensemble achieves accuracy rivaling the best previously published results word sense disambiguation is often cast as a problem in supervised learning where a disambiguator is induced from a corpus of manually sensetagged text using methods from statistics or machine learningthese approaches typically represent the context in which each sensetagged instance of a word occurs with a set of linguistically motivated featuresa learning algorithm induces a representative model from these features which is employed as a classifier to perform disambiguationthis paper presents a corpusbased approach that results in high accuracy by combining a number of very simple classifiers into an ensemble that performs disambiguation via a majority votethis is motivated by the observation that enhancing the feature set or learning algorithm used in a corpusbased approach does not usually improve disambiguation accuracy beyond what can be attained with shallow lexical features and a simple supervised learning algorithmfor example a naive bayesian classifier is based on a blanket assumption about the interactions among features in a sensetagged corpus and does not learn a representative modeldespite making such an assumption this proves to be among the most accurate techniques in comparative studies of corpusbased word sense disambiguation methodologies these studies represent the context in which an ambiguous word occurs with a wide variety of featureshowever when the contribution of each type of feature to overall accuracy is analyzed shallow lexical features such as cooccurrences and collocations prove to be stronger contributors to accuracy than do deeper linguistically motivated features such as partofspeech and verbobject relationshipsit has also been shown that the combined accuracy of an ensemble of multiple classifiers is often significantly greater than that of any of the individual classifiers that make up the ensemble in natural language processing ensemble techniques have been successfully applied to part ofspeech tagging and parsing when combined with a history of disambiguation success using shallow lexical features and naive bayesian classifiers these findings suggest that word sense disambiguation might best be improved by combining the output of a number of such classifiers into an ensemblethis paper begins with an introduction to the naive bayesian classifierthe features used to represent the context in which ambiguous words occur are presented followed by the method for selecting the classifiers to include in the ensemblethen the line and interesi data is describedexperimental results disambiguating these words with an ensemble of naive bayesian classifiers are shown to rival previously published resultsthis paper closes with a discussion of the choices made in formulating this methodology and plans for future worka naive bayesian classifier assumes that all the feature variables representing a problem are conditionally independent given the value of a classification variablein word sense disambiguation the context in which an ambiguous word occurs is 
represented by the feature variables and the sense of the ambiguous word is represented by the classification variable in this paper all feature variables fi are binary and represent whether or not a particular word occurs within some number of words to the left or right of an ambiguous word ie a window of contextfor a naive bayesian classifier the joint probability of observing a certain combination of contextual features with a particular sense is expressed as the parameters of this model are p and fils the sufficient statistics ie the summaries of the data needed for parameter estimation are the frequency counts of the events described by the interdependent variables in this paper these counts are the number of sentences in the sensetagged text where the word represented by fi occurs within some specified window of context of the ambiguous word when it is used in sense s any parameter that has a value of zero indicates that the associated word never occurs with the specified sense valuethese zero values are smoothed by assigning them a very small default probabilityonce all the parameters have been estimated the model has been trained and can be used as a classifier to perform disambiguation by determining the most probable sense for an ambiguous word given the context in which it occursthe contextual features used in this paper are binary and indicate if a given word occurs within some number of words to the left or right of the ambiguous wordno additional positional information is contained in these features they simply indicate if the word occurs within some number of surrounding wordspunctuation and capitalization are removed from the windows of contextall other lexical items are included in their original form no stemming is performed and noncontent words remainthis representation of context is a variation on the bagofwords feature set where a single window of context includes words that occur to both the left and right of the ambiguous wordan early use of this representation is described in where word sense disambiguation is performed with a naive bayesian classifierthe work in this paper differs in that there are two windows of context one representing words that occur to the left of the ambiguous word and another for those to the rightthe left and right windows of context have nine different sizes 0 1 2 3 4 5 10 25 and 50 wordsthe first step in the ensemble approach is to train a separate naive bayesian classifier for each of the 81 possible combination of left and right window sizesnaive_bayes represents a classifier where the model parameters have been estimated based on frequency counts of shallow lexical features from two windows of context one including 1 words to the left of the ambiguous word and the other including r words to the rightnote that naive_bayes includes no words to the left or right this classifier acts as a majority classifier that assigns every instance of an ambiguous word to the most frequent sense in the training dataonce the individual classifiers are trained they are evaluated using previously heldout test datathe crucial step in building an ensemble is selecting the classifiers to include as membersthe approach here is to group the 81 naive bayesian classifiers into general categories representing the sizes of the windows of contextthere are three such ranges narrow corresponds to windows 0 1 and 2 words wide medium to windows 3 4 and 5 words wide and wide to windows 10 25 and 50 words widethere are nine possible range categories since there are 
separate left and right windowsfor example naive_bayes belongs to the range category since it is based on a one word window to the left and a three word window to the rightthe most accurate classifier in each of the nine range categories is selected for inclusion in the ensembleeach of the nine member classifiers votes for the most probable sense given the particular context represented by that classifier the ensemble disambiguates by assigning the sense that receives a majority of the votesthe line data was created by by tagging every occurrence of line in the acldci wall street journal corpus and the american printing house for the blind corpus with one of six possible wordnet sensesthese senses and their frequency distribution are shown in table 1this data has since been used in studies by and in that work as well as in this paper a subset of the corpus is utilized such that each sense is uniformly distributed this reduces the accuracy of the majority classifier to 17the uniform distribution is created by randomly sampling 349 sensetagged examples from each sense resulting in a training corpus of 2094 sensetagged sentencesthe interest data was created by by tagging all occurrences of interest in the acldci wall street journal corpus with senses from the longman dictionary of contemporary englishthis data set was subsequently used for word sense disambiguation experiments by and the previous studies and this paper use the entire 2368 sensetagged sentence corpus in their experimentsthe senses and their fresense count product 2218 written or spoken text 405 telephone connection 429 formation of people or things queue 349 an artificial division boundary 376 a thin flexible object cord 371 total 4148 table 1 distribution of senses for line the experiments in this paper and previous work use a uniformly distributed subset of this corpus where each sense occurs 349 times sense count money paid for the use of money 1252 a share in a company or business 500 readiness to give attention 361 advantage advancement or favor 178 activity that one gives attention to 66 causing attention to be given to 11 total 2368 table 2 distribution of senses for interest the experiments in this paper and previous work use the entire corpus where each sense occurs the number of times shown above quency distribution are shown in table 2unlike line the sense distribution is skewed the majority sense occurs in 53 of the sentences while the smallest minority sense occurs in less than 1eightyone naive bayesian classifiers were trained and tested with the line and interest datafivefold cross validation was employed all of the sensetagged examples for a word were randomly shuffled and divided into five equal foldsfour folds were used to train the naive bayesian classifier while the remaining fold was randomly divided into two equal sized test setsthe first devtest was used to evaluate the individual classifiers for inclusion in the ensemblethe second test was used to evaluate the accuracy of the ensemblethus the training data for each word consists of 80 of the available sensetagged text while each of the test sets contains 10this process is repeated five times so that each fold serves as the source of the test data oncethe average accuracy of the individual naive bayesian classifiers across the five folds is reported in tables 3 and 4the standard deviations were between 01 and 025 and are not shown given their relative consistencyeach classifier is based upon a distinct representation of context since each employs a 
different combination of right and left window sizesthe size and range of the left window of context is indicated along the horizontal margin in tables 3 and 4 while the right window size and range is shown along the vertical marginthus the boxes that subdivide each table correspond to a particular range categorythe classifier that achieves the highest accuracy in each range category is included as a member of the ensemblein case of a tie the classifier with the smallest total window of context is included in the ensemblethe most accurate single classifier for line is naive_bayes which attains accuracy of 84 the accuracy of the ensemble created from the most accurate classifier in each of the range categories is 88the single most accurate classifier for interest is naive_bayes which attains accuracy of 86 while the ensemble approach reaches 89the increase in accuracy achieved by both ensembles over the best individual classifier is statistically significant as judged by mcnemar test with p 01these experiments use the same sensetagged corpora for interest and line as previous studiessummaries of previous results in tables 5 and 6 show that the accuracy of the naive bayesian ensemble is comparable to that of any other approachhowever due to variations in experimental methodologies it can not be concluded that the differences among the most accurate methods are statistically significantfor example in this work fivefold cross validation is employed to assess accuracy while train and test using 100 randomly sampled sets of datasimilar differences in training and testing methodology exist among the other studiesstill the results in this paper are encouraging due to the simplicity of the approachthe interest data was first studied by they employ a representation of context that includes the partofspeech of the two words surrounding interest a morphological feature indicating whether or not interest is singular or plural and the three most statistically significant cooccurring words in the sentence with interest as determined by a test of independencethese features are abbreviated as pos morph and cooccur in table 5a decomposable probabilistic model is induced from the sensetagged corpora using a backward sequential search where candidate models are evaluated with the loglikelihood ratio testthe selected model was used as a probabilistic classifier on a heldout set of test data and achieved accuracy of 78the interest data was included in a study by who represent the context of an ambiguous word with the partofspeech of three words to the left and right of interest a morphological feature indicating if interest is singular or plural an unordered set of frequently occurring keywords that surround interest local collocations that include interest and verbobject syntactic relationshipsthese features are abbreviated pos morph cooccur collocates and verbobj in table 5a nearestneighbor classifier was employed and achieved an average accuracy of 87 over repeated trials using randomly drawn training and test sets and present studies that utilize the original bruce and wiebe feature set and include the interest datathe first compares a range of probabilistic model selection methodologies and finds that none outperform the naive bayesian classifier which attains accuracy of 74the second compares a range of machine learning algorithms and finds that a decision tree learner and a naive bayesian classifier are most accuratethe line data was first studied by they evaluate the disambiguation accuracy of a naive 
bayesian classifier a content vector and a neural networkthe context of an ambiguous word is represented by a bagofwords where the window of context is two sentences widethis feature set is abbreviated as 2 sentence bow in table 6when the naive bayesian classifier is evaluated words are not stemmed and capitalization remainshowever with the content vector and the neural network words are stemmed and words from a stoplist are removedthey report no significant differences in accuracy among the three approaches the naive bayesian classifier achieved 71 accuracy the content vector 72 and the neural network 76the line data was studied again by where seven different machine learning methodologies are comparedall learning algorithms represent the context of an ambiguous word using the bagofwords with a two sentence window of contextin these experiments words from a stoplist are removed capitalization is ignored and words are stemmedthe two most accurate methods in this study proved to be a naive bayesian classifier and a perceptron the line data was recently revisited by both and the former take an ensemble approach where the output from two neural networks is combined one network is based on a representation of local context while the other represents topical contextthe latter utilize a naive bayesian classifierin both cases context is represented by a set of topical and local featuresthe topical features correspond to the openclass words that occur in a two sentence window of contextthe local features occur within a window of context three words to the left and right of the ambiguous word and include cooccurrence features as well as the partofspeech of words in this windowthese features are represented as local topical bow and pos in table 6 report accuracy of 87 while report accuracy of 84the word sense disambiguation ensembles in this paper have the following characteristics each point is discussed belowthe naive bayesian classifier has emerged as a consistently strong performer in a wide range of comparative studies of machine learning methodologiesa recent survey of such results as well as possible explanations for its success is presented in a similar finding has emerged in word sense disambiguation where a number of comparative studies have all reported that no method achieves significantly greater accuracy than the naive bayesian classifier in many ensemble approaches the member classifiers are learned with different algorithms that are trained with the same datafor example an ensemble could consist of a decision tree a neural network and a nearest neighbor classifier all of which are learned from exactly the same set of training datathis paper takes a different approach where the learning algorithm is the same for all classifiers but the training data is differentthis is motivated by the belief that there is more to be gained by varying the representation of context than there is from using many different learning algorithms on the same datathis is especially true in this domain since the naive bayesian classifier has a history of success and since there is no generally agreed upon set of features that have been shown to be optimal for word sense disambiguationshallow lexical features such as cooccurrences and collocations are recognized as potent sources of disambiguation informationwhile many other contextual features are often employed it is not clear that they offer substantial advantagesfor example report that local collocations alone achieve 80 accuracy disambiguating interest 
while their full set of features result in 87preliminary experiments for this paper used feature sets that included collocates cooccurrences partof speech and grammatical information for surrounding wordshowever it was clear that no combination of features resulted in disambiguation accuracy significantly higher than that achieved with cooccurrence featuresthe most accurate classifier from each of nine possible category ranges is selected as a member of the ensemblethis is based on preliminary experiments that showed that member classifiers with similar sized windows of context often result in little or no overall improvement in disambiguation accuracythis was expected since slight differences in window sizes lead to roughly equivalent representations of context and classifiers that have little opportunity for collective improvementfor example an ensemble was created for interest using the nine classifiers in the range category the accuracy of this ensemble was 84 slightly less than the most accurate individual classifiers in that range which achieved accuracy of 86early experiments also revealed that an ensemble based on a majority vote of all 81 classifiers performed rather poorlythe accuracy for interest was approximately 81 and line was disambiguated with slightly less than 80 accuracythe lesson taken from these results was that an ensemble should consist of classifiers that represent as differently sized windows of context as possible this reduces the impact of redundant errors made by classifiers that represent very similarly sized windows of contextthe ultimate success of an ensemble depends on the ability to select classifiers that make complementary errorsthis is discussed in the context of combining partofspeech taggers in they provide a measure for assessing the complementarity of errors between two taggers that could be adapted for use with larger ensembles such as the one discussed here which has nine membersin this paper ensemble disambiguation is based on a simple majority vote of the nine member classifiersan alternative strategy is to weight each vote by the estimated joint probability found by the naive bayesian classifierhowever a preliminary study found that the accuracy of a naive bayesian ensemble using a weighted vote was poorfor interest it resulted in accuracy of 83 while for line it was 82the simple majority vote resulted in accuracy of 89 for interest and 88 for linea number of issues have arisen in the course of this work that merit further investigationthe simplicity of the contextual representation can lead to large numbers of parameters in the naive bayesian model when using wide windows of contextsome combination of stoplists and stemming could reduce the numbers of parameters and thus improve the overall quality of the parameter estimates made from the training datain addition to simple cooccurrence features the use of collocation features seems promisingthese are distinct from cooccurrences in that they are words that occur in close proximity to the ambiguous word and do so to a degree that is judged statistically significantone limitation of the majority vote in this paper is that there is no mechanism for dealing with outcomes where no sense gets a majority of the votesthis did not arise in this study but will certainly occur as naive bayesian ensembles are applied to larger sets of datafinally further experimentation with the size of the windows of context seems warrantedthe current formulation is based on a combination of intuition and empirical 
studyan algorithm to determine optimal windows sizes is currently under developmentthis paper shows that word sense disambiguation accuracy can be improved by combining a number of simple classifiers into an ensemblea methodology for formulating an ensemble of naive bayesian classifiers is presented where each member classifier is based on cooccurrence features extracted from a different sized window of contextthis approach was evaluated using the widely studied nouns line and interest which are disambiguated with accuracy of 88 and 89 which rivals the best previously published resultsthis work extends ideas that began in collaboration with rebecca bruce and janyce wiebeclaudia leacock and raymond mooney provided valuable assistance with the line datai am indebted to an anonymous reviewer who pointed out the importance of separate test and devtest data setsa preliminary version of this paper appears in
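A compact sketch of the ensemble procedure described above: train one naive Bayesian classifier for each of the 81 left/right window combinations, keep the most accurate classifier from each of the nine range categories on the devtest split, and disambiguate by a simple majority vote. The use of scikit-learn's BernoulliNB and the shape of the make_features callback are my own conveniences for illustration, not the original implementation.

```python
from collections import Counter
from itertools import product

from sklearn.naive_bayes import BernoulliNB

WINDOW_SIZES = [0, 1, 2, 3, 4, 5, 10, 25, 50]
RANGE = {0: "narrow", 1: "narrow", 2: "narrow",
         3: "medium", 4: "medium", 5: "medium",
         10: "wide", 25: "wide", 50: "wide"}

def build_ensemble(make_features, y, train_idx, dev_idx):
    """make_features(l, r) -> binary feature matrix (numpy array) for all
    instances; y, train_idx and dev_idx are numpy arrays."""
    best = {}                                            # (left range, right range) -> member
    for l, r in product(WINDOW_SIZES, repeat=2):         # 81 classifiers
        X = make_features(l, r)
        clf = BernoulliNB().fit(X[train_idx], y[train_idx])
        acc = (clf.predict(X[dev_idx]) == y[dev_idx]).mean()
        key = (RANGE[l], RANGE[r])
        if key not in best or acc > best[key][0]:
            best[key] = (acc, clf, l, r)
    return list(best.values())                           # nine members

def ensemble_predict(members, make_features, test_idx):
    votes = [clf.predict(make_features(l, r)[test_idx]) for _, clf, l, r in members]
    # each test instance receives the sense that gets a majority of the nine votes
    return [Counter(col).most_common(1)[0][0] for col in zip(*votes)]
```

The paper additionally breaks ties toward the smaller total window when selecting range-category members, and reports that a probability-weighted vote performed worse than the simple majority vote used here.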
A00-2009
A maximum-entropy-inspired parser. We present a new parser for parsing down to Penn treebank style parse trees that achieves 90.1% average precision/recall for sentences of length 40 and less, and 89.5% for sentences of length 100 and less, when trained and tested on the previously established [5, 9, 10, 15, 17] "standard" sections of the Wall Street Journal treebank. This represents a 13% decrease in error rate over the best single-parser results on this corpus [9]. The major technical innovation is the use of a "maximum-entropy-inspired" model for conditioning and smoothing that let us successfully test and combine many different conditioning events. We also present some partial results showing the effects of different conditioning information, including a surprising 2% improvement due to guessing the lexical head's preterminal before guessing the lexical head. As an alternative to hard-coded heuristics, we propose to recover the Penn functional tags automatically. Our parser is also a two-stage CTF model, where the first stage is a smoothed Markov grammar and the second stage is a lexicalized Markov grammar with extra annotations about parents and grandparents.
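A toy sketch of the generative decomposition this summary alludes to: for each constituent the model first guesses the head preterminal, then the lexical head given that preterminal, and then the expansion, and the probability of a parse is the product of these three conditional probabilities over all constituents. The callable "models" below are placeholders for the smoothed distributions developed in the body of the paper.

```python
import math

def parse_log_prob(constituents, p_preterm, p_head, p_expansion):
    """constituents: iterable of (label, history, preterminal, head, expansion)
    tuples; each p_* is a callable returning a conditional probability and is
    assumed to be smoothed, i.e. never exactly zero."""
    logp = 0.0
    for label, hist, t, h, e in constituents:
        logp += math.log(p_preterm(t, label, hist))           # guess t first ...
        logp += math.log(p_head(h, t, label, hist))           # ... then h given t ...
        logp += math.log(p_expansion(e, label, t, h, hist))   # ... then the expansion
    return logp
```

Guessing the preterminal first is what allows the head to be conditioned on it, and the expansion probabilities to back off to the preterminal when the head word is rare or unseen, which is the source of the roughly 2% gain discussed later in the paper.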
a maximumentropyinspired parser we present a new parser for parsing down to penn treebank style parse trees that achieves 901 average precisionrecall for sentences of 40 and less and for of length 100 and less when trained and tested on the previously established 59101517 quotstandardquot sections of the wall street journal treebank this represents a 13 decrease in error rate over the best singleparser results on this corpus 9 the major technical innovation is the use of a quotmaximumentropyinspiredquot model for conditioning and smoothing that let us successfully to test and combine many different conditioning events we also present some partial results showing the effects of different conditioning information including a surprising 2 improvement due to guessing the lexical head preterminal before guessing the lexical head we present a new parser for parsing down to penn treebank style parse trees 16 that achieves 901 average precisionrecall for sentences of length 40 and 895 for sentences of length 100 when trained and tested on the previously established 59101517 quotstandardquot sections of the wall street journal treebankthis represents a 13 decrease in error rate over the best singleparser results on this corpus 9following 510 our parser is based upon a probabilistic generative modelthat is for all sentences s and all parses 7r the parser assigns a probability p p the equality holding when we restrict consideration to 7r whose yield this research was supported in part by nsf grant lis sbr 9720368the author would like to thank mark johnson and all the rest of the brown laboratory for linguistic information processing is s then for any s the parser returns the parse ir that maximizes this probabilitythat is the parser implements the function arg maxrp arg maxirp arg maxrpwhat fundamentally distinguishes probabilistic generative parsers is how they compute p and it is to that topic we turn nextthe model assigns a probability to a parse by a topdown process of considering each constituent c in ir and for each c first guessing the preterminal of c t then the lexical head of c h and then the expansion of c into further constituents ethus the probability of a parse is given by the equation where 1 is the label of c verbphrase etc and h is the relevant history of c information outside c that our probability model deems important in determining the probability in questionmuch of the interesting work is determining what goes into h whenever it is clear to which constituent we are referring we omit the in eg hin this notation the above equation takes the following form next we describe how we assign a probability to the expansion e of a constituentin section 5 we present some results in which the possible expansions of a constituent are fixed in advanced by extracting a treebank grammar 3 from the training corpusthe method that gives the best results however uses a markov grammar a method for assigning probabilities to any possible expansion using statistics gathered from the training corpus 61015the method we use follows that of 10in this scheme a traditional probabilistic contextfree grammar rule can be thought of as consisting of a lefthand side with a label 1 drawn from the nonterminal symbols of our grammar and a righthand side that is a sequence of one or more such symbolsfor us the nonterminal symbols are those of the treebank augmented by the symbols aux and auxg which have been assigned deterministically to certain auxiliary verbs such as quothavequot or quothavingquotfor each 
expansion we distinguish one of the righthand side labels as the quotmiddlequot or quotheadquot symbol mm is the constituent from which the head lexical item h is obtained according to deterministic rules that pick the head of a constituent from among the heads of its childrento the left of m is a sequence of one or more left labels li including the special termination symbol a which indicates that there are no more symbols to the left and similarly for the labels to the right rithus an expansion e looks like the expansion is generated by guessing first m then in order l1 through l1 and similarly for ri through in a pure markov pcfg we are given the lefthand side label 1 and then probabilistically generate the righthand side conditioning on no information other than 1 and previously generated pieces of the righthand side itselfin the simplest of such models a zeroorder markov grammar each label on the righthand side is generated conditioned only on that is according to the distributions p p and pmore generally one can condition on the m previously generated labels thereby obtaining an mthorder markov grammarso for example in a secondorder markov pcfg l2 would be conditioned on l1 and m in our complete model of course the probability of each label in the expansions is also conditioned on other material as specified in equation 1 eg pthus we would use pnote that the as on both ends of the expansion in expression 2 are conditioned just like any other label in the expansionthe major problem confronting the author of a generative parser is what information to use to condition the probabilities required in the model and how to smooth the empirically obtained probabilities to take the sting out of the sparse data problems that are inevitable with even the most modest conditioningfor example in a secondorder markov grammar we conditioned the l2 label according to the distribution palso remember that h is a placeholder for any other information beyond the constituent c that may be useful in assigning c a probabilityin the past few years the maximum entropy or loglinear approach has recommended itself to probabilistic model builders for its flexibility and its novel approach to smoothing 117a complete review of loglinear models is beyond the scope of this paperrather we concentrate on the aspects of these models that most directly influenced the model presented hereto compute a probability in a loglinear model one first defines a set of quotfeaturesquot functions from the space of configurations over which one is trying to compute probabilities to integers that denote the number of times some pattern occurs in the inputin our work we assume that any feature can occur at most once so features are booleanvalued 0 if the pattern does not occur 1 if it doesin the parser we further assume that features are chosen from certain feature schemata and that every feature is a boolean conjunction of subfeaturesfor example in computing the probability of the head preterminal t we might want a feature schema f that returns 1 if the observed preterminal of c t and the label of c 1 and zero otherwisethis feature is obviously composed of two subfeatures one recognizing t the other 1if both return 1 then the feature returns 1now consider computing a conditional probability p with a set of features h that connect a to the history h in a loglinear model the probability function takes the following form here the ai are weights between negative and positive infinity that indicate the relative importance of a feature the 
more relevant the feature to the value of the probability the higher the absolute value of the associated athe function z called the partition function is a normalizing constant so the probabilities over all a sum to onenow for our purposes it is useful to rewrite this as a sequence of multiplicative functions gi for 0 i j here go 11z and gi eai fithe intuitive idea is that each factor gi is larger than one if the feature in question makes the probability more likely one if the feature has no effect and smaller than one if it makes the probability less likelymaximumentropy models have two benefits for a parser builderfirst as already implicit in our discussion factoring the probability computation into a sequence of values corresponding to various quotfeaturesquot suggests that the probability model should be easily changeable just change the set of features usedthis point is emphasized by ratnaparkhi in discussing his parser 17second and this is a point we have not yet mentioned the features used in these models need have no particular independence of one anotherthis is useful if one is using a loglinear model for smoothingthat is suppose we want to compute a conditional probability p but we are not sure that we have enough examples of the conditioning event b c in the training corpus to ensure that the empirically obtained probability p is accuratethe traditional way to handle this is also to compute p and perhaps p as well and take some combination of these values as one best estimate for pthis method is known as quotdeleted interpolationquot smoothingin maxentropy models one can simply include features for all three events f1 f2 and f3 and combine them in the model according to equation 3 or equivalently equation 4the fact that the features are very far from independent is not a concernnow let us note that we can get an equation of exactly the same form as equation 4 in the following fashion note that the first term of the equation gives a probability based upon little conditioning information and that each subsequent term is a number from zero to positive infinity that is greater or smaller than one if the new information being considered makes the probability greater or smaller than the previous estimateas it stands this last equation is pretty much contentfreebut let us look at how it works for a particular case in our parsing schemeconsider the probability distribution for choosing the preterminal for the head of a constituentin equation 1 we wrote this as pas we discuss in more detail in section 5 several different features in the context surrounding c are useful to include in h the label head preterminal and head of the parent of c the label of c left sibling and the label of the grandparent of c that is we wish to compute pwe can now rewrite this in the form of equation 5 as follows here we have sequentially conditioned on steadily increasing portions of c historyin many cases this is clearly warrantedfor example it does not seem to make much sense to condition on say hp without first conditioning on tiin other cases however we seem to be conditioning on apples and oranges so to speakfor example one can well imagine that one might want to condition on the parent lexical head without conditioning on the left sibling or the grandparent labelone way to do this is to modify the simple version shown in equation 6 to allow this note the changes to the last three terms in equation 7rather than conditioning each term on the previous ones they are now conditioned only on those aspects of 
the history that seem most relevantthe hope is that by doing this we will have less difficulty with the splitting of conditioning events and thus somewhat less difficulty with sparse datawe make one more point on the connection of equation 7 to a maximum entropy formulationsuppose we were in fact going to compute a true maximum entropy model based upon the features used in equation 7 ii f2 f3 this requires finding the appropriate ais for equation 3 which is accomplished using an algorithm such as iterative scaling ii in which values for the ai are initially quotguessedquot and then modified until they converge on stable valueswith no prior knowledge of values for the ai one traditionally starts with ai 0 this being a neutral assumption that the feature has neither a positive nor negative impact on the probability in questionwith some prior knowledge nonzero values can greatly speed up this process because fewer iterations are required for convergencewe comment on this because in our example we can substantially speed up the process by choosing values picked so that when the maximumentropy equation is expressed in the form of equation 4 the gi have as their initial values the values of the corresponding terms in equation 7now we observe that if we were to use a maximumentropy approach but run iterative scaling zero times we would in fact just have equation 7the major advantage of using equation 7 is that one can generally get away without computing the partition function zin the simple form it is clear that z 1in the more interesting version equation 7 this is not true in general but one would not expect it to differ much from one and we assume that as long as we are not publishing the raw probabilities the difference from one should be unimportantas partitionfunction calculation is typically the major online computational problem for maximumentropy models this simplifies the model significantlynaturally the distributions required by equation 7 cannot be used without smoothingin a pure maximumentropy model this is done by feature selection as in ratnaparkhi maximumentropy parser 17while we could have smoothed in the same fashion we choose instead to use standard deleted interpolationwe created a parser based upon the maximumentropyinspired model of the last section smoothed using standard deleted interpolationas the generative model is topdown and we use a standard bottomup bestfirst probabilistic chart parser 27 we use the chart parser as a first pass to generate candidate possible parses to be evaluated in the second pass by our probabilistic modelfor runs with the generative model based upon markov grammar statistics the first pass uses the same statistics but conditioned only on standard pcfg informationthis allows the second pass to see expansions not present in the training corpuswe use the gathered statistics for all observed words even those with very low counts though obviously our deleted interpolation smoothing gives less emphasis to observed probabilities for rare wordswe guess the preterminals of words that are not observed in the training data using statistics on capitalization hyphenation word endings and the probability that a given preterminal is realized using a previously unobserved wordas noted above the probability model uses five smoothed probability distributions one each for li mrit and h the equation for the conditional probability distribution for t is given in equation 7the other four equations can be found in a longer version of this paper available on the author 
website l and r are conditioned on three previous labels so we are using a thirdorder markov grammaralso the label of the parent constituent lp is conditioned upon even when it is not obviously related to the further conditioning eventsthis is due to the importance of this factor in parsing as noted in eg 14in keeping with the standard methodology 5 9101517 we used the penn wall street journal treebank 16 with sections 221 for training section 23 for testing and section 24 for development performance on the test corpus is measured using the standard measures from 591017in particular we measure labeled precision and recall average number of crossbrackets per sentence percentage of sentences with zero cross brackets and percentage of sentences with 2 cross brackets again as standard we take separate measurements for all sentences of length 40 and all sentences of length 100note that the definitions of labeled precision and recall are those given in 9 and used in all of the previous workas noted in 5 these definitions typically give results about 04 higher than the more obvious onesthe results for the new parser as well as for the previous topthree individual parsers on this corpus are given in figure 1as is typical all of the standard measures tell pretty much the same story with the new parser outperforming the other three parserslooking in particular at the precision and recall figures the new parser give us a 13 error reduction over the best of the previous work co1199 9in the previous sections we have concentrated on the relation of the parser to a maximumentropy approach the aspect of the parser that is most novelhowever we do not think this aspect is the sole or even the most important reason for its comparative successhere we list what we believe to be the most significant contributions and give some experimental results on how well the program behaves without themwe take as our starting point the parser labled char97 in figure 1 5 as that is the program from which our current parser derivesthat parser as stated in figure 1 achieves an average precisionrecall of 875as noted in 5 that system is based upon a quottreebank grammarquot a grammar read directly off the training corpusthis is as opposed to the quotmarkovgrammarquot approach used in the current parseralso the earlier parser uses two techniques not employed in the current parserfirst it uses a clustering scheme on words to give the system a quotsoftquot clustering of heads and subheadssecond char97 uses unsupervised learning in that the original system was run on about thirty million words of unparsed text the output was taken as quotcorrectquot and statistics were collected on the resulting parseswithout these enhancements char97 performs at the 866 level for sentences of length 40in this section we evaluate the effects of the various changes we have made by running various versions of our current programto avoid repeated evaluations based upon the testing corpus here our evaluation is based upon sentences of length 40 from the development corpuswe note here that this corpus is somewhat more difficult than the quotofficialquot test corpusfor example the final version of our system achieves an average precisionrecall of 901 on the test corpus but an average precisionrecall of only 897 on the development corpusthis is indicated in figure 2 where the model labeled quotbestquot has precision of 898 and recall of 896 for an average of 897 04 lower than the results on the official test corpusthis is in accord with our experience 
that developmentcorpus results are from 03 to 05 lower than those obtained on the test corpusthe model labeled quotoldquot attempts to recreate the char97 system using the current programit makes no use of special maximumentropyinspired features it does not guess the preterminal before guessing the lexical head and it uses a treebank grammar rather than a markov grammarthis parser achieves an average precisionrecall of 862this is consistent with the average precisionrecall of 866 for 5 mentioned above as the latter was on the test corpus and the former on the development corpusbetween the old model and the best model figure 2 gives precisionrecall measurements for several different versions of our parserone of the first and without doubt the most significant change we made in the current parser is to move from two stages of probabilistic decisions at each node to threeas already noted char97 first guesses the lexical head of a constituent and then given the head guesses the pcfg rule used to expand the constituent in questionin contrast the current parser first guesses the head preterminal then the head and then the expansionit turns out that usefulness of this process had already been discovered by collins 10 who in turn notes that it was previously used by eisner 12however collins in 10 does not stress the decision to guess the head preterminal first and it might be lost on the casual readerindeed it was lost on the present author until he went back after the fact and found it therein figure 2 we show that this one factor improves performance by nearly 2it may not be obvious why this should make so great a difference since most words are effectively unambiguouswe believe that two factors contribute to this performance gainthe first is simply that if we first guess the preterminal when we go to guess the head the first thing we can condition upon is the preterminal ie we compute pthis quantity is a relatively intuitive one and it seems particularly good to condition upon here since we use it in effect as the unsmoothed probability upon which all smoothing of p is basedthis one quotfixquot makes slightly over a percent difference in the resultsthe second major reason why first guessing the preterminal makes so much difference is that it can be used when backing off the lexical head in computing the probability of the rule expansionfor example when we first guess the lexical head we can move from computing p to pso eg even if the word quotconflatingquot does not appear in the training corpus the quotngquot ending allows our program to guess with relative security that the word has the vbg preterminal and thus the probability of various rule expansions can be considerable sharpenedfor example the treebank pcfg probability of the rule quotvp vbg npquot is 00145 whereas once we condition on the fact that the lexical head is a vbg we get a probability of 0214the second modification is the explicit marking of noun and verbphrase coordinationwe have already noted the importance of conditioning on the parent label pso for example information about an np is conditioned on the parent eg an s vp pp etcnote that when an np is part of an np coordinate structure the parent will itself be an np and similarly for a vpbut nps and vps can occur with np and vp parents in noncoordinate structures as wellfor example in the penn treebank a vp with both main and auxiliary verbs has the structure shown in figure 3note that the subordinate vp has a vp parentthus np and vp parents of constituents are marked to 
indicate if the parents are a coordinate structurea vp coordinate structure is defined here as a constituent with two or more vp children one or more of the constituents comma cc conjp and nothing else coordinate np phrases are defined similarlysomething very much like this is done in 15as shown in figure 2 conditioning on this information gives a 06 improvementwe believe that this is mostly due to improvements in guessing the subconstituent preterminal and headgiven we are already at the 88 level of accuracy we judge a 06 improvement to be very much worth whilenext we add the less obvious conditioning events noted in our previous discussion of the final model grandparent label lg and left sibling label bwhen we do so using our maximumentropyinspired conditioning we get another 045 improvement in average precisionrecall as indicated in figure 2 on the line labeled quotmaxentinspirednote that we also tried including this information using a standard deletedinterpolation modelthe results here are shown in the line quotstandard interpolationquotincluding this information within a standard deletedinterpolation model causes a 06 decrease from the results using the less conventional modelindeed the resulting performance is worse than not using this information at allup to this point all the models considered in this section are treebank grammar modelsthat is the pcfg grammar rules are read directly off the training corpusas already noted our best model uses a markovgrammar approachas one can see in figure 2 a firstorder markov grammar performs slightly worse than the equivalent treebankgrammar parserhowever a secondorder grammar does slightly better and a thirdorder grammar does significantly better than the treebank parserwe have presented a lexicalized markov grammar parsing model that achieves an average precisionrecall of 911 on sentences of length 40 and 895 on sentences of length 100this corresponds to an error reduction of 13 over the best previously published single parser results on this test set those of collins 9that the previous three best parsers on this test 5917 all perform within a percentage point of each other despite quite different basic mechanisms led some researchers to wonder if there might be some maximum level of parsing performance that could be obtained using the treebank for training and to conjecture that perhaps we were at itthe results reported here disprove this conjecturethe results of 13 achieved by combining the aforementioned threebest parsers also suggest that the limit on treebank trained parsers is much higher than previously thoughtindeed it may be that adding this new parser to the mix may yield still higher resultsfrom our perspective perhaps the two most important numbers to come out of this research are the overall error reduction of 13 over the results in 9 and the intermediateresult improvement of nearly 2 on labeled precisionrecall due to the simple idea of guessing the head preterminal before guessing the headneither of these results were anticipated at the start of this researchas noted above the main methodological innovation presented here is our quotmaximumentropyinspiredquot model for conditioning and smoothingtwo aspects of this model deserve some commentthe first is the slight but important improvement achieved by using this model over conventional deleted interpolation as indicated in figure 2we expect that as we experiment with other more semantic conditioning information the importance of this aspect of the model will increasemore 
important in our eyes though is the flexibility of the maximumentropyinspired modelthough in some respects not quite as flexible as true maximum entropy it is much simpler and in our estimation has benefits when it comes to smoothingultimately it is this flexibility that let us try the various conditioning events to move on to a markov grammar approach and to try several markov grammars of different orders without significant programmingindeed we initiated this line of work in an attempt to create a parser that would be flexible enough to allow modifications for parsing down to more semantic levels of detailit is to this project that our future parsing work will be devoted
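To make the "maximum-entropy-inspired" combination of equation 7 more concrete, here is a rough Python sketch: a weakly conditioned base estimate is multiplied by correction factors, each the ratio of a more-conditioned to a less-conditioned estimate, and every estimate is itself smoothed by deleted interpolation. The count-table layout, the fixed interpolation weight, and the particular conditioning order in the usage comment are illustrative assumptions, not the parser's actual data structures or tuned weights.

```python
def smoothed(counts, context, outcome, backoff, lam=0.7):
    """Deleted-interpolation-style estimate of p(outcome | context): the
    relative frequency under `context` mixed with a coarser backoff estimate."""
    ctx = counts.get(context, {})
    total = sum(ctx.values())
    rel = ctx.get(outcome, 0) / total if total else 0.0
    return lam * rel + (1 - lam) * backoff

def combine(base, factors):
    """Equation-7-style combination: each factor is (more_conditioned,
    less_conditioned); a factor above 1 raises the estimate, below 1 lowers
    it, and exactly 1 leaves it untouched."""
    p = base
    for num, den in factors:
        if den > 0:
            p *= num / den
    return p

# Illustrative use for p(t | l, lp, tp): condition first on the constituent
# label, then fold in the parent label, then the parent's head preterminal.
# p0 = smoothed(c_l,       (l,),        t, backoff=p_t)
# p1 = smoothed(c_l_lp,    (l, lp),     t, backoff=p0)
# p2 = smoothed(c_l_lp_tp, (l, lp, tp), t, backoff=p1)
# p  = combine(p0, [(p1, p0), (p2, p1)])
```

With fully nested conditioning the ratios simply telescope; as the paper explains, the interesting behaviour comes from factors whose numerators condition on only the most relevant aspects of the history (for example the grandparent label without the left sibling), which keeps the conditioning events from being split too finely.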
A00-2018
An unsupervised method for detecting grammatical errors. We present an unsupervised method for detecting grammatical errors by inferring negative evidence from edited textual corpora. The system was developed and tested using essay-length responses to prompts on the Test of English as a Foreign Language (TOEFL). The error-recognition system, ALEK, performs with about 80% precision and 20% recall. We attempt to identify errors on the basis of context, more specifically a two-word window around the word of interest, from which we consider function words and POS tags. We use a mutual information measure in addition to raw frequency of n-grams. The grammar feature covers errors such as sentence fragments, verb form errors, and pronoun errors. We utilize mutual information and chi-square statistics to identify typical contexts for a small set of targeted words from a large well-formed corpus.
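A small sketch of the bigram screening idea summarised here and developed in the following section: estimate pointwise mutual information for adjacent part-of-speech/function-word pairs from a large general corpus, and treat combinations that occur far less often than independence would predict (strongly negative MI) as candidate error cues. The threshold value and helper names are illustrative assumptions.

```python
import math
from collections import Counter

def bigram_mi(tag_sequences):
    """Pointwise MI = log2( P(AB) / (P(A) * P(B)) ) for adjacent tag pairs,
    with probabilities estimated from a large, well-formed general corpus."""
    unigrams, bigrams = Counter(), Counter()
    for seq in tag_sequences:
        unigrams.update(seq)
        bigrams.update(zip(seq, seq[1:]))
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
    return {(a, b): math.log2((f / n_bi) /
                              ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))
            for (a, b), f in bigrams.items()}

def suspicious_bigrams(mi, threshold=-3.0):
    """Unusually rare combinations (large negative MI) are the candidates for
    ungrammaticality; the cutoff here is made up for illustration.  Bigrams
    never observed at all receive no score and are of course also suspect."""
    return {pair for pair, score in mi.items() if score < threshold}
```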
an unsupervised method for detecting grammatical errors we present an unsupervised method for detecting grammatical errors by inferring negative evidence from edited textual corpora the system was developed and tested using essaylength responses to prompts on the test of english as a foreign language the errorrecognition system alek performs with about 80 precision and 20 recall a good indicator of whether a person knows the meaning of a word is the ability to use it appropriately in a sentence much information about usage can be obtained from quite a limited context choueka and lusignan found that people can typically recognize the intended sense of a polysemous word by looking at a narrow window of one or two words around itstatisticallybased computer programs have been able to do the same with a high level of accuracy the goal of our work is to automatically identify inappropriate usage of specific vocabulary words in essays by looking at the local contextual cues around a target wordwe have developed a statistical system alek that uses statistical analysis for this purposea major objective of this research is to avoid the laborious and costly process of collecting errors for each word that we wish to evaluateinstead we train alek on a general corpus of english and on edited text containing example uses of the target wordthe system identifies inappropriate usage based on differences between the word local context cues in an essay and the models of context it has derived from the corpora of wellformed sentencesa requirement for alek has been that all steps in the process be automated beyond choosing the words to be tested and assessing the resultsonce a target word is chosen preprocessing building a model of the word appropriate usage and identifying usage errors in essays is performed without manual interventionalek has been developed using the test of english as a foreign language administered by the educational testing servicetoefl is taken by foreign students who are applying to us undergraduate and graduatelevel programsapproaches to detecting errors by nonnative writers typically produce grammars that look for specific expected error types under this approach essays written by esl students are collected and examined for errorsparsers are then adapted to identify those error types that were found in the essay collectionwe take a different approach initially viewing error detection as an extension of the word sense disambiguation problemcorpusbased wsd systems identify the intended sense of a polysemous word by collecting a set of example sentences for each of its various senses and extracting salient contextual cues from these sets to build a statistical model for each sensethey identify the intended sense of a word in a novel sentence by extracting its contextual cues and selecting the most similar word sense model yarowsky golding showed how methods used for wsd could be adapted to detect errors resulting from common spelling confusions among sets such as there their and they arehe extracted contexts from correct usage of each confusable word in a training corpus and then identified a new occurrence as an error when it matched the wrong contexthowever most grammatical errors are not the result of simple word confusionsthis complicates the task of building a model of incorrect usageone approach we considered was to proceed without such a model represent appropriate word usage in a single model and compare a novel example to that modelthe most appealing part of this formulation was 
that we could bypass the knowledge acquisition bottleneckall occurrences of the word in a collection of edited text could be automatically assigned to a single training set representing appropriate usageinappropriate usage would be signaled by contextual cues that do not occur in trainingunfortunately this approach was not effective for error detectionan example of a word usage error is often very similar to the model of appropriate usagean incorrect usage can contain two or three salient contextual elements as well as a single anomalous elementthe problem of error detection does not entail finding similarities to appropriate usage rather it requires identifying one element among the contextual cues that simply does not fitwhat kinds of anomalous elements does alek identifywriters sometimes produce errors that violate basic principles of english syntax while other mistakes show a lack of information about a specific vocabulary item in order to detect these two types of problems alek uses a 30million word general corpus of english from the san jose mercury news and for each target word a set of 10000 example sentences from north american newspaper text the corpora are extracted from the acldci corporain selecting the sentences for the word alek infers negative evidence from the contextual cues that do not cooccur with the target word either in the word specific corpus or in the general english oneit uses two kinds of contextual cues in a 2 word window around the target word function words and partofspeech tags the brill tagger output is postprocessed to quotenrichquot some closed class categories of its tag set such as subject versus object pronoun and definite versus indefinite determinerthe enriched tags were adapted from francis and kaera after the sentences have been preprocessed alek counts sequences of adjacent partofspeech tags and function words for example the sequence aat fulltimei1 jobinn contributes one occurrence each to the bigrams atjj jjnn ajj and to the partofspeech tag trigram atjjnneach individual tag and function word also contributes to its own unigram countthese frequencies form the basis for the error detection measuresfrom the general corpus alek computes a mutual information measure to determine which sequences of partofspeech tags and function words are unusually rare and are therefore likely to be ungrammatical in english mutual information has often been used to detect combinations of words that occur more frequently than we would expect based on the assumption that the words are independenthere we use this measure for the opposite purpose to find combinations that occur less often than expectedalek also looks for sequences that are common in general but unusual in the word specific corpus these divergences between the two corpora reflect syntactic properties that are peculiar to the target wordthe system computes mutual information comparing the proportion of observed occurrences of bigrams in the general corpus to the proportion expected based on the assumption of independence as shown below here p is the probability of the occurrence of the ab bigram estimated from its frequency in the general corpus and p and p are the probabilities of the first and second elements of the bigram also estimated from the general corpusungrammatical sequences should produce bigram probabilities that are much smaller than the product of the unigram probabilities trigram sequences are also used but in this case the mutual information computation compares the cooccurrence of abc to 
a model in which a and c are assumed to be conditionally independent given b once again a negative value is often indicative of a sequence that violates a rule of englishalek also uses mutual information to compare the distributions of tags and function words in the wordspecific corpus to the distributions that are expected based on the general corpusthe measures for bigrams and trigrams are similar to those given above except that the probability in the numerator is estimated from the wordspecific corpus and the probabilities in the denominator come from the general corpusto return to a previous example the phrase a knowledge contains the tag bigram for singular determiner followed by singular noun this sequence is much less common in the wordspecific corpus for knowledge than would be expected from the general corpus unigram probabilities of at and nnin addition to bigram and trigram measures alek compares the target word partofspeech tag in the wordspecific corpus and in the general corpusspecifically it looks at the conditional probability of the partofspeech tag given the major syntactic category in both distributions by computing the following valuefor example in the general corpus about half of all noun tokens are plural but in the training set for the noun knowledge the plural knowledges occurs rarely if at allthe mutual information measures provide candidate errors but this approach overgenerates it finds rare but still quite grammatical sequencesto reduce the number of false positives no candidate found by the mi measures is considered an error if it appears in the wordspecific corpus at least two timesthis increases alek precision at the price of reduced recallfor example a knowledge will not be treated as an error because it appears in the training corpus as part of the longer a knowledge of sequence alek also uses another statistical technique for finding rare and possibly ungrammatical tag and function word bigrams by computing the x2 statistic for the difference between the bigram proportions found in the wordspecific and in the general corpus the x2 measure faces the same problem of overgenerating errorsdue to the large sample sizes extreme values can be obtained even though effect size may be minusculeto reduce false positives alek requires that effect sizes be at least in the moderatetosmall range direct evidence from the word specific corpus can also be used to control the overgeneration of errorsfor each candidate error alek compares the larger context in which the bigram appears to the contexts that have been analyzed in the wordspecific corpusfrom the wordspecific corpus alek forms templates sequences of words and tags that represent the local context of the targetif a test sentence contains a low probability bigram the local context of the target is compared to all the templates of which it is a partexceptions to the error that is longer grammatical sequences that contain rare subsequences are found by examining conditional probabilitiesto illustrate this consider the example of a knowledge and a knowledge of the conditional probability of of given a knowledge is high as it accounts for almost all of the occurrences of a knowledge in the wordspecific corpusbased on this high conditional probability the system will use the template for a knowledge of to keep it from being marked as an errorother function words and tags in the 1 position have much lower conditional probability so for example a knowledge is will not be treated as an exception to the errortoefl essays 
are graded on a 6 point scale where 6 demonstrates quotclear competencequot in writing on rhetorical and syntactic levels and 1 demonstrates quotincompetence in writingquotif low probability ngrams signal grammatical errors then we would expect toefl essays that received lower scores to have more of these ngramsto test this prediction we randomly selected from the toefl pool 50 essays for each of the 6 score values from 10 to 60for each score value all 50 essays were concatenated to form a superessayin every superessay for each adjacent pair and triple of tags containing a noun verb or adjective the bigram and trigram mutual information values were computed based on the general corpustable 1 shows the proportions of bigrams and trigrams with mutual information less than 360as predicted there is a significant negative correlation between the score and the proportion of low probability bigrams and trigrams alek was developed using three target words that were extracted from toefl essays concentrate interest and knowledgethese words were chosen because they represent different parts of speech and varying degrees of polysemyeach also occurred in at least 150 sentences in what was then a small pool of toefl essaysbefore development began each occurrence of these words was manually labeled as an appropriate or inappropriate usage without taking into account grammatical errors that might have been present elsewhere in the sentence but which were not within the target word scopecritical values for the statistical measures were set during this development phasethe settings were based empirically on alek performance so as to optimize precision and recall on the three development wordscandidate errors were those local context sequences that produced a mutual information value of less than 360 based on the general corpus mutual information of less than 500 for the specificgeneral comparisons or a x2 value greater than 1282 with an effect size greater than 030precision and recall for the three words are shown belowalek was tested on 20 wordsthese words were randomly selected from those which met two criteria they appear in a university word list as words that a student in a us university will be expected to encounter and there were at least 1000 sentences containing the word in the toefl essay poolto build the usage model for each target word 10000 sentences containing it were extracted from the north american news corpuspreprocessing included detecting sentence boundaries and partofspeech taggingas in the development system the model of general english was based on bigram and trigram frequencies of function words and partofspeech tags from 30million words of the san jose mercury newsfor each test word all of the test sentences were marked by alek as either containing an error or not containing an errorthe size of the test set for each word ranged from 1400 to 20000 with a mean of 8000 sentencesto evaluate the system for each test word we randomly extracted 125 sentences that alek classified as containing no error and 125 sentences which it labeled as containing an error these 250 sentences were presented to a linguist in a random order for blind evaluationthe linguist who had no part in alek development marked each usage of the target word as incorrect or correct and in the case of incorrect usage indicated how far from the target one would have to look in order to recognise that there was an errorfor example in the case of quotan periodquot the error occurs at a distance of one word from periodwhen 
the error is an omission as in quotlived in victorian periodquot the distance is where the missing word should have appearedin this case the missing determiner is 2 positions away from the targetwhen more than one error occurred the distance of the one closest to the target was markedtable 3 lists the precision and recall for the 20 test wordsthe column labelled quotrecallquot is the proportion of humanjudged errors in the 250sentence sample that were detected by alekquottotal recallquot is an estimate that extrapolates from the human judgements of the sample to the entire test setwe illustrate this with the results for pollutionthe human judge marked as incorrect usage 912 of the sample from alek eset and 184 of the sample from its csetto estimate overall incorrect usage we computed a weighted mean of these two rates where the weights reflected the proportion of sentences that were in the eset and csetthe eset contained 83 of the pollution sentences and the cset had the remaining 917with the human judgements as the gold standard the estimated overall rate of incorrect usage is 245alek estimated recall is the proportion of sentences in the eset times its precision divided by the overall estimated error rate 245 310the precision results vary from word to wordconclusion and pollution have precision in the low to middle 90 while individual precision is 57overall alek predictions are about 78 accuratethe recall is limited in part by the fact that the system only looks at syntactic information while many of the errors are semanticnicholls identifies four error types an unnecessary word a missing word a word or phrase that needs replacing a word used in the wrong form alek recognizes all of these types of errorsfor closed class words alek identified whether a word was missing the wrong word was used and when an extra word was usedopen class words have a fourth error category form including inappropriate compounding and verb agreementduring the development stage we found it useful to add additional error categoriessince teofl graders are not supposed to take punctuation into account punctuation errors were only marked when they caused the judge to quotgarden pathquot or initially misinterpret the sentencespelling was marked either when a function word was misspelled causing partofspeech tagging errors or when the writer intent was unclearthe distributions of categories for hits and misses shown in table 4 are not strikingly differenthowever the hits are primarily syntactic in nature while the misses are both semantic and syntactic alek is sensitive to openclass word confusions where the part of speech differs or where the target word is confused with another word in both cases the system recognizes that the target is in the wrong syntactic environmentmisses can also be syntactic when the target word is confused with another word but the syntactic environment fails to trigger an errorin addition alek does not recognize semantic errors when the error involves the misuse of an openclass word in combination with the target closed class words typically are either selected by or agree with a head wordso why are there so many misses especially with prepositionsthe problem is caused in part by polysemy when one sense of the word selects a preposition that another sense does notwhen concentrate is used spatially it selects the preposition in as quotthe stores were concentrated in the downtown areaquotwhen it denotes mental activity it selects the preposition on as in quotsusan concentrated on her 
studiesquotsince alek trains on all senses of concentrate it does not detect the error in quotsusan concentrated in her studiesquotanother because is that adjuncts especially temporal and locative adverbials distribute freely in the wordspecific corpora as in quotsusan concentrated in her roomquot this second problem is more tractable than the polysemy problem and would involve training the system to recognize certain types of adjunctsfalse positives when alek quotidentifiesquot an error where none exists fall into six major categoriesthe percentage of each false positive type in a random sample of 200 false positives is shown in table 5domain mismatch mismatch of the newspaperdomain wordspecific corpora and essaydomain test corpusone notable difference is that some toefl essay prompts call for the writer opinionconsequently toefl essays often contain first person references whereas newspaper articles are written in the third personwe need to supplement the wordspecific corpora with material that more closely resembles the test corpustagger incorrect analysis by the partofspeech taggerwhen the partofspeech tag is wrong alek often recognizes the resulting ngram as anomalousmany of these errors are caused by training on the brown corpus instead of a corpus of essayssyntactic analysis errors resulting from using partofspeech tags instead of supertags or a full parse which would give syntactic relations between constituentsfor example alek false alarms on arguments of ditransitive verbs such as offer and flags as an error quotyou benefitsquot in quotoffers you benefitsquotfree distribution elements that distribute freely such as adverbs and conjunctions as well as temporal and locative adverbial phrases tend to be identified as errors when they occur in some positionspunctuation most notably omission of periods and commassince these errors are not indicative of one ability to use the target word they were not considered as errors unless they caused the judge to misanalyze the sentenceinfrequent tagsan undesirable result of our quotenrichedquot tag set is that some tags eg the postdeterminer last occur too infrequently in the corpora to provide reliable statisticssolutions to some of these problems will clearly be more tractable than to otherscomparison of these results to those of other systems is difficult because there is no generally accepted test set or performance baselinegiven this limitation we compared alek performance to a widely used grammar checker the one incorporated in microsoft word97we created files of sentences used for the three development words concentrate interest and knowledge and manually corrected any errors outside the local context around the target before checking them with word97the performance for concentrate showed overall precision of 089 and recall of 007for interest precision was 085 with recall of 011in sentences containing knowledge precision was 099 and recall was 030word97 correctly detected the ungrammaticality of knowledges as well as a knowledge while it avoided flagging a knowledge ofin summary word97 precision in error detection is impressive but the lower recall values indicate that it is responding to fewer error types than does alekin particular word97 is not sensitive to inappropriate selection of prepositions for these three words of course word97 detects many kinds of errors that alek does notresearch has been reported on grammar checkers specifically designed for an esl populationthese have been developed by hand based on small training and test 
setsschneider and mccoy developed a system tailored to the error productions of american sign language signersthis system was tested on 79 sentences containing determiner and agreement errors and 101 grammatical sentenceswe calculate that their precision was 78 with 54 recallpark palmer and washburn adapted a categorial grammar to recognize quotclasses of errors that dominatequot in the nine essays they inspectedthis system was tested on eight essays but precision and recall figures are not reportedthe unsupervised techniques that we have presented for inferring negative evidence are effective in recognizing grammatical errors in written textpreliminary results indicate that alek error detection is predictive of toefl scoresif alek accurately detects usage errors then it should report more errors in essays with lower scores than in those with higher scoreswe have already seen in table 1 that there is a negative correlation between essay score and two of alek component measures the general corpus ngramshowever the data in table 1 were not based on specific vocabulary items and do not reflect overall system performance which includes the other measures as welltable 6 shows the proportion of test word occurrences that were classified by alek as containing errors within two positions of the target at each of 6 toefl score pointsas predicted the correlation is negative these data support the validity of the system as a detector of inappropriate usage even when only a limited number of words are targeted and only the immediate context of each target is examinedalek and by a human judge for comparison table 6 also gives the estimated proportions of inappropriate usage by score point based on the human judge classificationhere too there is a negative correlation rs 90 n 5 p 05 twotailedalthough the system recognizes a wide range of error types as table 6 shows it detects only about onefifth as many errors as a human judge doesto improve recall research needs to focus on the areas identified in section 32 and to improve precision efforts should be directed at reducing the false positives described in 33alek is being developed as a diagnostic tool for students who are learning english as a foreign languagehowever its techniques could be incorporated into a grammar checker for native speakerswe thank susanne wolff for evaluating the test sentences and robert kantor ken sheppard and 3 anonymous reviewers for their helpful suggestions
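As a concrete illustration of the detection machinery described above, the sketch below computes the general-corpus mutual information for an adjacent tag/function-word pair and applies two of the filters: the mutual-information cutoff (read here as -3.60) and the requirement that a candidate sequence not be attested at least twice in the word-specific corpus. This is a minimal reconstruction under stated assumptions, not ALEK's implementation; the function names, the add-one smoothing, and the toy counts are ours.

import math
from collections import Counter

def bigram_mi(bigram, bigram_counts, unigram_counts):
    """Pointwise mutual information of an adjacent tag/function-word pair,
    estimated from the general corpus. Strongly negative values mark pairs
    that occur far less often than expected under independence."""
    n_bi = sum(bigram_counts.values())
    n_uni = sum(unigram_counts.values())
    a, b = bigram
    p_ab = (bigram_counts[bigram] + 1) / (n_bi + 1)   # add-one smoothing (our choice)
    p_a = unigram_counts[a] / n_uni
    p_b = unigram_counts[b] / n_uni
    return math.log2(p_ab / (p_a * p_b))

def is_candidate_error(bigram, general_bi, general_uni, word_specific_bi,
                       mi_cutoff=-3.60):
    """Flag a bigram as a candidate error if it is unusually rare in general
    English and is not attested (at least twice) around the target word."""
    rare_in_general = bigram_mi(bigram, general_bi, general_uni) < mi_cutoff
    attested = word_specific_bi[bigram] >= 2
    return rare_in_general and not attested

# toy counts standing in for the 30-million-word general corpus
general_uni = Counter({"AT": 9000, "JJ": 7000, "NN": 12000, "NNS": 4000})
general_bi = Counter({("AT", "JJ"): 3000, ("AT", "NN"): 4000, ("AT", "NNS"): 3})
word_specific_bi = Counter({("AT", "NN"): 40})
print(is_candidate_error(("AT", "NNS"), general_bi, general_uni, word_specific_bi))

The trigram measure, the word-specific versus general-corpus comparison, and the chi-square test follow the same pattern, differing only in which corpus supplies the numerator and in the statistic and thresholds used.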
A00-2019
cut and paste based text summarizationwe present a cut and paste based text summarizer which uses operations derived from an analysis of human written abstractsthe summarizer edits extracted sentences using reduction to remove inessential phrases and combination to merge resulting phrases together as coherent sentencesour work includes a statistically based sentence decomposition program that identifies where the phrases of a summary originate in the original document producing an aligned corpus of summaries and articles which we used to develop the summarizerwe first extract sentences then remove redundant phrases and use recombination rules to produce coherent outputwe manually analyze 30 humanwritten summaries and find that 19 of sentences can not be explained by cutandpaste operations from the source text
cut and paste based text summarization we present a cut and paste based text summarizer which uses operations derived from an analysis of human written abstracts the summarizer edits extracted sentences using reduction to remove inessential phrases and combination to merge resulting phrases together as coherent sentences our work includes a statistically based sentence decomposition program that identifies where the phrases of a summary originate in the original document producing an aligned corpus of summaries and articles which we used to develop the summarizer there is a big gap between the summaries produced by current automatic summarizers and the abstracts written by human professionalscertainly one factor contributing to this gap is that automatic systems can not always correctly identify the important topics of an articleanother factor however which has received little attention is that automatic summarizers have poor text generation techniquesmost automatic summarizers rely on extracting key sentences or paragraphs from an article to produce a summarysince the extracted sentences are disconnected in the original article when they are strung together the resulting summary can be inconcise incoherent and sometimes even misleadingwe present a cut and paste based text summarization technique aimed at reducing the gap between automatically generated summaries and humanwritten abstractsrather than focusing on how to identify key sentences as do other researchers we study how to generate the text of a summary once key sentences have been extractedthe main idea of cut and paste summarization is to reuse the text in an article to generate the summaryhowever instead of simply extracting sentences as current summarizers do the cut and paste system will quotsmoothquot the extracted sentences by editing themsuch edits mainly involve cutting phrases and pasting them together in novel waysthe key features of this work areing operationswe identified six operations that can be used alone or together to transform extracted sentences into sentences in humanwritten abstractsthe operations were identified based on manual and automatic comparison of humanwritten abstracts and the original articlesexamples include sentence reduction sentence combination syntactic transformation and lexical paraphrasing development of an automatic system to perform cut and paste operationstwo operations sentence reduction and sentence combination are most effective in transforming extracted sentences into summary sentences that are as concise and coherent as in humanwritten abstractswe implemented a sentence reduction module that removes extraneous phrases from extracted sentences and a sentence combination module that merges the extracted sentences or the reduced forms resulting from sentence reductionour sentence reduction model determines what to cut based on multiple sources of information including syntactic knowledge context and statistics learned from corpus analysisit improves the conciseness of extracted sentences making them concise and on targetour sentence combination module implements combination rules that were identified by observing examples written by human professionalsit improves the coherence of extracted sentences sentencesthe cut and paste technique we propose here is a new computational model which we based on analysis of humanwritten abstractsto do this analysis we developed an automatic system that can match a phrase in a humanwritten abstract to the corresponding phrase in the article 
identifying its most likely locationthis decomposition program allows us to analyze the construction of sentences in a humanwritten abstractits results have been used to train and test the sentence reduction and sentence combination modulein section 2 we discuss the cut and paste technique in general from both a professional and computational perspectivewe also describe the six cut and paste operationsin section 3 we describe the system architecturethe major components of the system including sentence reduction sentence combination decomposition and sentence selection are described in section 4the evaluation results are shown in section 5related work is discussed in section 6finally we conclude and discuss future workprofessionals take two opposite positions on whether a summary should be produced by cutting and pasting the original textone school of scholars is opposed quot your own words do not keep too close to the words before youquot states an early book on abstracting for american high school students another study however shows that professional abstractors actually rely on cutting and pasting to produce summaries quottheir professional role tells abstractors to avoid inventing anythingthey follow the author as closely as possible and reintegrate the most important points of a document in a shorter textquot some studies are somewhere in between quotsummary language may or may not follow that of authorquot other guidelines or books on abstracting do not discuss the issueour cut and paste based summarization is a computational model we make no claim that humans use the same cut and paste operationswe manually analyzed 30 articles and their corresponding humanwritten summaries the articles and their summaries come from different domains and the summaries were written by professionals from different organizationswe found that reusing article text for summarization is almost universal in the corpus we studiedwe defined six operations that can be used alone sequentially or simultaneously to transform selected sentences from an article into the corresponding summary sentences in its humanwritten abstractremove extraneous phrases from a selected sentence as in the following example 1 the examples in this section were produced by human professionals document sentence when it arrives sometime next year in new tv sets the vchip will give parents a new and potentially revolutionary device to block out programs they do not want their children to seesummary sentence the vchip will give parents a device to block out programs they do not want their children to seethe deleted material can be at any granularity a word a phrase or a clausemultiple components can be removedmerge material from several sentencesit can be used together with sentence reduction as illustrated in the following example which also uses paraphrasing text sentence 1 but it also raises serious questions about the privacy of such highly personal information wafting about the digital worldtext sentence 2 the issue thus fits squarely into the broader debate about privacy and security on the internet whether it involves protecting credit card number or keeping children from offensive informationsummary sentence but it also raises the issue of privacy of such personal information and this issue hits the head on the nail in the broader debate about privacy and security on the internetin both sentence reduction and combination syntactic transformations may be involvedfor example the position of the subject in a sentence may be moved 
from the end to the frontreplace phrases with their paraphrasesfor instance the summaries substituted point out with note and fits squarely into with a more picturesque description hits the head on the nail in the previous examples generalization or specification replace phrases or clauses with more general or specific descriptionsexamples of generalization and specification include generalization quota proposed new law that would require web publishers to obtain parental consent before collecting personal information from childrenquot quotlegislation to protect children privacy onlinequot specification quotthe white house top drug officialquot 4 quotgen barry r mccaffrey the white house top drug officialquotchange the order of extracted sentencesfor instance place an ending sentence in an article at the beginning of an abstractin humanwritten abstracts there are of course sentences that are not based on cut and paste but completely written from scratchwe used our decomposition program to automatically analyze 300 humanwritten abstracts and found that 19 of sentences in the abstracts were written from scratchthere are also other cut and paste operations not listed here due to their infrequent occurrencethe architecture of our cut and paste based text summarization system is shown in figure 1input to the system is a single document from any domainin the first stage extraction key sentences in the article are identified as in most current summarizersin the second stage cut and paste based generation a sentence reduction module and a sentence combination module implement the operations we observed in humanwritten abstractsthe cut and paste based component receives as input not only the extracted key sentences but also the original articlethis component can be ported to other singledocument summarizers to serve as the generation component since most current summarizers extract key sentences exactly what the extraction module in our system doesother resources and tools in the summarization system include a corpus of articles and their humanwritten abstracts the automatic decomposition program a syntactic parser a coreference resolution system the wordnet lexical database and a largescale lexicon we combined from multiple resourcesthe components in dotted lines are existing tools or resources all the others were developed by ourselvesthe main focus of our work is on decomposition of summaries sentence reduction and sentence combinationwe also describe the sentence extraction module although it is not the main focus of our workthe decomposition program see for details is used to analyze the construction of sentences in humanwritten abstractsthe results from decomposition are used to build the training and testing corpora for sentence reduction and sentence combinationthe decomposition program answers three questions about a sentence in a humanwritten abstract is the sentence constructed by cutting and pasting phrases from the input article if so what phrases in the sentence come from the original article where in the article do these phrases come fromwe used a hidden markov model solution to the decomposition problemwe first mathematically formulated the problem reducing it to a problem of finding for each word in a summary sentence a document position that it most likely comes fromthe position of a word in a document is uniquely identified by the position of the sentence where the word appears and the position of the word within the sentencebased on the observation of cut and paste practice by 
humans we produced a set of general heuristic rulessample heuristic rules include two adjacent words in a summary sentence are most likely to come from two adjacent words in the original document adjacent words in a summary sentence are not very likely to come from sentences that are far apart in the original documentwe use these heuristic rules to create a hidden markov modelthe viterbi algorithm is used to efficiently find the most likely document position for each word in the summary sentencefigure 2 shows sample output of the programfor the given summary sentence the program correctly identified that the sentence was combined from four sentences in the input articleit also divided the summary sentence into phrases and pinpointed the exact document origin of each phrasea phrase in the summary sentence is annotated as where fnum is the sequential number of the phrase and snum is the number of the document sentence where the phrase comes fromsnum 1 means that the component does not come from the original documentthe phrases in the document sentences are annotated as the task of the sentence reduction module described in detail in is to remove extraneous phrases from extracted sentencesthe goal of reduction is to quotreduce without major lossquot that is we want to remove as many extraneous phrases as possible from an extracted sentence so that it can be concise but without detracting from the main idea that the sentence conveysideally we want to remove a phrase from an extracted sentence only if it is irrelavant to the main topicour reduction module makes decisions based on multiple sources of knowledge original sentence when it arrives sometime next year in new tv sets the vchip will give parents a new and potentially revolutionary device to block out programs they do not want their children to see by adding up the scores of its children nodes in the parse treethis score indicates how important the phrase is to the main topic in discussionthe phrases we remove from an extracted sentence include clauses prepositional phrases gerunds and toinfinitivesthe result of sentence reduction is a shortened version of an extracted sentence 2this shortened text can be used directly as a summary or it can be fed to the sentence combination module to be merged with other sentencesfigure 3 shows two examples produced by the reduction programthe corresponding sentences in humanwritten abstracts are also provided for comparison21t is actually also possible that the reduction program decides no phrase in a sentence should be removed thus the result of reduction is the same as the inputto build the combination module we first manually analyzed a corpus of combination examples produced by human professionals automatically created by the decomposition program and identified a list of combination operationstable 1 shows the combination operationsto implement a combination operation we need to do two things decide when to use which combination operation and implement the combining actionsto decide when to use which operation we analyzed examples by humans and manually wrote a set of rulestwo simple rules are shown in figure 4sample outputs using these two simple rules are shown in figure 5we are currently exploring using machine learning techniques to learn the combination rules from our corpusthe implementation of the combining actions involves joining two parse trees substituting a subtree with another or adding additional nodeswe implemented these actions using a formalism based on tree adjoining grammar the 
extraction module is the front end of the summarization system and its role is to extract key sentencesour method is primarily based on lexical relationsfirst we link words in a sentence with other words in the article through repetitions morphological relations or one of the lexical relations encoded in wordnet similar to step 2 in sentence reductionan importance score is computed for each word in a sentence based on the number of lexical links it has with other words the type of links and the directions of the linksafter assigning a score to each word in a sentence we then compute a score for a sentence by adding up the scores for each wordthis score is then normalif and then replace the phrase with the full name plus the full description if and and then merge the two sentences by removing the subject in the second sentence and then combining it with the first sentence using connective quotandquot ized over the number of words a sentence containsthe sentences with high scores are considered importantthe extraction system selects sentences based on the importance computed as above as well as other indicators including sentence positions cue phrases and tfidf scoresour evaluation includes separate evaluations of each module and the final evaluations of the overall systemwe evaluated the decomposition program by two experiments described in in the first experiment we selected 50 humanwritten abstracts consisting of 305 sentences in totala human subject then read the decomposition results of these sentences to judge whether they are correct938 of the sentences were correctly decomposedin the second experiment we tested the system in a summary alignment taskwe ran the decomposition program to identify the source document sentences that were used to construct the sentences in humanwritten abstractshuman subjects were also asked to select the document sentences that are semanticequivalent to the sentences in the abstractswe compared the set of sentences identified by the program with the set of sentences selected by the majority of human subjects which is used as the gold standard in the computation of precision and recallthe program achieved an average 815 precision 785 recall and 791 fmeasure for 10 documentsthe average performance of 14 human judges is 888 precision 844 recall and 857 fmeasurerecently we have also tested the system on legal documents and the program works well on those documents toothe evaluation of sentence reduction for details used a corpus of 500 sentences and their reduced forms in humanwritten abstracts400 sentences were used to compute corpus probabilities and 100 sentences were used for testingthe results show that 813 of the reduction decisions made by the system agreed with those of humansthe humans reduced the length of the 500 sentences by 442 on average and the system reduced the length of the 100 test sentences by 327the evaluation of sentence combination module is not as straightforward as that of decomposition or reduction since combination happens later in the pipeline and it depends on the output from prior the new measure is an echo of the original bad idea blurred just enough to cloud prospects both for enforcement and for court reviewunlike the 1996 act this one applies only to commercial web sites thus sidestepping conversation deemed quotindecentquot by somebody somewherethe new version also replaces the vague quotindecencyquot standard to which the court objected with the betterdefined one of material ruled quotharmful to minorsquot combined sentences 
the new measure is an echo of the original bad ideathe new version applies only to commercial web sites and replaces the vague quotindecencyquot standard with the betterdefined one of material ruled quotharmful to minorsquot modulesto evaluate just the combination component we assume that the system makes the same reduction decision as humans and the coreference system has a perfect performancethis involves manual tagging of some examples to prepare for the evaluation this preparation is in progressthe evaluation of sentence combination will focus on the accessment of combination rulesthe overall system evaluation includes both intrinsic and extrinsic evaluationin the intrinsic evaluation we asked human subjects to compare the quality of extractionbased summaries and their revised versions produced by our sentence reduction and combination moduleswe selected 20 documents three different automatic summarizers were used to generate a summary for each document producing 60 summaries in totalthese summaries are all extractionbasedwe then ran our sentence reduction and sentence combination system to revise the summaries producing a revised version for each summarywe presented human subjects with the full documents the extractionbased summaries and their revised versions and asked them to compare the extractionbased summaries and their revised versionsthe human subjects were asked to score the conciseness of the summaries based on a scale from 0 to 10 the higher the score the more concise a summary isthey were also asked to score the coherence of the summaries based on a scale from 0 to 10on average the extractionbased summaries have a score of 42 for conciseness while the revised summaries have a score of 79 the average improvement for the three systems are 78 105 and 88 respectivelythe revised summaries are on average 41 shorter than the original extractionbased summariesfor summary coherence the average score for the extractionbased summaries is 39 while the average score for the revised summaries is 61 the average improvement for the three systems are 69 57 and 53 respectivelywe are preparing a taskbased evaluation in which we will use the data from the summarization evaluation conference and compare how our revised summaries can influence humans performance in tasks like text categorization and adhoc retrieval addressed the problem of revising summaries to improve their qualitythey suggested three types of operations elimination aggregation and smoothingthe goal of the elimination operation is similar to that of the sentence reduction operation in our systemthe difference is that while elimination always removes parentheticals sentenceinitial pps and certain adverbial phrases for every extracted sentence our sentence reduction module aims to make reduction decisions according to each case and removes a sentence component only if it considers it appropriate to do sothe goal of the aggregation operation and the smoothing operation is similar to that of the sentence combination operation in our systemhowever the combination operations and combination rules that we derived from corpus analysis are significantly different from those used in the above system which mostly came from operations in traditional natural language generationthis paper presents a novel architecture for text summarization using cut and paste techniques observed in humanwritten abstractsin order to automatically analyze a large quantity of humanwritten abstracts we developed a decomposition programthe automatic 
decomposition allows us to build large corpora for studying sentence reduction and sentence combination which are two effective operations in cut and pastewe developed a sentence reduction module that makes reduction decisions using multiple sources of knowledgewe also investigated possible sentence combination operations and implemented the combination modulea sentence extraction module was developed and used as the front end of the summarization systemwe are preparing the taskbased evaluation of the overall systemwe also plan to evaluate the portability of the system by testing it on another corpuswe will also extend the system to querybased summarization and investigate whether the system can be modified for multiple document summarizationwe thank ibm for licensing us the esg parser and the mitre corporation for licensing us the coreference resolution systemthis material is based upon work supported by the national science foundation under grant noiri 9619124 and mi 9618797any opinions findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the national science foundation
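To make the decomposition step more concrete, here is a toy Viterbi alignment of summary words to document positions. The transition scores are invented stand-ins for the heuristic rules described above (exactly adjacent document words are preferred, and the penalty grows with the distance between sentences); the real program encodes such rules as HMM probabilities and, roughly speaking, reads summary phrases off runs of adjacent aligned positions.

def transition_score(prev, cur):
    """Stand-in for the heuristic rules: exactly adjacent document words score
    best, the penalty grows with sentence distance, and None means the summary
    word was not found in the document at all."""
    if prev is None or cur is None:
        return -2.0
    (ps, pw), (cs, cw) = prev, cur
    if ps == cs and cw == pw + 1:
        return 0.0
    return -1.0 - abs(cs - ps)

def decompose(summary_words, doc_sents):
    """Viterbi alignment of each summary word to a (sentence, word) position."""
    index = {}
    for s, sent in enumerate(doc_sents):
        for w, tok in enumerate(sent):
            index.setdefault(tok, []).append((s, w))
    # candidate states per summary word: matching positions, plus "not from document"
    states = [index.get(tok, []) + [None] for tok in summary_words]

    best = [{st: (0.0, None) for st in states[0]}]
    for i in range(1, len(summary_words)):
        layer = {}
        for cur in states[i]:
            layer[cur] = max(
                ((best[i - 1][prev][0] + transition_score(prev, cur), prev)
                 for prev in states[i - 1]),
                key=lambda cand: cand[0],
            )
        best.append(layer)

    # backtrace from the best final state
    cur = max(best[-1], key=lambda st: best[-1][st][0])
    path = [cur]
    for i in range(len(summary_words) - 1, 0, -1):
        cur = best[i][cur][1]
        path.append(cur)
    return list(zip(summary_words, reversed(path)))

doc = [["the", "vchip", "will", "give", "parents", "a", "device"],
       ["parents", "can", "block", "programs"]]
print(decompose(["the", "vchip", "will", "block", "programs"], doc))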
A00-2024
trainable methods for surface natural language generationwe present three systems for surface natural language generation that are trainable from annotated corporathe first two systems called nlg1 and nlg2 require a corpus marked only with domainspecific semantic attributes while the last system called nlg3 requires a corpus marked with both semantic attributes and syntactic dependency informationall systems attempt to produce a grammatical natural language phrase from a domainspecific semantic representationnlg1 serves as a baseline system and uses phrase frequencies to generate a whole phrase in one step while nlg2 and nlg3 use maximum entropy probability models to individually generate each word in the phrasethe systems nlg2 and nlg3 learn to determine both the word choice and the word order of the phrasewe present experiments in which we generate phrases to describe flights in the air travel domainwe use maximum entropy models to drive generation with word bigram or dependency representations taking into account semantic featureswe use a large collection of generation templates for surface realizationwe present maximum entropy models to learn attribute ordering and lexical choice for sentence generation from a semantic representation of attributevalue pairs restricted to an air travel domain
trainable methods for surface natural language generation we present three systems for surface natural language generation that are trainable from annotated corpora the first two systems called nlg1 and nlg2 require a corpus marked only with domainspecific semantic attributes while the last system called nlg3 requires a corpus marked with both semantic attributes and syntactic dependency information all systems attempt to produce a grammatical natural language phrase from a domainspecific semantic representation nlg1 serves a baseline system and uses phrase frequencies to generate a whole phrase in one step while nlg2 and nlg3 use maximum entropy probability models to individually generate each word in the phrase the systems nlg2 and nlg3 learn to determine both the word choice and the word order of the phrase we present experiments in which we generate phrases to describe flights in the air travel domain this paper presents three trainable systems for surface natural language generation surface nlg for our purposes consists of generating a grammatical natural language phrase that expresses the meaning of an input semantic representationthe systems take a quotcorpusbasedquot or quotmachinelearningquot approach to surface nlg and learn to generate phrases from semantic input by statistically analyzing examples of phrases and their corresponding semantic representationsthe determination of the content in the semantic representation or quotdeepquot generation is not discussed hereinstead the systems assume that the input semantic representation is fixed and only deal with how to express it in natural languagethis paper discusses previous approaches to surface nlg and introduces three trainable systems for surface nlg called nlg1 nlg2 and nlg3quantitative evaluation of experiments in the air travel domain will also be discussedtemplates are the easiest way to implement surface nlga template for describing a flight noun phrase in the air travel domain might be flight departing from cityfr at timedep and arriving in cityto at timearr where the words starting with quotquot are actually variables representing the departure city and departure time the arrival city and the arrival time respectively whose values will be extracted from the environment in which the template is usedthe approach of writing individual templates is convenient but may not scale to complex domains in which hundreds or thousands of templates would be necessary and may have shortcomings in maintainability and text quality for a discussionthere are more sophisticated surface generation packages such as fufsurge kpml mumble and realpro which produce natural language text from an abstract semantic representationthese packages require linguistic sophistication in order to write the abstract semantic representation but they are flexible because minor changes to the input can accomplish major changes to the generated textthe only trainable approaches to surface generation are the purely statistical machine translation systems such as and the corpusbased generation system described in the mt systems of learn to generate text in the target language straight from the source language without the aid of an explicit semantic representationin contrast uses corpusderived statistical knowledge to rank plausible hypotheses from a grammarbased surface generation componentin trainable surface nlg the goal is to learn the mapping from semantics to words that would otherwise need to be specified in a grammar or knowledge baseall systems in this 
paper use attributevalue pairs as a semantic representation which suffice as a representation for a limited domain like air travelfor example the set of attributevalue pairs cityfr new york city cityto seattle timedep 6 am datedep wednesday represent the meaning of the noun phrase quota flight to seattle that departs from new york city at 6 am on wednesdayquotthe goal more specifically is then to learn the optimal attribute ordering and lexical choice for the text to be generated from the attributevalue pairsfor example the nlg system should automatically decide if the attribute ordering in quotflights to new york in the eveningquot is better or worse than the ordering in quotflights in the evening to new yorkquotfurthermore it should automatically decide if the lexical choice in quotflights departing to new yorkquot is better or worse than the choice in quotflights leaving to new yorkquotthe motivation for a trainable surface generator is to solve the above two problems in a way that reflects the observed usage of language in a corpus but without the manual effort needed to construct a grammar or knowledge baseall the trainable nlg systems in this paper assume the existence of a large corpus of phrases in which the values of interest have been replaced with their corresponding attributes or in other words a corpus of generation templatesfigure 1 shows a sample of training data where only words marked with a quot8quot are attributesall of the nlg systems in this paper work in two steps as shown in table 2the systems nlg1 nlg2 and nlg3 all implement step 1 they produce a sequence of words intermixed with attributes ie a template from the the attributes alonethe values are ignored until step 2 when they replace their corresponding attributes in the phrase produced by step 1the surface generation model nlg1 simply chooses the most frequent template in the training data that corresponds to a given set of attributesits performance is intended to serve as a baseline result to the more sophisticated models discussed laterspecifically nlgi returns the phrase that corresponds to the attribute set a empty string ta where ta are the phrases that have occurred with a in the training data and where c is the training data frequency of the natural language phrase phrase and the set of attributes a nlg1 will fail to generate anything if a is a novel combination of attributesthe surface generation system nlg2 assumes that the best choice to express any given attributevalue set is the word sequence with the highest probability that mentions all of the input attributes exactly oncewhen generating a word it uses local information captured by word ngrams together with certain nonlocal information namely the subset of the original attributes that remain to be generatedthe local and nonlocal information is integrated with use of features in a maximum entropy probability model and a highly pruned search procedure attempts to find the best scoring word sequence according to the modelthe probability model in nlg2 is a conditional distribution over v you stop where v is the generation vocabulary and where stop is a special quotstopquot symbolthe generation vocabulary v consists of all the words seen in the training datathe form of the maximum entropy probability model is identical to the one used in where wi ranges over v you stop and wii wi2 attri is the history where wi denotes the ith word in the phrase and attri denotes the attributes that remain to be generated at position i in the phrasethe h where f3 e 01 are 
called features and capture any information in the history that might be useful for estimating pthe features used in nlg2 are described in the next section and the feature weights ai obtained from the improved iterative scaling algorithm are set to maximize the likelihood of the training datathe probability of the sequence w wn given the attribute set a is the feature patterns used in nlg2 are shown in table 3the actual features are created by matching the patterns over the training data eg an actual feature derived from the word higram template might be f01 if w from and wii flight and city fr e attri otherwise input to step 1 1 cityfr cityto timedep datedep 1 output of step 1 quota flight to cityto that departs from cityfr at timedep on datedepquot input to step 2 quota flight to cityto that departs from cityfr at timedep on datedepquot cityfr new york city cityto seattle timedep 6 am datedep wednesday output of step 2 quota flight to seattle that departs from new york city at 6 am on wednesdayquot low frequency features involving word ngrams tend to be unreliable the nlg2 system therefore only uses features which occur k times or more in the training datathe search procedure attempts to find a word sequence wi wn of any length n m for the input attribute set a such that and where m is an heuristically set maximum phrase lengththe search is similar to a lefttoright breadthfirstsearch except that only a fraction of the word sequences are consideredmore specifically the search procedure implements the recurrence vvna top 147mi11 top the set wni is the top n scoring sequences of length i and the expression next returns all sequences w1 w1 such that wi w e wn and wifi e v you stopthe expression top finds the top n sequences in nextduring the search any sequence that ends with stop is removed and placed in the set of completed sequencesif n completed hypotheses are discovered or if wnm is computed the search terminatesany incomplete sequence which does not satisfy condition is discarded and any complete sequence that does not satisfy condition is also discardedwhen the search terminates there will be at most n completed sequences of possibly differing lengthscurrently there is no normalization for different lengths ie all sequences of length n m are equiprobable nlg2 chooses the best answer to express the attribute set a as follows where wnig2 are the completed word sequences that satisfy the conditions of the nlg2 search described abovenlg3 addresses a shortcoming of nlg2 namely that the previous two words are not necessarily the best informants when predicting the next wordinstead nlg3 assumes that conditioning on syntactically related words in the history will result on more accurate surface generationthe search procedure in nlg3 generates a syntactic dependency tree from toptobottom instead of a word sequence from lefttoright where each word is predicted in the context of its syntactically related parent grandparent and siblingsnlg3 requires a corpus that has been annotated with tree structure like the sample dependency tree shown in figure 1the probability model for nlg3 shown in figure 2 conditions on the parent the two closest siblings the direction of the child relative to the parent and the attributes that remain to be generatedjust as in nlg2 p is a distribution over v you stop and the improved iterative scaling algorithm is used to find the feature weights a3the expression chi denotes the ith closest child to the headword w par denotes the parent of the headword w dir e left right 
denotes the direction of the child relative to the parent and attri denotes the attributes that remain to be generated in the tree when headword w is predicting its ith childfor example in figure 1 if w quotflightsquot then chi quoteveningquot when generating the left children and chl quotfromquot when generating the right childrenas shown in figure 3 the probability of a dependency tree that expresses an attribute set a can be found by computing for each word in the tree the probability of generating its left children and then its right childrenin this formulation the left children are generated independently from the right childrenas in nlg2 nlg3 assumes the uniform distribution for the length probabilities pr and pr up to a certain maximum length m 10the feature patterns for nlg3 are shown in table 4as before the actual features are created by matching the patterns over the training datathe features in nlg3 have access to syntactic information whereas the features in nlg2 do notlow frequency features involving word ngrams tend to be unreliable the nlg3 system therefore only uses features which occur k times or more in the training datafurthermore if a feature derived from table 4 looks at a particular word chi and attribute a we only allow it if a has occurred as a descendent of 1we use a dummy root node to generate the top most head word of the phrase chi in some dependency tree in the training setas an example this condition allows features that look at chi quottoquot and citytoe attri but disallows features that look at chi quottoquot and cityfre the idea behind the search procedure for nlg3 is similar to the search procedure for nlg2 namely to explore only a fraction of the possible trees by continually sorting and advancing only the top n trees at any given pointhowever the dependency trees are not built lefttoright like the word sequences in nlg2 instead they are built from the current head in the following order as before any incomplete trees that have generated a particular attribute twice as well as completed trees that have not generated a necessary attribute are discarded by the searchthe search terminates when either n complete trees or n trees of the maximum length m are discoverednlg3 chooses the best answer to express the attribute set a as follows where tn193 are the completed dependency trees that satisfy the conditions of the nlg3 search described abovethe training and test sets used to evaluate nlg1 nlg2 and nlg3 were derived semiautomatically from a preexisting annotated corpus of user queries in the air travel domainthe annotation scheme used a total of 26 attributes to represent flightsthe training set consisted of 6000 templates describing flights while the test set consisted of 1946 templates describing flightsall systems used the same training set and were tested on the attribute sets extracted from the phrases in the test setfor example if the test set contains the template quotflights to cityto leaving at timedepquot the surface generation systems will be told to generate a phrase for the attribute set cityto timedep the output of nlg3 on the attribute set cityto cityfr timedep is shown in table 9there does not appear to be an objective automatic evaluation method2 for generated text that correlates with how an actual person might judge the outputtherefore two judges the author and a colleague manually evaluated the output of all three systemseach judge assigned each phrase from each of the three systems one of the following rankings ok tense or agreement is 
The training and test sets used to evaluate NLG1, NLG2 and NLG3 were derived semi-automatically from a pre-existing annotated corpus of user queries in the air travel domain. The annotation scheme used a total of 26 attributes to represent flights. The training set consisted of 6000 templates describing flights, while the test set consisted of 1946 templates describing flights. All systems used the same training set and were tested on the attribute sets extracted from the phrases in the test set. For example, if the test set contains the template "flights to cityto leaving at timedep", the surface generation systems will be told to generate a phrase for the attribute set {cityto, timedep}. The output of NLG3 on the attribute set {cityto, cityfr, timedep} is shown in Table 9, reproduced below.

There does not appear to be an objective automatic evaluation method for generated text that correlates with how an actual person might judge the output. (Measuring word overlap or edit distance between the system output and a "reference" set would be an automatic scoring method; we believe that such a method does not accurately measure the correctness or grammaticality of the text.) Therefore, two judges, the author and a colleague, manually evaluated the output of all three systems. Each judge assigned each phrase from each of the three systems one of the following rankings: Correct; OK (tense or agreement is wrong, but word choice is correct); Bad (words are missing or extraneous words are present); No Output (the system failed to produce any output). While there were a total of 1946 attribute sets from the test examples, the judges only needed to evaluate the 190 unique attribute sets; e.g., the attribute set {cityfr, cityto} occurs 741 times in the test data. Subjective evaluation of generation output is not ideal, but it is arguably superior to an automatic evaluation that fails to correlate with human linguistic judgement.

The results of the manual evaluation, as well as the values of the search and feature selection parameters for all systems, are shown in Tables 5, 6, 7 and 8. The weighted results in Tables 5 and 6 account for multiple occurrences of attribute sets, whereas the unweighted results in Tables 7 and 8 count each unique attribute set once; i.e., {cityfr, cityto} is counted 741 times in the weighted results but once in the unweighted results. Using the weighted results, which represent testing conditions more realistically than the unweighted results, both judges found an improvement from NLG1 to NLG2 and from NLG2 to NLG3. NLG3 cuts the error rate from NLG1 by at least 33%; NLG2 cuts the error rate by at least 22% and underperforms NLG3, but requires far less annotation in its training data. NLG1 has no chance of generating anything for 3% of the data; it fails completely on novel attribute sets. Using the unweighted results, both judges found an improvement from NLG1 to NLG2, but, surprisingly, Judge A found a slight decrease while Judge B found an increase in accuracy from NLG2 to NLG3. The unweighted results show that the baseline NLG1 does well on the common attribute sets, since it correctly generates less than 50% of the unweighted cases but over 80% of the weighted cases. The NLG2 and NLG3 systems automatically attempt to generalize from the knowledge inherent in the training corpus of templates, so that they can generate templates for novel attribute sets.

Table 9: sample output from NLG3 on the attribute set {cityto, cityfr, timedep}. Typical values for the attributes: timedep = "10 am", cityfr = "new york", cityto = "miami".
  0.107582    "timedep flights from cityfr to cityto"
  0.00822441  "timedep flights between cityfr and cityto"
  0.00564712  "timedep flights cityfr to cityto"
  0.00343372  "flights from cityfr to cityto at timedep"
  0.0012465   "timedep flights from cityfr to to cityto"

There is some additional cost associated with producing the syntactic dependency annotation necessary for NLG3, but virtually no additional cost is associated with NLG2, beyond collecting the data itself and identifying the attributes.
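The weighted versus unweighted distinction above only changes how often each unique attribute set is counted. Here is a small sketch, with invented argument names and without the actual judgements, of how the two accuracy figures could be computed:

```python
from collections import Counter

def weighted_and_unweighted_accuracy(test_attribute_sets, is_acceptable):
    """test_attribute_sets: one frozenset per test phrase, so a frequent set such
    as {"cityfr", "cityto"} may appear 741 times.
    is_acceptable: maps a unique attribute set to True if the judge accepted the
    system's output for it, else False."""
    counts = Counter(test_attribute_sets)        # occurrences of each unique set

    # Weighted: every occurrence in the test data counts.
    weighted_correct = sum(freq for attrs, freq in counts.items() if is_acceptable(attrs))
    weighted_acc = weighted_correct / sum(counts.values())

    # Unweighted: each unique attribute set counts exactly once.
    unweighted_correct = sum(1 for attrs in counts if is_acceptable(attrs))
    unweighted_acc = unweighted_correct / len(counts)

    return weighted_acc, unweighted_acc
```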
The trainable surface NLG systems in this paper differ from grammar-based systems in how they determine the attribute ordering and lexical choice. NLG2 and NLG3 automatically determine attribute ordering by simultaneously searching multiple orderings; in grammar-based approaches, such preferences need to be manually encoded. NLG2 and NLG3 solve the lexical choice problem by learning the words that correlate with a given attribute and local context, whereas [citation] uses a rule-based approach to decide the word choice. While trainable approaches avoid the expense of crafting a grammar to determine attribute ordering and lexical choice, they are less accurate than grammar-based approaches. For short phrases, accuracy is typically 100% with grammar-based approaches, since the grammar writer can either correct or add a rule to generate the phrase of interest once an error is detected, whereas with NLG2 and NLG3 one can tune the feature patterns, search parameters and training data itself, but there is no guarantee that the tuning will result in 100% generation accuracy.

Our approach differs from the corpus-based surface generation approaches of [citation] and [citation]. The former maps from semantics to words with a concept ontology, grammar and lexicon, and ranks the resulting word lattice with corpus-based statistics, whereas NLG2 and NLG3 automatically learn the mapping from semantics to words from a corpus. The latter describes a statistical machine translation approach that generates text in the target language directly from the source text; NLG2 and NLG3 are also statistical learning approaches, but generate from an actual semantic representation. This comparison suggests that statistical MT systems could also generate text from an "interlingua", in a way similar to that of knowledge-based translation systems.

We suspect that our statistical generation approach should perform accurately in domains of similar complexity to air travel. In the air travel domain, the length of the phrase fragment needed to describe an attribute is usually only a few words. Domains which require complex and lengthy phrase fragments to describe a single attribute will be more challenging to model with features that only look at word n-grams for n ∈ {2, 3}. Domains in which there is greater ambiguity in word choice will require a more thorough search, i.e., a larger value of N, at the expense of CPU time and memory. Most importantly, the semantic annotation scheme for air travel has the property that it is both rich enough to accurately represent meaning in the domain and simple enough to yield useful corpus statistics. Our approach may not scale to domains, such as freely occurring newspaper text, in which the semantic annotation schemes do not have this property.

Our current approach has the limitation that it ignores the values of attributes, even though they might strongly influence the word order and word choice. This limitation can be overcome by using features on values, so that NLG2 and NLG3 might discover, to use a hypothetical example, that "flights leaving cityfr" is preferred over "flights from cityfr" when cityfr takes a particular value such as "miami" (a small illustrative sketch of such a value-sensitive feature is given below).

This paper presents the first systems that use a statistical learning approach to produce natural language text directly from a semantic representation. Information to solve the attribute ordering and lexical choice problems, which would normally be specified in a large hand-written grammar, is automatically collected from data with a few feature patterns and is combined via the maximum entropy framework. NLG2 shows that using just local n-gram information can outperform the baseline, and NLG3 shows that using syntactic information can further improve generation accuracy. We conjecture that NLG2 and NLG3 should work in other domains which have a complexity similar to air travel, as well as available annotated data.

The author thanks Scott McCarley for serving as the second judge, and Scott Axelrod, Kishore Papineni and Todd Ward for their helpful comments on this work. This work was supported in part by DARPA contract MDA97297C0012.
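Picking up the features-on-values idea from the discussion above, here is a small, purely illustrative Python sketch of what a value-sensitive feature might look like. The predicate and the conditioning information are assumptions for the example; such features are not part of the NLG2/NLG3 feature sets described in this work.

```python
def f_value_sensitive(w_i, w_prev, remaining_attrs, attr_values):
    """Hypothetical feature that fires when the next word is "leaving", the
    cityfr attribute still needs to be expressed, and its value is the specific
    string "miami". A weight learned for this feature would let the model prefer
    "flights leaving cityfr" over "flights from cityfr" for that value."""
    return 1 if (w_i == "leaving"
                 and "cityfr" in remaining_attrs
                 and attr_values.get("cityfr", "").lower() == "miami") else 0

# Example call with invented inputs:
# f_value_sensitive("leaving", "flights", {"cityfr", "timedep"},
#                   {"cityfr": "Miami", "timedep": "10 am"})   # -> 1
```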
A00-2026
"a novel use of statistical parsing to extract information from textsince 1995 a few statistical par(...TRUNCATED)
"a novel use of statistical parsing to extract information from text since 1995 a few statistical pa(...TRUNCATED)
A00-2030
"assigning function tags to parsed textit is generally recognized that the common nonterminal labels(...TRUNCATED)
"assigning function tags to parsed text and vincent j della pietra 1996 a maximum entropy approach t(...TRUNCATED)
A00-2031
