{ "paper_id": "D09-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:40:18.737124Z" }, "title": "Automatically Evaluating Content Selection in Summarization without Human Models", "authors": [ { "first": "Annie", "middle": [], "last": "Louis", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": {} }, "email": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": {} }, "email": "nenkova@seas.upenn.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a fully automatic method for content selection evaluation in summarization that does not require the creation of human model summaries. Our work capitalizes on the assumption that the distribution of words in the input and an informative summary of that input should be similar to each other. Results on a large scale evaluation from the Text Analysis Conference show that input-summary comparisons are very effective for the evaluation of content selection. Our automatic methods rank participating systems similarly to manual model-based pyramid evaluation and to manual human judgments of responsiveness. The best feature, Jensen-Shannon divergence, leads to a correlation as high as 0.88 with manual pyramid and 0.73 with responsiveness evaluations.", "pdf_parse": { "paper_id": "D09-1032", "_pdf_hash": "", "abstract": [ { "text": "We present a fully automatic method for content selection evaluation in summarization that does not require the creation of human model summaries. Our work capitalizes on the assumption that the distribution of words in the input and an informative summary of that input should be similar to each other. Results on a large scale evaluation from the Text Analysis Conference show that input-summary comparisons are very effective for the evaluation of content selection. Our automatic methods rank participating systems similarly to manual model-based pyramid evaluation and to manual human judgments of responsiveness. The best feature, Jensen-Shannon divergence, leads to a correlation as high as 0.88 with manual pyramid and 0.73 with responsiveness evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The most commonly used evaluation method for summarization during system development and for reporting results in publications is the automatic evaluation metric ROUGE (Lin, 2004; Lin and Hovy, 2003) . ROUGE compares system summaries against one or more model summaries by computing n-gram word overlaps between the two. The wide adoption of such automatic measures is understandable because they are convenient and greatly reduce the complexity of evaluations. 
ROUGE scores also correlate well with manual evaluations of content based on comparison with a single model summary, as used in the early editions of the Document Understanding Conferences (Over et al., 2007) .", "cite_spans": [ { "start": 168, "end": 179, "text": "(Lin, 2004;", "ref_id": "BIBREF8" }, { "start": 180, "end": 199, "text": "Lin and Hovy, 2003)", "ref_id": "BIBREF6" }, { "start": 651, "end": 670, "text": "(Over et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our work, we take the idea of automatic evaluation to an extreme and explore the feasibility of developing a fully automatic evaluation method for content selection that does not make use of human model summaries at all. To this end, we show that evaluating summaries by comparing them with the input obtains good correlations with manual evaluations for both query focused and update summarization tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our results have important implications for future development of summarization systems and their evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "duced with the fully automatic method and manual evaluations show that the new evaluation measures can be used during system development when human model summaries are not available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "High correlations between system ranking pro-", "sec_num": null }, { "text": "Our results provide validation of several features that can be optimized in the development of new summarization systems when the objective is to improve content selection on average, over a collection of test inputs. However, none of the features is consistently predictive of good summary content for individual inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "High correlations between system ranking pro-", "sec_num": null }, { "text": "We find that content selection performance on standard test collections can be approximated well by the proposed fully automatic method. This result greatly underlines the need to require linguistic quality evaluations alongside content selection ones in future evaluations and research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "High correlations between system ranking pro-", "sec_num": null }, { "text": "Proposals for developing fully automatic methods for summary evaluation have been put forward in the past. Their attractiveness is obvious for large scale evaluations, or for evaluation on nonstandard test sets for which human models are not available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model-free methods for evaluation", "sec_num": "2" }, { "text": "For example in Radev et al. (2003) , a large scale fully automatic evaluation of eight summarization systems on 18,000 documents was performed without any human effort. A search engine was used to rank documents according to their relevance to a given query. The summaries for each document were also ranked for relevance with respect to the same query. For good summarization systems, the relevance ranking of summaries is expected to be similar to that of the full documents. Based on this intuition, the correlation between relevance rankings of summaries and original documents was used to compare the different systems. 
The approach was motivated by the assumption that the distribution of terms in a good summary is similar to the distribution of terms in the original document.", "cite_spans": [ { "start": 15, "end": 34, "text": "Radev et al. (2003)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Model-free methods for evaluation", "sec_num": "2" }, { "text": "Even earlier, Donaway et al. (2000) suggested that there are considerable benefits to be had in adopting model-free methods of evaluation involving direct comparisons between the original document and its summary. The motivation for their work was the considerable variation in content selection choices in model summaries (Rath et al., 1961) . The identity of the model writer significantly affects summary evaluations (also noted by McKeown et al. (2001) , Jing et al. (1998) ) and evaluations of the same systems can be rather different when different models are used. In their experiments, Donaway et al. (2000) demonstrated that the correlations between manual evaluation using a model summary and a) manual evaluation using a different model summary b) automatic evaluation by directly comparing input and summary 1 , are the same. Their conclusion was that such automatic methods should be seriously considered as an alternative to model based evaluation.", "cite_spans": [ { "start": 14, "end": 35, "text": "Donaway et al. (2000)", "ref_id": "BIBREF2" }, { "start": 323, "end": 342, "text": "(Rath et al., 1961)", "ref_id": "BIBREF14" }, { "start": 435, "end": 456, "text": "McKeown et al. (2001)", "ref_id": "BIBREF9" }, { "start": 459, "end": 477, "text": "Jing et al. (1998)", "ref_id": "BIBREF3" }, { "start": 594, "end": 615, "text": "Donaway et al. (2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Model-free methods for evaluation", "sec_num": "2" }, { "text": "In this paper, we present a comprehensive study of fully automatic summary evaluation without any human models. A summary's content is judged for quality by directly estimating its closeness to the input. We compare several probabilistic and information-theoretic approaches for characterizing the similarity and differences between input and summary content. A simple informationtheoretic measure, Jensen Shannon divergence between input and summary, emerges as the best fea-ture. System rankings produced using this measure lead to correlations as high as 0.88 with human judgements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model-free methods for evaluation", "sec_num": "2" }, { "text": "Two types of summaries, query-focused and update summaries, were evaluated in the summarization track of the 2008 Text Analysis Conference (TAC) 2 . Query-focused summaries were produced from input documents in response to a stated user information need. The update summaries require more sophistication: two sets of articles on the same topic are provided. The first set of articles represents the background of a story and users are assumed to be already familiar with the information contained in them. The update task is to produce a multi-document summary from the second set of articles that can serve as an update to the user. 
This task is reminiscent of the novelty detection task explored at TREC (Soboroff and Harman, 2005) .", "cite_spans": [ { "start": 706, "end": 733, "text": "(Soboroff and Harman, 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Query-focused and Update Summaries", "sec_num": "3.1" }, { "text": "The test set for the TAC 2008 summarization task contains 48 inputs. Each input consists of two sets of 10 documents each, called docsets A and B. Both A and B are on the same general topic but B contains documents published later than those in A. In addition, the user's information need associated with each input is given by a query statement consisting of a title and narrative. An example query statement is shown below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "Title: Airbus A380 Narrative: Describe developments in the production and launch of the Airbus A380. A system must produce two summaries: (1) a query-focused summary of docset A, (2) a compilation of updates from docset B, assuming that the user has read all the documents in A. The maximum length for both types of summaries is 100 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "There were 57 participating systems in TAC 2008. We use the summaries and evaluations of these systems for the experiments reported in the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "Both manual and automatic evaluations were conducted at NIST to assess the quality of summaries produced by the systems. Pyramid evaluation: The pyramid evaluation method (Nenkova and Passonneau, 2004) has been developed for reliable and diagnostic assessment of content selection quality in summarization and has been used in several large scale evaluations (Nenkova et al., 2007) . It uses multiple human models from which annotators identify semantically defined Summary Content Units (SCU). Each SCU is assigned a weight equal to the number of human model summaries that express that SCU. An ideal maximally informative summary would express a subset of the most highly weighted SCUs, with multiple maximally informative summaries being possible. The pyramid score for a system summary is equal to the ratio between the sum of weights of SCUs expressed in a summary (again identified manually) and the sum of weights of an ideal summary with the same number of SCUs. Four human summaries provided by NIST for each input and task were used for the pyramid evaluation at TAC. Responsiveness evaluation: Responsiveness of a summary is a measure of overall quality combining both content selection and linguistic quality: summaries must present useful content in a structured fashion in order to better satisfy the user's need. Assessors directly assigned scores on a scale of 1 (poor summary) to 5 (very good summary) to each summary. These assessments are done without reference to any model summaries. The (Spearman) correlation between the pyramid and responsiveness metrics is high but not perfect: 0.88 and 0.92 respectively for query focused and update summarization. ROUGE evaluation: NIST also evaluated the summaries automatically using ROUGE (Lin, 2004; Lin and Hovy, 2003) . Comparison between a summary and the set of four model summaries is computed using unigram (R1) and bigram overlaps (R2) 3 . 
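To make the n-gram comparison concrete, the sketch below computes a simplified unigram and bigram recall overlap against a set of model summaries. It is only an illustration in the spirit of ROUGE-N, not the official ROUGE package (which adds options such as stemming, stopword handling and jackknifing over models); the function names and toy data are ours.

```python
# Simplified illustration of multi-reference n-gram recall in the spirit of ROUGE-N.
# Not the official ROUGE package; names and details here are illustrative only.
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_recall(summary_tokens, model_token_lists, n=1):
    """Clipped n-gram matches against all models, divided by total model n-grams."""
    summ_counts = ngrams(summary_tokens, n)
    matched, total = 0, 0
    for model in model_token_lists:
        model_counts = ngrams(model, n)
        total += sum(model_counts.values())
        matched += sum(min(c, summ_counts[g]) for g, c in model_counts.items())
    return matched / total if total else 0.0

# Toy usage: unigram (R1-style) and bigram (R2-style) recall.
summary = "airbus launched the a380 superjumbo".split()
models = ["airbus launched the a380 in 2005".split(),
          "the a380 superjumbo was launched by airbus".split()]
print(ngram_recall(summary, models, n=1), ngram_recall(summary, models, n=2))
```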
The correlations between ROUGE and manual evaluations is shown in Table 1 and varies between 0.80 and 0.94. Linguistic quality evaluation: Assessors scored summaries on a scale from 1 (very poor) to 5 (very good) for five factors of linguistic quality: grammaticality, non-redundancy, referential clarity, focus, structure and coherence.", "cite_spans": [ { "start": 171, "end": 201, "text": "(Nenkova and Passonneau, 2004)", "ref_id": "BIBREF10" }, { "start": 359, "end": 381, "text": "(Nenkova et al., 2007)", "ref_id": "BIBREF11" }, { "start": 1753, "end": 1764, "text": "(Lin, 2004;", "ref_id": "BIBREF8" }, { "start": 1765, "end": 1784, "text": "Lin and Hovy, 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 1978, "end": 1985, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Evaluation metrics", "sec_num": "3.3" }, { "text": "We do not make use of any of the linguistic quality evaluations. Our work focuses on fully automatic evaluation of content selection, so manual pyramid and responsiveness scores are used for comparison with our automatic method. The pyramid metric measures content selection exclusively, while responsiveness incorporates at least some aspects of linguistic quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation metrics", "sec_num": "3.3" }, { "text": "We describe three classes of features to compare input and summary content: distributional similarity, summary likelihood and use of topic signatures. Both input and summary words were stopword filtered and stemmed before computing the features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for content evaluation", "sec_num": "4" }, { "text": "Measures of similarity between two probability distributions are a natural choice for the task at hand. One would expect good summaries to be characterized by low divergence between probability distributions of words in the input and summary, and by high similarity with the input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "We experimented with three common measures: KL and Jensen Shannon divergence and cosine similarity. These three metrics have already been applied for summary evaluation, albeit in different contexts. In Lin et al. (2006) , KL and JS divergences between human and machine summary distributions were used to evaluate content selection. The study found that JS divergence always outperformed KL divergence. Moreover, the performance of JS divergence was better than standard ROUGE scores for multi-document summarization when multiple human models were used for the comparison.", "cite_spans": [ { "start": 203, "end": 220, "text": "Lin et al. (2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "The use of cosine similarity in Donaway et al. (2000) is more directly related to our work. They show that the difference between evaluations based on two different human models is about the same as the difference between system ranking based on one model summary and the ranking produced using input-summary similarity. Inputs and summaries were compared using only one metric: cosine similarity. Kullback Leibler (KL) divergence: The KL divergence between two probability distributions P and Q is given by", "cite_spans": [ { "start": 32, "end": 53, "text": "Donaway et al. 
(2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D(P ||Q) = w pP (w) log 2 pP (w) pQ(w)", "eq_num": "(1)" } ], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "It is defined as the average number of bits wasted by coding samples belonging to P using another distribution Q, an approximate of P . In our case, the two distributions are those for words in the input and summary respectively. Since KL divergence is not symmetric, both input-summary and summary-input divergences are used as features. In addition, the divergence is undefined when p P (w) > 0 but p Q (w) = 0. We perform simple smoothing to overcome the problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w) = C + \u03b4 N + \u03b4 * B", "eq_num": "(2)" } ], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "Here C is the count of word w and N is the number of tokens; B = 1.5|V |, where V is the input vocabulary and \u03b4 was set to a small value of 0.0005 to avoid shifting too much probability mass to unseen events. Jensen Shannon (JS) divergence: The JS divergence incorporates the idea that the distance between two distributions cannot be very different from the average of distances from their mean distribution. It is formally defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J(P ||Q) = 1 2 [D(P ||A) + D(Q||A)],", "eq_num": "(3)" } ], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "where A = P +Q 2 is the mean distribution of P and Q. In contrast to KL divergence, the JS distance is symmetric and always defined. We use both smoothed and unsmoothed versions of the divergence as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "Similarity between input and summary: The third metric is cosine overlap between the tf * idf vector representations (with max-tf normalization) of input and summary contents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cos\u03b8 = vinp.vsumm ||vinp||||vsumm||", "eq_num": "(4)" } ], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "We compute two variants:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "1. Vectors contain all words from input and summary 2. Vectors contain only topic signatures from the input and all words of the summary Topic signatures are words highly descriptive of the input, as determined by the application of loglikelihood test (Lin and Hovy, 2000) . 
Using only topic signatures from the input to represent text is expected to be more accurate because the reduced vector has fewer dimensions compared with using all the words from the input.", "cite_spans": [ { "start": 252, "end": 272, "text": "(Lin and Hovy, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Similarity", "sec_num": "4.1" }, { "text": "The likelihood of a word appearing in the summary is approximated as being equal to its probability in the input. We compute both a summary's unigram probability as well as its probability under a multinomial model. Unigram summary probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary likelihood", "sec_num": "4.2" }, { "text": "(pinpw1) n 1 (pinpw2) n 2 ...(pinpwr) nr (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary likelihood", "sec_num": "4.2" }, { "text": "where p inp w i is the probability in the input of word w i , n i is the number of times w i appears in the summary, and w 1 ...w r are all words in the summary vocabulary. Multinomial summary probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary likelihood", "sec_num": "4.2" }, { "text": "N ! n1!n2!...nr ! (pinpw1) n 1 (pinpw2) n 2 ...(pinpwr) nr (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary likelihood", "sec_num": "4.2" }, { "text": "where N = n 1 + n 2 + ... + n r is the total number of words in the summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary likelihood", "sec_num": "4.2" }, { "text": "Summarization systems that directly optimize for more topic signatures during content selection have fared very well in evaluations (Conroy et al., 2006) . Hence the number of topic signatures from the input present in a summary might be a good indicator of summary content quality. We experiment with two features that quantify the presence of topic signatures in a summary:", "cite_spans": [ { "start": 132, "end": 153, "text": "(Conroy et al., 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Use of topic words in the summary", "sec_num": "4.3" }, { "text": "1. Fraction of the summary composed of input's topic signatures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Use of topic words in the summary", "sec_num": "4.3" }, { "text": "2. Percentage of topic signatures from the input that also appear in the summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Use of topic words in the summary", "sec_num": "4.3" }, { "text": "While both features will obtain higher values for summaries containing many topic words, the first is guided simply by the presence of any topic word while the second measures the diversity of topic words used in the summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Use of topic words in the summary", "sec_num": "4.3" }, { "text": "We also evaluated the performance of a linear regression metric combining all of the above features. The value of the regression-based score for each summary was obtained using a leave-oneout approach. For a particular input and systemsummary combination, the training set consisted only of examples which included neither the same input nor the same system. 
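As an illustration of this leave-one-out protocol, the sketch below assumes scikit-learn's LinearRegression and a hypothetical table of (input id, system id, feature vector, manual score) examples; the data layout and function name are ours.

```python
# Sketch of the leave-one-out regression scoring described above.
import numpy as np
from sklearn.linear_model import LinearRegression

def regression_score(examples, test_input, test_system):
    """Predict a content score for (test_input, test_system) from a model trained
    only on examples that share neither that input nor that system."""
    train = [(feats, score) for inp, sys, feats, score in examples
             if inp != test_input and sys != test_system]
    X = np.array([feats for feats, _ in train])
    y = np.array([score for _, score in train])
    model = LinearRegression().fit(X, y)
    test_feats = next(feats for inp, sys, feats, _ in examples
                      if inp == test_input and sys == test_system)
    return model.predict(np.array([test_feats]))[0]
```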
Hence during training, no examples of either the test input or system were seen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature combination using linear regression", "sec_num": "4.4" }, { "text": "In this section, we report the correlations between system ranking using our automatic features and the manual evaluations. We studied the predictive power of features in two scenarios. MACRO LEVEL; PER SYSTEM: The values of features were computed for each summary submitted for evaluation. For each system, the feature values were averaged across all inputs. All participating systems were ranked based on the average value. Similarly, the average manual score, pyramid or responsiveness, was also computed for each system. The correlations between the two rankings are shown in Tables 2 and 4 . MICRO LEVEL; PER INPUT: The systems were ranked for each input separately, and correlations between the summary rankings for each input were computed (Table 3) . The two levels of analysis address different questions: Can we automatically identify system performance across all test inputs (macro level) and can we identify which summaries for a given input were good and which were bad (micro level)? For the first task, the answer is a definite \"yes\" while for the second task the results are mixed.", "cite_spans": [], "ref_spans": [ { "start": 580, "end": 594, "text": "Tables 2 and 4", "ref_id": "TABREF2" }, { "start": 747, "end": 756, "text": "(Table 3)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Correlations with manual evaluations", "sec_num": "5" }, { "text": "In addition, we compare our results to modelbased evaluations using ROUGE and analyze the effects of stemming the input and summary vocabularies. In order to allow for in-depth discussion, we will analyze our findings only for query focused summaries. Similar results were obtained for the evaluation of update summaries and are described in Section 7. 48 inputs. We find that both distributional similarity and the topic signature features produce system rankings very similar to those produced by humans. Summary probabilities, on the other hand, turn out to be unpredictive of content selection performance. The linear regression combination of features obtains high correlations with manual scores but does not lead to better results than the single best feature: JS divergence. JS divergence outperforms other features including the regression metric and obtains the best correlations with both types of manual scores, 0.88 with pyramid score and 0.74 with responsiveness. The regression metric performs comparably with correlations of 0.86 and 0.70. The correlations obtained by both JS divergence and the regression metric with pyramid evaluations are in fact better than that obtained by ROUGE-1 recall (0.85).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlations with manual evaluations", "sec_num": "5" }, { "text": "The best topic signature based featurepercentage of input's topic signatures that are present in the summary-ranks next only to JS divergence and regression. The correlation between this feature and pyramid and responsiveness evaluations is 0.79 and 0.62 respectively. The proportion of summary content composed of topic words performs worse as an evaluation metric with correlations 0.71 and 0.60. 
This result indicates that summaries that cover more topics from the input are judged to have better content than those in which fewer topics are mentioned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance at macro level", "sec_num": "5.1" }, { "text": "Cosine overlaps and KL divergences obtain good correlations but still lower than JS divergence or percentage of input topic words. Further, rankings based on unigram and multinomial sum-mary probabilities do not correlate significantly with manual scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance at macro level", "sec_num": "5.1" }, { "text": "On a per input basis, the proposed metrics are not that effective in distinguishing which summaries have better content. The minimum and maximum correlations with manual evaluations across the 48 inputs are given in Table 3 . The number and percentage of inputs for which correlations were significant are also reported. Now, JS divergence obtains significant correlations with pyramid scores for 73% of the inputs but for particular inputs, the correlation can be as low as 0.27. The results are worse for other features and for comparison with responsiveness scores.", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 223, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Performance on micro level", "sec_num": "5.2" }, { "text": "At the micro level, combining features with regression gives the best result overall, in contrast to the findings for the macro level setting. This result has implications for system development; no single feature can reliably predict good content for a particular input. Even a regression combination of all features is a significant predictor of content selection quality in only 77% of the cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on micro level", "sec_num": "5.2" }, { "text": "We should note however, that our features are based only on the distribution of terms in the input and therefore less likely to inform good content for all input types. For example, a set of documents each describing different opinion on a given issue will likely have less repetition on both lexical and content unit level. The predictiveness of features like ours will be limited for such inputs 4 . However, model summaries written for the specific input would give better indication of what information in the input was important and interesting. This indeed is the case as we shall see in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on micro level", "sec_num": "5.2" }, { "text": "Overall, the micro level results suggest that the fully automatic measures we examined will not be useful for providing information about summary quality for an individual input. For averages over many test sets, the fully automatic evaluations give more reliable and useful results, highly correlated with rankings produced by manual evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on micro level", "sec_num": "5.2" }, { "text": "The analysis presented so far is on features computed after stemming the input and summary words. We also computed the values of the same features without stemming and found that divergence metrics benefit greatly when stemming is done. The biggest improvements in correlations are for JS and KL divergences with respect to responsiveness. 
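The paper does not name the stemmer or stopword list that was used; purely as an illustration of the preprocessing step whose effect is analyzed in this section, the sketch below assumes NLTK's Porter stemmer and English stopword list.

```python
# Illustrative preprocessing: lowercase, tokenize, drop stopwords, optionally stem.
# Assumes NLTK (requires nltk.download("stopwords") once); the actual tools used
# by the authors are not specified in the paper.
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

STOPWORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()

def preprocess(text, stem=True):
    """Return content-word tokens of `text`, stemmed when `stem` is True."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    tokens = [t for t in tokens if t not in STOPWORDS]
    return [STEMMER.stem(t) for t in tokens] if stem else tokens
```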
For JS divergence, the correlation increases from 0.57 to 0.73 and for KL divergence (summary-input), from 0.52 to 0.69.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effects of stemming", "sec_num": "5.3" }, { "text": "Before stemming, the topic signature and bag of words overlap features are the best predictors of responsiveness (correlations are 0.63 and 0.64 respectively) but do not change much after stemming (topic overlap-0.62, bag of words-0.64). Divergences emerge as better metrics only after stemming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effects of stemming", "sec_num": "5.3" }, { "text": "Stemming also proves beneficial for the likelihood features. Before stemming, their correlations are directed in the wrong direction, but they improve after stemming to being either positive or closer to zero. However, even after stemming, summary probabilities are not good predictors of content quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effects of stemming", "sec_num": "5.3" }, { "text": "Overall, we find that correlations with pyramid scores are higher than correlations with responsiveness. Clearly our features are designed to compare input-summary content only. Since responsiveness judgements were based on both content and linguistic quality of summaries, it is not surprising that these rankings are harder to replicate using our content based features. Nevertheless, responsiveness scores are dominated by content quality and the correlation between responsiveness and JS divergence is high, 0.73. Clearly, metrics of linguistic quality should be integrated with content evaluations to allow for better predictions of responsiveness. To date, few attempts have been made to automatically evaluate linguistic quality in summarization. Lapata and Barzilay (2005) proposed a method for coherence evaluation which holds promise but has not been validated so far on large datasets such as those used in TAC and DUC. In a simpler approach, Conroy and Dang (2008) ", "cite_spans": [ { "start": 754, "end": 780, "text": "Lapata and Barzilay (2005)", "ref_id": "BIBREF4" }, { "start": 954, "end": 976, "text": "Conroy and Dang (2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Difference in correlations: pyramid and responsiveness scores", "sec_num": "5.4" }, { "text": "For manual pyramid scores, the best correlation, 0.88, we observed in our experiments was with JS divergence. This result is unexpectedly high for a fully automatic evaluation metric. Note that the best correlation between pyramid scores and ROUGE (for R2) is 0.90, practically identical with JS divergence. For ROUGE-1, the correlation is 0.85.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with ROUGE", "sec_num": "6" }, { "text": "In the case of manual responsiveness, which combines aspects of linguistic quality along with content selection evaluation, the correlation with JS divergence is 0.73. For ROUGE, it is 0.80 for R1 and 0.87 for R2. Using higher order ngrams is obviously beneficial as observed from the differences between unigram and bigram ROUGE scores. So a natural extension of our features would be to use distance between bigram distri-butions. At the same time, for responsiveness, ROUGE-1 outperforms all the fully automatic features. 
This is evidence that the model summaries provide information that is unlikely to ever be approximated by information from the input alone, regardless of feature sophistication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with ROUGE", "sec_num": "6" }, { "text": "At the micro level, ROUGE does clearly better than all the automatic measures. The results are shown in the last two rows of Table 3 . ROUGE-1 recall obtains significant correlations for over 95% of inputs for responsiveness and 98% of inputs for pyramid evaluation compared to 73% (JS divergence) and 77% (regression). Undoubtedly, at the input level, comparison with model summaries is substantially more informative.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Comparison with ROUGE", "sec_num": "6" }, { "text": "When reference summaries are available, ROUGE provides scores that agree best with human judgements. However, when model sum-maries are not available, our features can provide reliable estimates of system quality when averaged over a set of test inputs. For predictions at the level of individual inputs, our fully automatic features are less useful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with ROUGE", "sec_num": "6" }, { "text": "In Table 4 , we report the performance of our features for system evaluation on the update task. The column, \"update input only\" summarizes the correlations obtained by features comparing the summaries with only the update inputs (set B). We also compared the summaries individually to the update and background (set A) inputs. The two sets of features were then combined by a) averaging (\"avg. update and background\") and b) linear regression (last line of Table 4) .", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 458, "end": 466, "text": "Table 4)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Update Summarization", "sec_num": "7" }, { "text": "As in the case of query focused summarization, JS divergence and percentage of input topic signatures in summary are the best features for the update task as well. The overall best feature is JS divergence between the update input and the summaries-correlations of 0.82 and 0.76 with pyramid and responsiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Update Summarization", "sec_num": "7" }, { "text": "Interestingly, the features combining both update and background inputs do not lead to better correlations than those obtained using the update input only. The best performance from combined features is given by the linear regression metric. Although the correlation of this regression feature with pyramid scores (0.80) is comparable to JS divergence with update inputs, its correlation with responsiveness (0.67) is clearly lower. These results show that the term distributions in the update input are sufficiently good predictors of content for update summaries. The role of the background input appears to be negligable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Update Summarization", "sec_num": "7" }, { "text": "We have presented a successful framework for model-free evaluations of content which uses the input as reference. 
The power of model-free evaluations generalizes across at least two summarization tasks: query focused and update summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "We have analyzed a variety of features for inputsummary comparison and demonstrated that the strength of different features varies considerably. Similar term distributions in the input and the summary and diverse use of topic signatures in the summary are highly indicative of good content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "We also find that preprocessing like stemming improves the performance of KL and JS divergence features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "Very good results were obtained from a correlation analysis with human judgements, showing that input can indeed substitute for model summaries and manual efforts in summary evaluation. The best correlations were obtained by a single feature, JS divergence (0.88 with pyramid scores and 0.73 with responsiveness at system level).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "Our best features can therefore be used to evaluate the content selection performance of systems in a new domain where model summaries are unavailable. However, like all other content evaluation metrics, our features must be accompanied by judgements of linguistic quality to obtain wholesome indicators of summary quality and system performance. Evidence for this need is provided by the lower correlations with responsiveness than the content-only pyramid evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "The results of our analysis zero in on JS divergence and topic signature as desirable objectives to optimize during content selection. On the macro level, they are powerful predictors of content quality. These findings again emphasize the need for always including linguistic quality as a component of evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "Observations from our input-based evaluation also have important implications for the design of novel summarization tasks. We find that high correlations with manual evaluations are obtained by comparing query-focused summaries with the entire input and making no use of the query at all. Similarly in the update summarization task, the best predictions of content for update summaries were obtained using only the update input. The uncertain role of background inputs and queries expose possible problems with the task designs. 
Under such conditions, it is not clear if queryfocused content selection or ability to compile updates are appropriately captured by any evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "They used cosine similarity to perform the inputsummary comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.nist.gov/tac", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The scores were computed after stemming but stop words were retained in the summaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In fact, it would be surprising to find an automatically computable feature or feature combination which would be able to consistently predict good content for all individual inputs. If such features existed, an ideal summarization system would already exist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mind the gap: Dangers of divorcing evaluations of summary content from linguistic quality", "authors": [ { "first": "J", "middle": [], "last": "Conroy", "suffix": "" }, { "first": "H", "middle": [], "last": "Dang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Conroy and H. Dang. 2008. Mind the gap: Dangers of divorcing evaluations of summary content from linguistic quality. In Proceedings of the 22nd Inter- national Conference on Computational Linguistics (Coling 2008), pages 145-152.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Topic-focused multi-document summarization using an approximate oracle score", "authors": [ { "first": "J", "middle": [], "last": "Conroy", "suffix": "" }, { "first": "J", "middle": [], "last": "Schlesinger", "suffix": "" }, { "first": "D", "middle": [], "last": "O'leary", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Conroy, J. Schlesinger, and D. O'Leary. 2006. Topic-focused multi-document summarization using an approximate oracle score. In Proceedings of ACL, short paper.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A comparison of rankings produced by summarization evaluation measures", "authors": [ { "first": "R", "middle": [], "last": "Donaway", "suffix": "" }, { "first": "K", "middle": [], "last": "Drummey", "suffix": "" }, { "first": "L", "middle": [], "last": "Mather", "suffix": "" } ], "year": 2000, "venue": "NAACL-ANLP Workshop on Automatic Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Donaway, K. Drummey, and L. Mather. 2000. A comparison of rankings produced by summarization evaluation measures. 
In NAACL-ANLP Workshop on Automatic Summarization.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Summarization evaluation methods: Experiments and analysis", "authors": [ { "first": "H", "middle": [], "last": "Jing", "suffix": "" }, { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "M", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 1998, "venue": "AAAI Symposium on Intelligent Summarization", "volume": "", "issue": "", "pages": "60--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Jing, R. Barzilay, K. Mckeown, and M. Elhadad. 1998. Summarization evaluation methods: Experi- ments and analysis. In In AAAI Symposium on Intel- ligent Summarization, pages 60-68.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic evaluation of text coherence: Models and representations", "authors": [ { "first": "M", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2005, "venue": "IJCAI'05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Lapata and R. Barzilay. 2005. Automatic evalua- tion of text coherence: Models and representations. In IJCAI'05.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The automated acquisition of topic signatures for text summarization", "authors": [ { "first": "C", "middle": [], "last": "Lin", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th conference on Computational linguistics", "volume": "", "issue": "", "pages": "495--501", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Lin and E. Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Pro- ceedings of the 18th conference on Computational linguistics, pages 495-501.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic evaluation of summaries using n-gram co-occurance statistics", "authors": [ { "first": "C", "middle": [], "last": "Lin", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Lin and E. Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurance statistics. In Proceedings of HLT-NAACL 2003.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An information-theoretic approach to automatic evaluation of summaries", "authors": [ { "first": "C", "middle": [], "last": "Lin", "suffix": "" }, { "first": "G", "middle": [], "last": "Cao", "suffix": "" }, { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "J", "middle": [], "last": "Nie", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", "volume": "", "issue": "", "pages": "463--470", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Lin, G. Cao, J. Gao, and J. Nie. 2006. An information-theoretic approach to automatic evalu- ation of summaries. 
In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 463-470.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "ROUGE: a package for automatic evaluation of summaries", "authors": [ { "first": "C", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "ACL Text Summarization Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Lin. 2004. ROUGE: a package for automatic eval- uation of summaries. In ACL Text Summarization Workshop.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Columbia multi-document summarization: Approach and evaluation", "authors": [ { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "D", "middle": [], "last": "Evans", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "B", "middle": [], "last": "Schiffman", "suffix": "" }, { "first": "S", "middle": [], "last": "Teufel", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. McKeown, R. Barzilay, D. Evans, V. Hatzivas- siloglou, B. Schiffman, and S. Teufel. 2001. Columbia multi-document summarization: Ap- proach and evaluation. In DUC'01.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Evaluating content selection in summarization: The pyramid method", "authors": [ { "first": "A", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "R", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2004, "venue": "HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Nenkova and R. Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In HLT/NAACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The pyramid method: Incorporating human content selection variation in summarization evaluation", "authors": [ { "first": "A", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "R", "middle": [], "last": "Passonneau", "suffix": "" }, { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2007, "venue": "ACM Trans. Speech Lang. Process", "volume": "4", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Nenkova, R. Passonneau, and K. McKeown. 2007. The pyramid method: Incorporating human con- tent selection variation in summarization evaluation. ACM Trans. Speech Lang. Process., 4(2):4.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Duc in context", "authors": [ { "first": "P", "middle": [], "last": "Over", "suffix": "" }, { "first": "H", "middle": [], "last": "Dang", "suffix": "" }, { "first": "D", "middle": [], "last": "Harman", "suffix": "" } ], "year": 2007, "venue": "Inf. Process. Manage", "volume": "43", "issue": "6", "pages": "1506--1520", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Over, H. Dang, and D. Harman. 2007. Duc in con- text. Inf. Process. 
Manage., 43(6):1506-1520.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Evaluation challenges in large-scale multi-document summarization: the mead project", "authors": [ { "first": "D", "middle": [], "last": "Radev", "suffix": "" }, { "first": "S", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "H", "middle": [], "last": "Saggion", "suffix": "" }, { "first": "W", "middle": [], "last": "Lam", "suffix": "" }, { "first": "J", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "H", "middle": [], "last": "Qi", "suffix": "" }, { "first": "A", "middle": [], "last": "Elebi", "suffix": "" }, { "first": "D", "middle": [], "last": "Liu", "suffix": "" }, { "first": "E", "middle": [], "last": "Drabek", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL 2003", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Radev, S. Teufel, H. Saggion, W. Lam, J. Blitzer, H. Qi, A. \u00c7 elebi, D. Liu, and E. Drabek. 2003. Evaluation challenges in large-scale multi-document summarization: the mead project. In Proceedings of ACL 2003, Sapporo, Japan.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The formation of abstracts by the selection of sentences: Part 1: sentence selection by man and machines", "authors": [ { "first": "G", "middle": [ "J" ], "last": "Rath", "suffix": "" }, { "first": "A", "middle": [], "last": "Resnick", "suffix": "" }, { "first": "R", "middle": [], "last": "Savage", "suffix": "" } ], "year": 1961, "venue": "", "volume": "2", "issue": "", "pages": "139--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. J. Rath, A. Resnick, and R. Savage. 1961. The formation of abstracts by the selection of sentences: Part 1: sentence selection by man and machines. American Documentation, 2(12):139-208.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Novelty detection: the trec experience", "authors": [ { "first": "I", "middle": [], "last": "Soboroff", "suffix": "" }, { "first": "D", "middle": [], "last": "Harman", "suffix": "" } ], "year": 2005, "venue": "HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "105--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Soboroff and D. Harman. 2005. Novelty detec- tion: the trec experience. In HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Pro- cessing, pages 105-112.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "content": "
                          ROUGE-1 recall   ROUGE-2 recall
Query focused summaries
pyramid score             0.859            0.905
responsiveness            0.806            0.873
Update summaries
pyramid score             0.912            0.941
responsiveness            0.865            0.884
", "num": null, "html": null, "text": "" }, "TABREF1": { "type_str": "table", "content": "
Table 1: Spearman correlation between manual scores and ROUGE-1 and ROUGE-2 recall. All correlations are highly significant with p-value < 0.00001.
", "num": null, "html": null, "text": "" }, "TABREF2": { "type_str": "table", "content": "
Features                  pyramid   respons.
JS div                    -0.880    -0.736
JS div smoothed           -0.874    -0.737
% of input topic words    0.795     0.627
KL div summ-inp           -0.763    -0.694
cosine overlap            0.712     0.647
% of summ = topic wd      0.712     0.602
topic overlap             0.699     0.629
KL div inp-summ           -0.688    -0.585
mult. summary prob.       0.222     0.235
unigram summary prob.     -0.188    -0.101
regression                0.867     0.705
ROUGE-1 recall            0.859     0.806
ROUGE-2 recall            0.905     0.873
", "num": null, "html": null, "text": "shows the Spearman correlation between manual and automatic scores averaged across the" }, "TABREF3": { "type_str": "table", "content": "
Table 2: Spearman correlation on macro level for the query focused task. All results are highly significant with p-values < 0.000001 except unigram and multinomial summary probability, which are not significant even at the 0.05 level.
", "num": null, "html": null, "text": "" }, "TABREF5": { "type_str": "table", "content": "
                          update input only        avg. update & background
features                  pyramid   respons.       pyramid   respons.
JS div                    -0.827    -0.764         -0.716    -0.669
JS div smoothed           -0.825    -0.764         -0.713    -0.670
% of input topic words    0.770     0.709          0.677     0.616
KL div summ-inp           -0.749    -0.709         -0.651    -0.624
KL div inp-summ           -0.741    -0.717         -0.644    -0.638
cosine overlap            0.727     0.691          0.649     0.631
% of summary = topic wd   0.721     0.707          0.647     0.636
topic overlap             0.707     0.674          0.645     0.619
mult. summary prob.       0.284     0.355          0.152     0.224
unigram summary prob.     -0.093    0.038          -0.151    -0.053
regression                0.789     0.605          0.699     0.522
ROUGE-1 recall            0.912     0.865          .         .
ROUGE-2 recall            0.941     0.884          .         .
regression combining features comparing with background and update inputs (without averaging)
correlations = 0.8058 with pyramid, 0.6729 with responsiveness
", "num": null, "html": null, "text": "Spearman correlations at micro level (query focused task). Only the minimum, maximum values of the significant correlations are reported together with the number and percentage of significant correlations." }, "TABREF6": { "type_str": "table", "content": "", "num": null, "html": null, "text": "Spearman correlations at macro level for update summarization. Results are reported separately for features comparing update summaries with the update input only or with both update and background inputs and averaging the two." } } } }